*Symbolic image of an AI chatbot. Photo: Reuters*
Arnold Schwarzenegger starred in the 1984 science fiction film 'The Terminator'. One of his famous lines in the popular film was 'I'll be back.' He was not joking. In a 2023 interview, the actor and former governor of California said that the vision of autonomous weapons shown in the film has now become a reality. The film's director, James Cameron, put it more bluntly: 'I warned you in 1984, but you didn't listen.'
In the film's story, humans design artificial intelligence weapons for the military of the future. At some point the system slips out of control and turns rogue, threatening humanity with destruction. In reality, that has not happened. But four decades have passed since the release of 'The Terminator', and the battlefield has changed profoundly in that time. AI-enabled weapons have been actively deployed in conflicts such as those in Ukraine and Gaza, and if the United States and China ever go to war, artificial intelligence would play a decisive role.
'The Terminator' warned that machines cannot be trusted with decisions as grave as whom to shoot and when, a warning now ingrained in the collective human psyche. The real danger, however, is that humans will have little or no control over these machines. Illusions persist on this point, which is why democratic governments, militaries, and societies take false comfort in the idea that they can simply design better systems.
When it comes to human control of AI-enabled weapons systems, however, the conventional wisdom seems clear. US government policy, for example, requires that lethal autonomous weapons be designed to allow appropriate human judgment, something senior officials in the country emphasize constantly.
*A person sitting on the floor wearing AI goggles. Photo: Pexels*
The United Nations has called for a ban on fully autonomous weapons and proposed an international law on the subject, one that would require humans to remain involved in any such system. Non-governmental organizations such as Stop Killer Robots, the Future of Life Institute, and Amnesty International likewise advocate human control over automated weapons.
It is comforting to imagine that humans will stop mindless algorithms from killing indiscriminately, but this consensus is out of step with technological reality. The AI models that drive contemporary autonomous weapons are so advanced that even highly trained operators struggle to supervise them. And if autonomous weapons are deployed at scale, the sheer volume of data, the speed of action, and the complexity involved could make human control virtually impossible.
Supervising an AI system's analysis and actions is challenging even in normal circumstances. In wartime it is harder still, amid severe fatigue, limited time and manpower, and communication barriers between units and higher command. Rather than indulge the illusion that humans will be able to control autonomous weapons in battle, militaries should start building confidence in the AI models behind those weapons now.
Military Competition Is Growing
Military competition between the United States and China is growing, and with it, the expansion and deployment of automated weapons is becoming inevitable. The Russia-Ukraine war has already pushed both countries toward that kind of fight. Meanwhile, the US government has begun using artificial intelligence on a large scale for security purposes, including intelligence analysis, biosecurity, and cybersecurity.
China has long invested in capabilities meant to offset US power in East Asia, developing more affordable weapons systems such as anti-ship ballistic missiles and diesel-electric submarines that can counter large US military platforms. Partly as a result, the US has recently backed away from deploying large warships in the region, since doing so is no longer sustainable. China's approach is known as a strategy of 'denial'.
As a new initiative to maintain its dominance, the US is deploying unmanned systems to thwart China's 'denial' efforts in East Asia, a strategy built on automation, AI, and drones. According to the US Department of Defense, every sensor and weapon in the program will collect information from its surroundings, and that information will be combined into a 'data fabric'.
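To make the idea of a 'data fabric' concrete in the abstract, here is a minimal sketch of how readings from scattered sensors might be merged into one shared picture. Everything in it, the class names, fields, and fusion rule, is invented for illustration and implies nothing about how the Department of Defense's actual system is built.

```python
from dataclasses import dataclass
import time

# Hypothetical sketch: every name and rule here is invented for
# illustration and does not describe any real military system.

@dataclass
class SensorReading:
    sensor_id: str      # which platform reported (drone, ship, radar, ...)
    target_id: str      # shared identifier for the detected object
    position: tuple     # (latitude, longitude) of the detection
    timestamp: float    # when the reading was taken

class DataFabric:
    """Merges readings from many sensors into one shared track picture."""

    def __init__(self):
        self.tracks = {}  # target_id -> freshest SensorReading

    def ingest(self, reading: SensorReading) -> None:
        # Keep only the newest reading per target. A real fusion system
        # would correlate tracks, estimate motion, and weigh sensor quality.
        current = self.tracks.get(reading.target_id)
        if current is None or reading.timestamp > current.timestamp:
            self.tracks[reading.target_id] = reading

    def common_picture(self) -> list:
        # Every connected sensor and weapon reads the same fused view.
        return sorted(self.tracks.values(), key=lambda r: r.target_id)

fabric = DataFabric()
fabric.ingest(SensorReading("drone-07", "track-A", (24.5, 122.0), time.time()))
fabric.ingest(SensorReading("ship-02", "track-A", (24.6, 122.1), time.time() + 5))
for track in fabric.common_picture():
    print(track.sensor_id, track.target_id, track.position)
```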
Unmanned systems will play the most important role in this effort. On the ground, they can carry out lethal strikes while, in principle, limiting civilian deaths; in the air, they can maneuver with ease in any direction.
With modern warfare in mind, the United States, China, and Russia have accordingly spent billions of dollars on such defense systems.
Strategy and command are not the only considerations in war. From an ethical perspective, many observers fear that unsupervised, mindless machines could violate long-standing principles of warfare, and that if the data used to train such weapons is biased, vulnerable people could become their victims. There is also the risk that the devices could be hacked or stolen and misused.
Short-sighted choices can also backfire strategically. Critics of automated weapons argue that humans bring wider context to their decisions, making them better at managing chaos, whereas machines can only follow a script. They point to well-known computer errors and conclude that humans make fewer mistakes. But humans make mistakes too, and from both an ethical and a practical standpoint, even the most advanced automated systems, AI included, will err. The reality is that AI has already advanced to a point where human control is minimal. The persistent belief that humans will nonetheless control it only deepens the risk.
Experts believe that future wars will be AI-driven, fast, and saturated with information. Because AI-enabled weapons such as drone swarms can be deployed quickly and at scale, humans will have neither the time nor the knowledge to evaluate that information independently. Israel's use of AI during the Gaza war is one example.
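A rough back-of-envelope calculation shows why time runs out at swarm scale. All of the numbers below are hypothetical, chosen only to illustrate that the human review burden grows with the size of a swarm while the engagement window does not.

```python
# Back-of-envelope illustration with invented numbers: why human review
# cannot keep pace with a swarm-scale engagement.

swarm_size = 1_000        # hypothetical number of incoming armed drones
review_seconds = 10       # assumed time for one human to vet one target
engagement_window = 60    # assumed seconds before the swarm reaches its targets

operators_needed = swarm_size * review_seconds / engagement_window
print(f"Operators needed to vet every target in time: {operators_needed:.0f}")
# -> about 167 operators, each making a life-and-death call every ten
#    seconds for a full minute; the review burden scales with swarm size.
```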
Policymakers and military leaders must therefore take appropriate measures now. Some senior US defense officials have argued that AI should be limited to giving advice rather than taking direct action.
Others say these weapons should be allowed only for self-defense. Either way, governments must recognize that even the most sophisticated AI systems will not always be accurate, and should place far more emphasis on their ethical use.

