
Algorithmic warfare: AI's promise & boundaries



RISK ASSESSMENT: AI IN MILITARY APPLICATIONS

As far as risk mitigation is concerned, there is no one-size-fits-all solution.

Lt Gen RS Panwar (Retired), former National Cyber Security Coordinator, India

Artificial intelligence as a term eludes precise definition. The general agreement, however, is that it covers a broad spectrum of technologies. Foremost among these are neural network-based AI/ML techniques; others include rule-based programming used in expert systems, statistical techniques, and so on.

Some characteristics are inherent to the technology itself: the opaqueness of AI/ML systems, their brittleness, and the unpredictability that arises when a system learns directly from data while in operation, which we call online learning. And, of course, there is the intelligence that AI/ML and other AI techniques endow on machines.

It is this intelligence endowed on machines that is of concern, because it leads to handing over more and more autonomy to machines. This is what people like Elon Musk, Stephen Hawking and Sam Altman point to when they speak of an existential risk from AI technology. When these technologies are used to create applications, different risks emerge. The EU AI Act has tried to, firstly, list all these technologies and, secondly, come up with a risk-based approach for addressing civilian applications. In the civilian domain, AI-related concerns are often driven by human rights issues such as racial and gender bias and the right to privacy.

Understanding AI Risks

Risks in AI-enabled military systems emerge from the twin considerations of adherence to international humanitarian law and reliable performance on the battlefield. As far as risk mitigation is concerned, there is no one-size-fits-all solution. Therefore, there is a need to adopt a risk-based approach.

As noted earlier, the EU AI Act has adopted a risk-based approach for civilian applications. The risk posed by the wide spectrum of AI-powered military systems, comprising weapon systems as well as other military applications, can likewise be evaluated and addressed at a more granular level. Towards this end, a five-level hierarchy is proposed, based on an intuitively appealing rationale: the top three levels pertain to weapon systems, and the bottom two to AI military applications collectively referred to as non-weapon systems or decision support systems. A simple yet effective mechanism then maps real-world military systems and applications into these risk classes, each of which can be subjected to mitigation measures tailored to it. The hierarchy evaluates risk against criteria that reflect the driving concerns of battlefield reliability and international humanitarian law: the level of autonomy, whether the system is lethal or anti-materiel, its destructive potential, and so on.
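To make the idea concrete, such a hierarchy and its mapping criteria can be sketched as a simple classification routine. The sketch below is purely illustrative and is not drawn from the article: the level names, attribute fields and decision rules are assumptions chosen only to mirror the criteria mentioned above (autonomy, lethality, destructive potential, online learning).

```python
from dataclasses import dataclass
from enum import IntEnum


class RiskLevel(IntEnum):
    # Illustrative five-level hierarchy: top three levels for weapon systems,
    # bottom two for non-weapon (decision support) applications.
    UNACCEPTABLE = 5   # e.g. fully autonomous nuclear or online-learning weapons
    HIGH = 4           # fully autonomous lethal weapon systems
    MODERATE = 3       # defensive anti-materiel or semi-autonomous weapons
    LIMITED = 2        # decision support systems that feed targeting decisions
    MINIMAL = 1        # other decision support / logistics applications


@dataclass
class MilitarySystem:
    """Attributes reflecting the evaluation criteria (hypothetical field names)."""
    name: str
    is_weapon: bool
    fully_autonomous: bool
    lethal: bool                  # anti-personnel (lethal) vs anti-materiel
    online_learning: bool         # keeps learning while deployed
    destructive_potential: str    # "conventional" or "nuclear"
    informs_targeting: bool = False  # only relevant for non-weapon systems


def classify(system: MilitarySystem) -> RiskLevel:
    """Map a system to a risk class using the criteria named in the article:
    autonomy, lethality, destructive potential and online learning."""
    if system.is_weapon and system.fully_autonomous:
        if system.destructive_potential == "nuclear" or system.online_learning:
            return RiskLevel.UNACCEPTABLE
        if system.lethal:
            return RiskLevel.HIGH
        return RiskLevel.MODERATE      # defensive, anti-materiel
    if system.is_weapon:
        return RiskLevel.MODERATE      # semi-autonomous, under full human control
    return RiskLevel.LIMITED if system.informs_targeting else RiskLevel.MINIMAL
```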

Risk Evaluation Parameters

Fully autonomous lethal weapon systems are placed at the high-risk level. Fully autonomous weapons are those capable of selecting and engaging targets without human intervention. The Israeli Harop is a good example of such a system: a loitering kamikaze drone that hovers over the area of operations and, when it detects the right radar signal, dives down and destroys the target. The combined effect of its two features, autonomy and lethality, places all systems of this kind in the high-risk category.

A fully autonomous nuclear weapon system is an obvious candidate for the unacceptable risk level at the top of the hierarchy because of its extreme destructive potential. Implied in such a categorization is that such a system should not be developed at all, since it poses an unacceptable risk.

Another class of systems that the risk hierarchy places at this level is fully autonomous online learning weapon systems: weapons that might continue to learn while in operation and, as a result, display unpredictable and unreliable behaviour. As yet, such systems are not known to exist.

At the lower end of the weapon-system spectrum in the risk hierarchy are defensive anti-materiel systems: firstly, they are defensive, and secondly, they are anti-materiel rather than lethal. These may be fully autonomous, such as the U.S. Phalanx close-in weapon system or the Israeli Iron Dome, both designed to destroy incoming rockets and missiles; yet because of their design and the nature of their employment, they cannot cause unintended harm to human lives. All semi-autonomous weapons would also fall in this category, since they are always under full human control.
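Continuing the same illustrative sketch (and reusing its MilitarySystem, classify and RiskLevel definitions), the example systems discussed above might map into the hierarchy as follows; the attribute values assigned to each system are assumptions made for illustration only.

```python
# Illustrative mapping of the systems discussed above (attribute values assumed).
harop = MilitarySystem(
    name="IAI Harop", is_weapon=True, fully_autonomous=True,
    lethal=True, online_learning=False, destructive_potential="conventional",
)
phalanx = MilitarySystem(
    name="Phalanx CIWS", is_weapon=True, fully_autonomous=True,
    lethal=False, online_learning=False, destructive_potential="conventional",
)
autonomous_nuclear = MilitarySystem(
    name="Hypothetical autonomous nuclear system", is_weapon=True,
    fully_autonomous=True, lethal=True, online_learning=False,
    destructive_potential="nuclear",
)

for s in (harop, phalanx, autonomous_nuclear):
    print(f"{s.name}: {classify(s).name}")
# IAI Harop: HIGH
# Phalanx CIWS: MODERATE
# Hypothetical autonomous nuclear system: UNACCEPTABLE
```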

It does not make sense to paint the full gamut of AI-enabled military systems, which pose widely different risks, with a common brush by instituting identical mitigation measures for all. What is needed is a differentiated risk mitigation mechanism, evolved and fine-tuned for each risk level. Measures could range from a complete ban at one end of the spectrum, through varying levels of scrutiny, legal review and test and evaluation procedures, to the mandatory use of explainable AI.
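One possible way to express such a differentiated mechanism, again an assumption rather than anything prescribed in the text, is a simple lookup from risk class to the mitigation measures named above, reusing the RiskLevel enumeration and classify function from the earlier sketch.

```python
# Hypothetical mapping of risk classes to the mitigation measures mentioned above.
MITIGATIONS = {
    RiskLevel.UNACCEPTABLE: ["complete ban on development and deployment"],
    RiskLevel.HIGH: ["legal review", "rigorous test and evaluation",
                     "mandatory use of explainable AI"],
    RiskLevel.MODERATE: ["legal review", "standard test and evaluation"],
    RiskLevel.LIMITED: ["design-stage scrutiny", "bias and reliability audits"],
    RiskLevel.MINIMAL: ["routine software assurance"],
}

print(MITIGATIONS[classify(harop)])
# ['legal review', 'rigorous test and evaluation', 'mandatory use of explainable AI']
```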

