
The Rise of Machines!

The loss of human control and judgment in the use of lethal force by LAWs raises serious concern from the humanitarian, legal and ethical perspectives.

At the 2015 International Joint Conference on Artificial Intelligence, held in Buenos Aires, over 1,000 experts (including physicist Stephen Hawking and Elon Musk) signed an open letter with a grim warning, claiming that the technology could trigger a "third revolution in warfare" after gunpowder and the nuclear bomb. The experts claimed that lethal autonomous weapons (LAWs) could be deployable within a few years. The letter stated that "autonomous weapons select and engage targets without human intervention" and that they were ideal for tasks such as "assassinations, destabilising nations, subduing populations and selectively killing a particular ethnic group." Anticipating the rise of machines, the experts warned that it was only a matter of time before "intelligence broke free of biological bonds."

Professor Hawking is on record warning that the development of AI could mean the end of the human race. Speaking to the BBC in 2014, he said, "Once humans develop artificial intelligence, it would take off on its own and re-design itself at an ever-increasing rate […] Humans, who are limited by slow biological evolution, couldn't compete and would be superseded."

"A huge part of the issue is not the technology itself; it's how militaries use that technology," says Zak Kallenborn, a security analyst at the Centre for Strategic and International Studies in Washington, DC.

The Evolution of LAWs

A 2022 analysis revealed a lack of consensus on the definition of lethal autonomous weapons systems (LAWs), with varying definitions from countries and organisations such as NATO. Some definitions require only that LAWs understand higher-level intent, while others demand autonomous learning and self-awareness, capabilities most researchers believe are far beyond what is possible with AI today. Slaughterbots, also called 'lethal autonomous weapons systems' or 'killer robots', are weapons that use Artificial Intelligence (AI) to identify, select and kill human targets without human intervention.

This desire to develop ever-increasing levels of autonomy in military weapons technology is by no means a modern trend but has historical roots. Scholars and experts in the fields of cybernetics and AI flagged important concerns regarding increased machine autonomy in military operations as early as the 1950s and 1960s. With machine learning and computer processing power making great strides, integrating AI into military systems is likely to accelerate the shift toward increased and more complex forms of autonomy in the near future.

Most cases involve an AI system component that evaluates sensor data, produces an indicator for the controller and stores that data for future reference. Ongoing U.S. Department of Defense programmes envision AI playing a crucial role in accelerating the identification and tracking of targets, leaving the human operator with a limited set of possible courses of action for a potentially lethal decision. The ethical and legal implications of such autonomous weapons systems (AWS) gained prominence in 2012, when the Department published guidelines for their development and use.
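To make that human-in-the-loop arrangement concrete, the sketch below shows, in Python, one way a system might nominate targets from sensor data while leaving every engagement decision to an operator. It is a minimal illustration only: the class names, the 0.9 confidence threshold and the console-based approval step are assumptions made for this example and are not drawn from any actual defence programme.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Detection:
    """A hypothetical sensor detection with a model confidence score."""
    track_id: str
    label: str
    confidence: float

def nominate_targets(detections: List[Detection],
                     threshold: float = 0.9) -> Tuple[List[Detection], List[Detection]]:
    """Filter detections the model is confident about and queue them for a human.

    The software only nominates; the lethal decision stays with the operator.
    """
    nominations = [d for d in detections if d.confidence >= threshold]
    audit_log = list(detections)  # every detection is retained for after-action review
    return nominations, audit_log

def operator_review(nominations: List[Detection]) -> List[Detection]:
    """Human-in-the-loop step: nothing is engaged without explicit approval."""
    approved = []
    for d in nominations:
        answer = input(f"Engage {d.track_id} ({d.label}, {d.confidence:.2f})? [y/N] ")
        if answer.strip().lower() == "y":
            approved.append(d)
    return approved

if __name__ == "__main__":
    sample = [Detection("T-01", "vehicle", 0.97), Detection("T-02", "unknown", 0.55)]
    shortlist, log = nominate_targets(sample)
    print(f"{len(shortlist)} of {len(log)} detections passed the confidence threshold")
```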

As per information available in the public domain, AI focuses on designing intelligent agents that perceive their environment and take actions to improve their chances of success on the battleground. AI automates learning, decision-making and problem-solving, aiding fields such as planning, natural language processing, robotics, computer vision and speech recognition. Research has documented the broader current and potential uses of AI within the military, which include helping, and even enabling, cyberspace operations; logistics planning; intelligence, surveillance and reconnaissance (ISR); and data analysis.

AI technologies have made rapid inroads into various domains, including governance systems, buildings, transportation and grid systems. These technologies collect large amounts of significant data, making them useful in cybersecurity. However, the malicious use of AI is reshaping the threat landscape, with implications for physical, digital and political security. It could amplify existing threats, introduce new ones and change the character of threats, particularly in cyberspace, where attacks are easy, inexpensive and secretive. In cyberspace, the fifth domain of warfare, AI is being used for autonomous military systems, enhancing covert offensive capabilities. This AI arms race, backed by multi-billion-dollar investments, is escalating tensions between nations.

AI-Assisted Weapons

Advancements in AI and cyberspace have raised concerns about national security, as a growing number of states acquire these technologies in an attempt to enhance their military capabilities. Leading military powers are researching AI applications for command and control, intelligence collection, logistics, and semi-autonomous weapons platforms. AI-enabled offensive weapon systems entered the cyber domain as the US came to treat cyberspace as the fifth domain of warfare, which NATO formally recognised as an operational domain in 2016.

States are integrating unmanned platforms into drone swarms, with Israel being the first to deploy a swarm in combat. Drone swarms communicate and collaborate, forming a single weapons platform. Today, countries are developing drone swarms for various applications, including anti-submarine warfare, anti-aircraft warfare, and anti-terrorist operations.

AI weapons use machine learning to process stimuli; they are becoming more capable, but their dependence on training data makes them brittle. As autonomous weapons proliferate, arms control advocates fear a higher likelihood of catastrophic errors, even where states adopt robust verification programmes. However, many militaries value autonomous weapons for their speed, allegedly lower error rates and potential for defending against drone swarms. Proponents argue that AI applications can improve aiming and reduce collateral harm, making their adoption a moral imperative.
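A toy numerical illustration of that brittleness, using a deliberately simple nearest-centroid classifier rather than any real targeting model; the data, the "sensor conditions" and the shift are synthetic assumptions for this sketch. The classifier performs well when test conditions match its training data and degrades sharply when the same class appears under shifted conditions.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Training" data: two classes observed only under narrow, clean conditions.
class_a = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(200, 2))
class_b = rng.normal(loc=[3.0, 3.0], scale=0.3, size=(200, 2))
centroids = np.array([class_a.mean(axis=0), class_b.mean(axis=0)])

def classify(points):
    """Nearest-centroid classifier: a stand-in for a learned model."""
    dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1)

# In-distribution test: conditions match training, so accuracy is high.
test_a = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(100, 2))
print("in-distribution accuracy:", (classify(test_a) == 0).mean())

# Distribution shift: the same class seen under new conditions (different sensor,
# weather, camouflage) drifts toward the other centroid and is misclassified.
shifted_a = rng.normal(loc=[2.2, 2.2], scale=0.3, size=(100, 2))
print("shifted accuracy:", (classify(shifted_a) == 0).mean())
```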

AI and Cyberwarfare

AI's potential to enhance cyber warfare capabilities has raised concerns among policymakers and academicians. Rapid advances in AI and increasing military autonomy could amplify future cyberattacks. Cyberattacks have recently become more common and sophisticated, targeting governments, critical infrastructure, private corporations, and non-profit organisations. Malicious actors are constantly developing new techniques and utilising AI to create more destructive forms of attack. Adapting AI capabilities to existing cyber warfare tools could enhance their effectiveness and efficiency.

As expected, this has given rise to AI-assisted cyber defence, too. AI-enabled cyber technology can be used for both offensive and defensive purposes, and future attackers are likely to use it to mount advanced and complex threats. AI's ability to learn and adapt will enable highly customised, human-impersonating attacks, making future attacks more penetrating. Therefore, analysing offensive AI in cyberspace is crucial for understanding AI-enabled cyber threats. AI-enabled cyber defence mechanisms are becoming increasingly important as AI-powered weapons are developed and deployed.

Active Cyber Defence (ACD) is gaining prominence among policymakers and practitioners because it responds to cyber offences quickly and effectively. Relying only on passive defences to protect cyber assets from the full range of threats is inadequate. It is vital to implement agile cyber defences and responses that can keep pace with network activity, pre-empting attacks before they are operationalised. Governments and companies are increasingly considering ACD for its strategic benefits.
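As a rough illustration of the difference between passive and active postures, the sketch below merely logs failed logins (passive monitoring) until a threshold is crossed, at which point it blocks the offending address automatically (an active response). The thresholds, the login scenario and the blocking logic are hypothetical assumptions for this example and are not drawn from any specific ACD framework.

```python
import collections
import time
from typing import Optional

# Hypothetical thresholds; a real deployment would tune these and add safeguards
# (allow-lists, human escalation, automatic expiry of blocks, and so on).
FAILED_LOGIN_LIMIT = 5
WINDOW_SECONDS = 60

failed_attempts = collections.defaultdict(list)
blocked_ips = set()

def record_failed_login(source_ip: str, now: Optional[float] = None) -> None:
    """Passive part: observe and log the event within a sliding time window."""
    now = time.time() if now is None else now
    recent = [t for t in failed_attempts[source_ip] if now - t <= WINDOW_SECONDS]
    recent.append(now)
    failed_attempts[source_ip] = recent

    # Active part: respond automatically, before the attack is operationalised.
    if len(recent) >= FAILED_LOGIN_LIMIT and source_ip not in blocked_ips:
        blocked_ips.add(source_ip)
        print(f"blocking {source_ip}: {len(recent)} failures within {WINDOW_SECONDS}s")

# Usage sketch with a simulated burst of failed logins from one address.
for i in range(6):
    record_failed_login("203.0.113.7", now=1000.0 + i)
```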

A Global Trend

According to the United Nations Institute for Disarmament Research (UNIDIR), a growing number of countries, including the U.S., the UK, China and Russia, develop, produce or use military systems with varying degrees of autonomy, including lethal ones.

Reportedly, Israel is one of the front runners in the race for AI-assisted weapons. Media reports indicate that in the ongoing war in Gaza, the Israeli Defence Forces may be resorting to AI for target selection and engagement. Israel has been developing an AI-based system called Habsora (the Gospel) to select targets rapidly. The Gospel produces targeting recommendations for human analysts, the aim being to align the machine's suggestions with a human-driven identification process. Aviv Kochavi, who served as the head of the IDF until January 2023, has been quoted as saying that in Israel's 11-day war with Hamas in May 2021, the system generated 100 targets a day. "To put that into perspective, in the past, we would produce 50 targets in Gaza per year. Now, this machine produces 100 targets daily, with 50 per cent being attacked."

However, implementation of such systems is still in its infancy, ranging from the simple and brittle to the robust and versatile. Truly resilient systems should be able to manage the intricacies of a multidimensional battlespace.

Chinese strategists claim that artificial intelligence's value for decision-making will turn future warfare into a competition over which state can field computers with the fastest computing capacity. They claim that wartime commanders will be armed with supercomputers that will come to surpass the decision-making capabilities of the humans directing them, what the Chinese call 'Algorithmic Warfare'. The same strategists predict that frontline combatants will gradually be phased out and replaced with intelligent swarms of drones that will give operational-level commanders complete control over the battlefield. They expect that, over time, the tactical level of warfare will largely become a competition between robots and, therefore, in some ways, a game.

The Perils of LAWs

An AI system represents and operates on the world along a logic that accepts that it cannot fully comprehend and calculate the 'real world'. This results in abstraction, truncation and rendering, which involve "approximation, biases, errors, fallacies, and vulnerabilities." With AI, "information flows are diffracted, distorted and lost," casting the world in statistical approximations and acting, or suggesting actions, upon this interpretation of the world.

The inability of slaughterbots to distinguish between legitimate and illegitimate targets is why we do not see major militaries using them. This field is still in its infancy and is heavily influenced by policy decisions that must be made regarding the precise definition of a lawful military target. Stopping research into autonomous weapons today would only prevent responsible governments from developing systems that can distinguish between legitimate military targets and noncombatants and safeguard innocent lives; it would not stop "slaughterbots" that kill without mercy.

However, those of us who were brought up on a steady diet of Terminator movies need no telling what happens when AI goes terribly wrong. This leads to the notion of AI Assurance: an AI-enabled system is trustworthy to the extent that, one, when used correctly, it will do what it is supposed to do; two, when used correctly, it will not do what it is not supposed to do; and three, humans can dependably use it correctly. The last condition brings human-machine teaming into consideration. Defining those interdependencies, responsibilities and roles between machines and humans will be one of the most crucial things we must work through over the next decade.

With human capability developing comparatively flatly, the human error rate in life-and-death decisions remains roughly constant. Machine accuracy, however, has improved exponentially, and experts talk of AI-assisted weapons potentially surpassing human accuracy in combat-kill decisions. This raises ethical questions about keeping humans in control to minimise civilian deaths. Over the next 30 to 50 years, semi-autonomous systems are likely to persist, with the automated portions becoming more capable and human-machine interfaces improving, allowing operators to control more systems while directly managing less of the detail.

As per the ICRC, the use of autonomous weapons systems entails risks due to the difficulty of anticipating and limiting their effects. This loss of human control and judgment in the use of lethal force raises serious concerns from the humanitarian, legal and ethical perspectives.

Synergia Takeaway

The more challenging and open an environment, the more complex and sophisticated an AI system may need to be to operate in it. And the more complex and opaque a system is, the less predictable or understandable its decision-relevant actions are. This "performance-understandability trade-off poses a central paradox of AI" in weapons systems and complicates matters.

Human capability development remains stagnant, while machine accuracy keeps improving and may eventually surpass it. This adds complexity, which could be addressed by early investment in cognitive warfare analysis, AI-powered predictive maintenance and autonomous target recognition.

Some autonomous weapons must be banned – specifically, those which target humans, which are highly unpredictable, or which function beyond meaningful human control. And even those that can be meaningfully controlled by humans must be regulated.

