
‘Killer Robots’ and the Future of Security

In recent months, the topic of Artificial Intelligence (AI) and Lethal Autonomous Weapons Systems (LAWS) has come to the attention of the international community. At the end of July, a number of high-profile scientists and AI researchers voiced concerns over the militarization of autonomous robots. Elon Musk, Steve Wozniak, and Stephen Hawking, along with roughly 1,000 other experts, argued that AI represents “the third revolution in warfare, after gunpowder and nuclear arms.” They further claimed that AI-driven robots could lower the threshold for going to battle, which could result in greater loss of life. Prior to this, in May 2015, the UN held a discussion about pre-emptive decision-making on what has become a controversial subject in the international community.

Much of the discussion has focused on stereotyping LAWS as some type of ‘Killer Robot,’ akin to those we see in science-fiction films like The Terminator, I, Robot, or 2001: A Space Odyssey. This terminology, however, frames the debate in a negative light and fails to identify the positive features of AI in warfare. However the debate is framed, the fact remains that AI has revolutionary potential on the battlefield. If we can put our unfounded science-fiction fears aside, this revolution could prove far more beneficial than anti-AI campaigns would have us believe. Many experts claim the technology is only years, not decades, away, so NATO and Canadian officials will be pressed to make definitive decisions about the future use of AI in warfare. Kathleen Harris’s CBC article cites recent government documents indicating that officials at Canadian Foreign Affairs and National Defence are maintaining a neutral stance for now, despite pressure from human rights and international law groups.

The positive features of AI are as numerous as the criticisms. Computers, even those running AI, can be programmed with a basic, black-and-white code of conduct. Compliance with international laws and conventions would therefore be more reliable with an AI than with a person: countless decades of conflict have taught us that people are notoriously bad at following the rule of law in combat. An AI is not bound by emotional responses. It does not react to fear or anger, and that would ultimately be beneficial in armed conflict.
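To make the idea concrete, here is a minimal sketch of what such a black-and-white code of conduct might look like in software. Everything in it (the Contact record, the protected categories, the engagement_permitted check) is a hypothetical simplification for illustration, not a description of any real weapons system.

```python
# Hypothetical illustration only: encoding a 'black and white' code of
# conduct as explicit, auditable rules. Names and categories are invented.
from dataclasses import dataclass

# Categories protected under international humanitarian law (simplified).
PROTECTED = {"civilian", "medic", "surrendering combatant"}

@dataclass
class Contact:
    category: str    # classification assumed to come from upstream sensors
    is_armed: bool

def engagement_permitted(contact: Contact) -> bool:
    """Apply every hard rule identically, every time: no fear, no anger,
    no fatigue, and no discretion to bend the rules in the heat of combat."""
    if contact.category in PROTECTED:
        return False  # protected status is absolute
    if not contact.is_armed:
        return False  # unarmed contacts are never engaged
    return True

# A surrendering combatant is never a lawful target, armed or not.
print(engagement_permitted(Contact("surrendering combatant", True)))  # False
```

The point of the sketch is that the rules are explicit and checkable in advance, which is precisely what cannot be guaranteed of a frightened or angry human under fire.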

LAWS have a far greater capacity to evaluate and interpret what is happening on the battlefield than people do. As Rosa Brooks put it in Foreign Policy,

“Our eyes face only one direction; our ears register only certain frequencies; our brains can process only so much information at a time. Loud noises make us jump, and fear floods our bodies with powerful chemicals that can temporarily distort our perceptions and judgment.”

Given this advantage, it is understandable that both the defence industry and the intelligence community are clamoring to introduce AI platforms into the field, where they could bring greater effectiveness and efficiency. An AI will not misread a gesture, misunderstand an instruction, or mistake a friend for an enemy.

Much as Sonny was designed in I, Robot, an AI could reproduce all the best parts of the human brain with incalculably greater speed, accuracy, and reliability, radically improving on the performance of the typical soldier.

Lastly, the ‘shoot second’ principle is extremely interesting for how it could change the landscape of the battlefield. Soldiers in combat operate in an extremely hostile environment and often react before fully taking in the information in front of them. This is not their fault; it is human nature and natural instinct. LAWS, by contrast, would be able to survey and assess before engaging:

“Unlike human soldiers, they could be programmed to ‘shoot second’ with high accuracy — or even give an enemy the opportunity to surrender after the enemy has fired his weapon — thus potentially decreasing civilian casualties and increasing the chance of capturing enemy combatants.”

This could reshape the way we fight modern wars, for the better. Decreasing civilian and military casualties should always be of the utmost importance in conflict, and AI offers a realistic opportunity to do so at little cost.
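As a purely illustrative sketch, the ‘shoot second’ principle quoted above can be written down as an explicit decision rule. The states and the shoot_second function below are invented for demonstration; no real system is being described.

```python
# Hypothetical illustration of the 'shoot second' principle: hold fire until
# fired upon, then offer surrender before ever returning fire.
from enum import Enum, auto

class Response(Enum):
    HOLD = auto()             # keep observing; never engage first
    OFFER_SURRENDER = auto()  # signal the opportunity to surrender
    RETURN_FIRE = auto()

def shoot_second(fired_upon: bool, surrender_offered: bool,
                 surrender_accepted: bool) -> Response:
    """Unlike a human soldier, the system can absorb the first shot and
    still respond deliberately rather than instinctively."""
    if not fired_upon:
        return Response.HOLD
    if not surrender_offered:
        return Response.OFFER_SURRENDER
    if surrender_accepted:
        return Response.HOLD  # capture rather than kill
    return Response.RETURN_FIRE

# The enemy fires first: the system responds by offering surrender...
print(shoot_second(True, False, False))  # Response.OFFER_SURRENDER
# ...and returns fire only if the offer is refused.
print(shoot_second(True, True, False))   # Response.RETURN_FIRE
```

The design choice worth noticing is that restraint is the default: every branch but the last resolves to holding fire or offering surrender, which is exactly the behaviour the quoted passage describes.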

Most of the debate surrounding what has been mockingly dubbed ‘Killer Robots’ has centred on human morality: the idea that removing the human presence from the battlefield also removes morality from it. It is understandable why some feel this is the case, but is a LAWS programmed and controlled by a moral person not equally moral? This debate is certain to rage on for the foreseeable future, with tremendous support on both sides. If people can see past their science-fiction fears and fictitious moral conundrums, perhaps there will be a place for ‘Killer Robots’ in the future of security.

Ian Goertz
Ian Goertz is currently a Research Analyst for Canada’s NATO at the NATO Association of Canada. Ian recently completed his M.A. in Intelligence and Strategic Studies at Aberystwyth University in Wales. Prior to this, he completed his B.A. in Political Science at McMaster University. Ian wrote his dissertation on the role of technological innovation in the intelligence community. His research interests include intelligence and strategic studies, emerging security, and the international relations of technology and the Internet. He was also a semi-finalist at the Inaugural Cyber 9/12 Student Challenge in Europe, held by the Atlantic Council in Geneva, Switzerland. Ian’s other interests include sports, comic books, video games, and science fiction.
http://natoassociation.ca/about-us/contributors/ian-goertz/