
Dawn of the Killer Robots

SWORDS Talon unmanned ground vehicle. [Robotinsider]
The use of unmanned drones in combat has already generated its share of controversy, but concerns about the distance between human judgment and lethal force may soon take on a whole new dimension. Lethal autonomous robots (LARs) are an emerging military technology that raises ethical and legal questions which until recently belonged to the realm of science fiction. LARs, which have been compared alternately to land mines and to unmanned drones, are killing machines programmed to select and engage targets without human input. Over the past five years, numerous countries have begun developing LARs components and technology, including the United States, Israel, South Korea, and Russia.

Last year, a UN expert report warned of the potential dangers of the controversial new technology, recommending a moratorium on development and the establishment of a high-level panel on LARs to produce new policy addressing the issue. The report questions whether the use of LARs can ever be acceptable, and highlights the ethical problem posed by robots having “the power of life and death” over human beings. It stresses the urgency of the issue, warning that once countries have developed and deployed LARs, it will be too late to respond appropriately.


Activist groups such as the Campaign to Stop Killer Robots and Mines Action Canada have been pressuring the Canadian government to support an international ban on the weapons, hoping for action echoing Canada’s leadership on the 1997 ban on land mines. The initiative, called “Keep Killer Robots Fiction,” has gained momentum in the past week through a series of public events in Ottawa and an increased media presence, which have raised some of the most pertinent arguments against LARs technology.

Allowing robots to choose their own targets can result in deaths that might have been prevented by compassion or human judgment. According to a Human Rights Watch report, LARs cannot comply with international law because they cannot distinguish between soldiers and civilians. The technology also makes battles cheaper and easier to wage by lowering the costs of killing: robots feel no guilt, suffer no trauma, and can be disposed of more cheaply than human soldiers when broken. They have the additional advantage of having no families to mourn their loss or protest a war when too many are “killed”. Using LARs does not merely remove human beings from the nasty business of targeted killings; it also distances them from the very act of killing and its consequences.

On the other side of the debate, some have argued that removing a human operator from a drone’s system can actually make lethal weapons safer. According to LARs proponents, robots can target and aim better than human beings, and will achieve tactical objectives without succumbing to human vulnerabilities like emotion or personal bias.

While the unmanned “human-in-the-loop” drones currently used in American counterinsurgency missions are quite vulnerable to hackers, LARs advocates argue that removing a robot’s receptivity to remote commands could protect it from threats like the virus that infected drones at a Nevada Air Force base in 2012. That virus apparently entered the drone-control system when an operator used it to play an online game, demonstrating an alarming detachment from the seriousness of the job. On the other hand, LARs would likely remain vulnerable to GPS jamming, which amateurs can carry out cheaply.

Climbing cinderblocks is half the battle. [Digital Strategy Consulting]

The biggest question facing LARs policymakers, however, is accountability. Whether LARs can be used ethically or legally is moot if no human being can be held responsible for rule-breaking after the fact. Slow policy development and knowledge gaps leave unanswered questions about whether military commanders who commission the use of LARs can be held accountable for decisions the robots make without human input. Deterring violations of any agreement on LARs is nearly impossible: robots do not weigh future consequences, and the human beings diffusely involved in a robot’s decision (programmers, commanders, builders) are too distant from it to be blamed for violations. The programs themselves are unpredictable, making it difficult to anticipate how LARs would respond to unforeseen circumstances, such as two opposing sets of LARs being pitted against each other. If the Canadian campaign against killer robots fulfills its mission, we may never have to find out.

 

Aylin Manduric
Aylin is working on an Hon. B.A. in International Relations and Peace, Conflict, and Justice Studies at the University of Toronto. She works as a compliance analyst for the G20 Research Group and as a civil society analyst for the G8 Research Group. She also volunteers with several global health NGOs, and serves on the executive board of a student group dedicated to global healthcare advocacy. Her research interests include security, counter-terrorism, conflict recovery, and state-building in the Middle East and North Africa. In writing, she hopes to make security and defense issues accessible to readers, and to empower youth to take an interest in international relations by offering a balanced perspective on international affairs.