
Fully Autonomous Robot Weapons

Advances in artificial intelligence (AI) and related technologies are paving the way for fully autonomous robot weapons. The use of these weapons would revolutionize military strategy because, unlike drones, which remain under human control, they would be capable of selecting and engaging targets without human supervision. Ultimately, they could replace humans on the battlefield and become an unprecedented danger.

[Image: The US Navy’s MK 15 Phalanx Close-In Weapons System is an example of how close we are to producing fully autonomous killing machines.]

What Happens Now?

Since the prospect of a buildup of killer robots was first recognized, some 40 states have voiced concern about the dangers of this form of weaponry. The underlying issue is civilian protection. The use of fully autonomous robots would violate International Humanitarian Law (IHL), also known as the Laws of War; specifically, it would undermine the principles of distinction, proportionality, and military necessity. Beyond the lack of compliance with IHL, there is the problem of human emotion. Robots are incapable of subjective decision-making: human emotion, consciousness, and judgement are absent. This endangers ordinary civilians, because a robot cannot discern the intentions behind human actions, and without human judgement the principle of distinction is threatened.

Another troubling prospect is an increase in armed conflict. With fewer soldiers being lost on the battlefield, world leaders could be tempted to wage more wars to “solve” minor disputes. The question of accountability is also of the utmost concern: who will be held responsible for inhumane conduct committed by these machines?

States Are Responsible

These “killer robots” are expected to take another 20 to 30 years to fully develop. In response, a group of states parties to the Convention on Certain Conventional Weapons (CCW) will open international discussions on pre-emptively banning killer robots. These discussions will take place in Geneva in May 2014.

To protect humanity from the dangers of autonomous killer robots, it is the responsibility of leaders to adopt national policies dedicated to controlling this potential threat. Specifically, states should pre-emptively prohibit the development of these robots by binding themselves to an international treaty. In essence, robots should not take over human responsibilities.

Vanita Thind
Vanita completed her MA in International Conflict and Security at the University of Kent, Brussels School of International Studies. Prior to that, she received her Honours BA in International Studies from Glendon College, York University. In her final year at Glendon College, Vanita was invited to be a Junior Research Fellow for the European Union Centre of Excellence at York University for her role as a coordinating member of a conference on contemporary Germany. During Vanita’s year abroad, she was actively involved in organizing a conference on the international implications of the Arab uprisings, and she also worked with the Home Government Department for the Model NATO Youth Summit (MoNYS 2013). Vanita’s research interests are broad, but her main focus relates to conflict resolution, international relations, the Syrian non-international armed conflict, NATO’s security policies, and international humanitarian and criminal law.