Editors' Forum: Alex Johnson, Nicole Dougherty, Emily Mullin, Maria Zelenova, Samer Khurshid

Should Artificial Intelligence Be the Future of Warfare?

Alex Johnson – Research Analyst

The Implications of Innovation: AI and the Future of War

For those of us who are fans of popular sci-fi blockbusters, Hollywood has taught us to fear Artificial Intelligence (AI) and its many potential applications in warfare. We have been conditioned to see AI as a dangerous and disruptive technology that will change the fundamental nature of warfare: politics by violent means. That nature has held for millennia, and I do not expect AI to change it. Rather, AI will likely accelerate an ongoing shift in the character of warfare.

Over time, warfare has evolved to the point where it now favours intellectual capability over physical might and kinetic force. Today, military outcomes are more likely than ever to be determined by the collection and use of critical information, such as intelligence or targeting data. AI is currently being applied in the more mundane areas of our militaries, including communications, intelligence, and supply chains, to streamline operational efficiency, facilitate greater inter-agency collaboration, and improve the overall quality of intelligence. Taken together, these advantages have the potential to reduce casualties in conflict settings. The American government's consistent funding increases for the development of AI's military applications suggest that this technology is indeed the future of warfare. But what does this future hold?

The greatest advances in this technology are currently taking place in the labs of private sector firms in the United States. We should therefore anticipate government partnerships with these firms, designed to facilitate technological innovation through improved R&D resources and data-sharing. Going forward, AI practitioners should establish clear implementation and evaluation frameworks to mitigate potential inefficiencies and technological vulnerabilities. Finally, an independent body should be created to develop and enforce AI best practices and to oversee all research in this field.

Nicole Dougherty – Program Editor for Canadian Armed Forces 

Ethics and AI: Why AI Shouldn’t Be the Future of Warfare

Can we trust AI to make decisions about warfare? For some tasks, the answer seems uncontroversial: AI could plausibly make logistics simpler and more efficient. Real questions emerge, however, when we begin to think about AI as a tool for military strategy, for deployment decisions, or as a weapon itself. How can we trust AI to balance competing interests and exhibit ethical behaviour the way a human can?

The United States has attempted to address this issue. In 2019, the Defense Innovation Board released its recommendations on the application of AI in defence, which emphasized human control over these technologies and the importance of guarding against biases that could exist in an AI algorithm. In 2020, the Trump administration formally adopted these principles, again emphasizing that decisions must be traceable to human oversight and that humans must ultimately be able to override any decision made by an AI.

However, the entire point of AI is to increase the speed and reduce the manpower required to make these decisions. Can we truly have faith that a person is overseeing all major AI decisions in real time and would be able to detect and prevent any unethical behaviour? Moreover, how can we counter a system's innate bias towards protecting our own forces, which may leave it unable to balance the interests of different combatant and civilian groups against that protection? Can we trust AI? Perhaps for the little things, but not for everything else.

Emily Mullin – Research Analyst 

What is the International Response to AI Weapons? 

With countries such as China, Russia, the United States, South Korea, the United Kingdom, and Israel investing substantially in AI-based weapons, algorithmic warfare may arise in the not-too-distant future. These weapons, known as lethal autonomous weapons systems (LAWS), can select and engage targets without direct human oversight. The ethical and moral concerns surrounding LAWS have sparked an international debate on the lawfulness of their use. It is unlikely that these machines will operate in accordance with the law of armed conflict, otherwise known as international humanitarian law, nor can it be assured that autonomous weapons will respect human rights law. In either case, enforcing accountability may prove challenging as technological advancements continue to blur the line between human and machine agency.

International efforts to ban the use of LAWS are underway. One such effort is the UN's Convention on Certain Conventional Weapons (CCW), under which 125 countries assemble to discuss regulating LAWS. The CCW operates by consensus, a principle that has hindered change because it allows a single dissenting party to block a decision. Lately, countries with a large stake in AI technology have stalled progress towards an international accord that would prohibit autonomous weapons. In August 2019, Russia and the United States blocked talks on a new LAWS treaty, arguing that such a measure was unnecessary given the infancy of the AI industry. Considering that LAWS are being programmed to make life-and-death decisions, the capabilities of AI technology should not be underestimated. An international treaty must mandate human control over the use of force so that autonomous weapons do not transform the nature of warfare.

Maria Zelenova – Program Editor for Canada’s NATO

AI and the Future of Conflict: NATO’s Challenges 

New developments in Artificial Intelligence (AI) are changing every aspect of global security and conflict. AI has created a drastic shift in the political, economic, and military influence of states that are able to keep pace with changing standards for digital capability. By contrast, states that cannot keep pace and adapt will inevitably be disadvantaged in geopolitical competition.

Two domains of AI development are of particular concern to NATO. The first is the use of AI technologies by adversaries in the digital sphere. Especially worrying is the evolution of hybrid warfare, as AI broadens the scope for the weaponization of disinformation and the distortion of facts. As physical and virtual battlefields become more interconnected, the resolution of future conflicts will depend largely on states' ability to coordinate their capabilities in both spheres. The second concern is the development and evolution of fully autonomous weapon systems, a prospect that poses significant ethical, legal, and operational questions for the future of the alliance's operations.

What do these challenges mean for NATO? Addressing them requires coordinated strategies to protect the values the alliance has held for decades. While such coordination largely exists in the physical security domain, NATO would benefit from identifying the chief AI vulnerabilities in its digital security sphere and creating a regulatory framework for threat assessment and risk reduction. Some governments have already taken up the task: the United Kingdom has appointed a Select Committee on Artificial Intelligence, and the United States has launched the American Artificial Intelligence Initiative. Other countries have also created AI development strategies with different priorities; the US, for instance, is focused predominantly on military capability, while Canada is a leader in AI research. The changing landscape of warfare demands development not only in technology, but also in the conduct of diplomacy and coordination.

The chief challenge for NATO members remains not only adapting to the new technological realities of conflict, but also synchronizing development priorities. Correctly identifying gaps in digital security, as well as in the development of fully autonomous weapon systems, is the first step towards adequate preparedness against potential attacks from adversaries.

Samer Khurshid – Research Analyst

Skynet Invading: Get John Connor Now!

Throughout history, there have been several instances of experts contemplating the trends of future warfare. Prior to WWII, few contemplated that mighty battleships would fall victim to flimsy airplanes launched from a ship's deck. Yet it happened. Today, everybody is certain that AI is the logical next step in the future of warfare. My answer is yes and no.

Yes, in the sense that AI reduces the human element and speeds up various processes. AI has allowed humanity to expand its knowledge and capabilities.

No, based on the numerous flaws associated with AI. These flaws were evident in the US Millennium Challenge 2002, which pitted a fictional Iranian force, led by a retired US Marine Corps general, against the United States. The US predicted an easy victory, given its adoption of AI systems. Ironically, the fictional Iran won through non-conventional tactics and communication systems. To add insult to injury, one need only mention Boeing's MCAS system on the 737 Max aircraft; even today, the issue is not entirely resolved. Furthermore, numerous campaigns against the use of AI in warfare have risen to prominence, notably the Campaign to Stop Killer Robots. These campaigns vociferously protest the adoption of AI in warfare, arguing that it should not be used to kill people and destroy communities.

To conclude, a wholesale case against AI is ludicrous, given its benefits. Opportunities exist to make AI less vulnerable to hacking while maintaining humanity's intellectual and intuitive edge over it. However, humanity needs to understand that the best way to proceed with AI is not to endow it with human intuition and other human features. Otherwise, we will be begging for "John Connor" to help.

Cover image: Campaign to Stop Killer Robots meeting (2013) by Campaign to Stop Killer Robots via Wikimedia Commons. Licensed under CC BY 2.0

Disclaimer: Any views or opinions expressed in articles are solely those of the authors and do not necessarily represent the views of the NATO Association of Canada.

Author: NATO Association of Canada

The NATO Association of Canada (NAOC) strives to educate and engage Canadians about NATO and its goals of peace, prosperity and security. The NAOC ensures Canada has an informed citizenry able to participate in discussions about its role on the world stage.
