
The Strategic Race for “Algorithmic Warfare” and AI Development

Vladimir Putin might actually be right on this one. AI (artificial intelligence) is “the future”, according to the President of Russia, who has also opined that “whoever becomes the leader in this sphere will become the ruler of the world.” US Air Force leaders are apparently in agreement, and other US military officials have been lobbying the Trump administration for increased funding for research and development in “algorithmic warfare” – a new term arising from the US Department of Defense’s (DoD) Project Maven which refers to the general use of AI and machine learning (ML) algorithms in the conduct of warfare. With nations like China also racing to develop AI, it is clear that algorithmic warfare will be of great concern to any state wishing to maintain military might and the geopolitical power that comes with it.


While Hollywood depicts AI in the form of violent humanoid robots or disembodied voices, AI is expected to make its military debut in the less glamorous but unequivocally vital role of data management. Wars are won or lost with information. As such, nearly all NATO assets present on the physical battlefield are set to feature sensors, cameras, and other state-of-the-art technologies in the near future, eliminating much of the so-called “fog of war” for those directing combat operations. AI is also set to further enhance network-centric warfare – a military doctrine that exploits networked information technology to coordinate forces.


With more of this newly deployed hardware in the field, a massive amount of ISR (intelligence, surveillance, and reconnaissance) data can be collected. Normally, support personnel must sift through vast volumes of footage, audio, and other collected data to identify relevant information for commanders to act upon. The Band-Aid solution – adding more support personnel – is becoming untenable given the sheer volume of data that must be analyzed under the time pressures and error constraints of a combat situation.


Simply put, AI is the answer to the future needs of NATO countries such as Canada. AI theoretically opens the door to efficient and effective processing, exploitation, and dissemination (PED) of data gathered from the battlefield, at a speed and error rate that no group of human analysts could match. As a result, relevant battlefield data can reach decision-makers faster and more reliably than ever before. With emerging global threats becoming increasingly sophisticated as technology progresses and proliferates, this enhanced speed and accuracy will matter.
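
To make the PED idea more concrete, the sketch below shows, in purely illustrative Python, how an automated triage step might work: a stand-in classifier assigns each incoming ISR frame a relevance score, and only frames above a confidence threshold are queued for a human analyst, highest score first. The classifier, thresholds, and data structures here are assumptions made for illustration, not a description of Project Maven’s actual software.

```python
"""Illustrative sketch of automated ISR triage.

Hypothetical example only: a stand-in classifier scores each frame, and
only frames whose score clears a confidence threshold are forwarded to
the analyst queue, most confident first.
"""
from dataclasses import dataclass
import random


@dataclass
class Frame:
    frame_id: int
    sensor: str        # e.g. "uav-cam-3"
    timestamp: float   # seconds since mission start


def classify(frame: Frame) -> float:
    """Stand-in for a trained ML model.

    A real system would run object detection on the frame's imagery and
    return a confidence that something militarily relevant is present.
    Here we simply return a random score for demonstration.
    """
    return random.random()


def triage(frames: list[Frame], threshold: float = 0.8) -> list[tuple[Frame, float]]:
    """Keep only frames the model flags as likely relevant, sorted by confidence."""
    scored = [(frame, classify(frame)) for frame in frames]
    relevant = [(frame, score) for frame, score in scored if score >= threshold]
    return sorted(relevant, key=lambda pair: pair[1], reverse=True)


if __name__ == "__main__":
    random.seed(42)
    incoming = [Frame(i, "uav-cam-3", i * 0.5) for i in range(1000)]
    queue = triage(incoming)
    # Analysts review only the flagged frames instead of all 1000.
    print(f"{len(queue)} of {len(incoming)} frames queued for analyst review")
    for frame, score in queue[:5]:
        print(f"frame {frame.frame_id} at t={frame.timestamp:.1f}s, confidence {score:.2f}")
```

Even this toy version shows where the leverage lies: the human workload scales with the number of flagged frames, not with the total volume of data collected.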


As President Putin alluded to, AI has been pushed to the governments of major powers in recent years as a means of obtaining or maintaining a strategic qualitative and quantitative edge in military affairs. NATO militaries have enjoyed a distinct qualitative edge in military power since the end of the Cold War, but lacklustre AI development on the part of NATO members could easily reverse this trend. To put it bluntly, superior AI development by an adversarial state actor could easily mean the defeat of NATO forces on the battlefield. AI, then, is the key to strategic dominance in the future geopolitical landscape.


The race for AI superiority, of course, began as soon as the technology had advanced to the point where military and political leaders were confident it could be usefully employed by their militaries. So far, three permanent members of the UN Security Council – the US, Russia, and China – have recently made substantive efforts to militarize AI. In April, the US DoD created the Algorithmic Warfare Cross-Functional Team – an R&D group – and plans to deploy ISR-analysis algorithms against the Islamic State of Iraq and the Levant (ISIL) by the end of this year. In July, the Chinese government announced its ambition to be “the front-runner and global innovation centre in AI” by 2030 in order to “elevate national defence strength and assure and protect national security.” Russia’s government likewise aims to automate aspects of its military by 2025, and President Putin’s recent remarks have made Russia’s motivations clear.


Meanwhile, AI development has flourished in the civilian commercial sector, which has arguably driven military AI development as well, making AI a dual-use technology – a term familiar from the discussion of nuclear weapons. From major Silicon Valley companies to the world’s top universities, the latest AI developments show great promise to both civilian consumers and watchful military analysts. In particular, high-speed, high-accuracy analytical software – so-called “smart” software – and marriages between AI and robotic systems are drawing the attention of those looking to develop offensive cyber operations software or enhanced military drones.


More intriguingly, algorithmic warfare could also have a place in hybrid warfare. AI could be used to strengthen propaganda campaigns, which would be especially useful when targeting democratic societies that hold elections. AI is also already widely used by investors on the securities exchanges of major trading hubs, and these automated trading systems could conceivably trigger and exacerbate a chain reaction of rapid, unexpected trades leading to real economic damage. To those looking for new ways to harm another state, the uses of AI seem endless.


It is clear that recent developments in AI technology have spawned a new type of warfare, much as nuclear technology did in the 1940s and cyber technology in the 1990s. “We are in an AI arms race,” according to US Marine Corps Colonel Drew Cukor, and “it’s happening in industry [and] the big five Internet companies are pursuing this heavily. Many of you will have noted that Eric Schmidt [executive chairman of Alphabet Inc.] is calling Google an AI company now, not a data company.” As long as AI retains military potential, its development by major powers will not stop. The race is on for AI hegemony and, perhaps, global hegemony.


Photo: At a meeting with students from Sirius Educational Centre (2017), by the Kremlin. Public Domain.


Disclaimer: Any views or opinions expressed in articles are solely those of the authors
and do not necessarily represent the views of the NATO Association of Canada.

Edward Tat
Edward Tat is the Program Editor of Emerging Security at the NATO Association of Canada. He is also an associate of the Canada-Turkey Business Council, the Canada-Albania Business Council, and the Canada-Arab Business Council, in addition to being the NATO Association's video production and podcasts director, official copy editor, and cybersecurity expert. Edward holds a Bachelor of Arts degree in philosophy, politics, and economics from the University of British Columbia. With an academic concentration in public policy and political economy, his thesis on national offensive and defensive cybersecurity policy is currently pending publication. His work has also been published by the Royal Canadian Military Institute's SITREP and The Phoenix News – a university-wide newspaper. His undergraduate research featured Canadian and American economic policy analysis, Western and Subaltern political thought, statecraft, security intelligence, and hybrid warfare. Edward is an avowed poet and has been involved in debate societies since childhood. In his free time, Edward is an active sports shooter and a vocal member of the National Firearms Association. See his LinkedIn profile HERE.
http://natoassociation.ca/edward-tat/