
AI’s Impact on Society and Security

When was the last time you interacted with artificial intelligence (AI)? The answer to that question is probably more recently than you think. For many people in countries with advanced economies, AI has become part of everyday life in a wide array of personal, business, and security applications. 

AI is the use of computers and other machines to replicate the decision-making and problem-solving capacities of humans. Experts differentiate between artificial general intelligence, which would equal human abilities, and artificial superintelligence, which would surpass human intelligence. Artificial superintelligence does not yet exist, but the concept is being explored by researchers. Machine learning, a subfield of AI, focuses on mimicking human learning so that AI programs can incrementally improve themselves. Recent advances in the use of “artificial neural networks” and “deep learning” have allowed machine learning researchers to process much larger data sets, and in a growing number of tests the resulting AI software has outperformed humans.
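To make the idea of incremental improvement concrete, the sketch below shows a single artificial neuron, the building block of the neural networks mentioned above, learning a simple rule from labelled examples. It is a minimal illustration in Python; the scenario, data, and learning rate are invented for this article, not drawn from any real system.

```python
# A minimal sketch of machine learning: one artificial neuron (a perceptron)
# incrementally improving its guesses from labelled examples.
# All numbers here are invented for illustration.

# Hypothetical training data: hours of study -> passed exam (1) or not (0)
examples = [(1.0, 0), (2.0, 0), (3.0, 0), (6.0, 1), (7.0, 1), (8.0, 1)]

weight, bias = 0.0, 0.0   # the model's adjustable parameters
learning_rate = 0.1       # how large each incremental correction is

for epoch in range(20):   # repeated passes over the data
    for hours, target in examples:
        prediction = 1 if weight * hours + bias > 0 else 0
        error = target - prediction
        # Nudge the parameters toward the correct answer (the perceptron rule)
        weight += learning_rate * error * hours
        bias += learning_rate * error

print(f"learned parameters: weight={weight:.2f}, bias={bias:.2f}")
print("predicts a pass after 6.5 hours:", weight * 6.5 + bias > 0)
```

The program is never told where the pass/fail boundary lies; it infers one from the examples, which is the essence of the “learning” described above. Deep learning scales this same idea up to networks of millions of such units trained on far larger data sets.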

The concept of AI dates to 1950, when Alan Turing published an article entitled “Computing Machinery and Intelligence.” In it, Turing asked whether machines could think and unveiled the famous Turing Test for establishing whether a machine has achieved thought. Six years later, John McCarthy coined the phrase “artificial intelligence” while attending the world’s first conference on AI at Dartmouth College. Shortly after, Allen Newell, J.C. Shaw, and Herbert Simon released the first AI software program, Logic Theorist. In the intervening years, a series of research breakthroughs brought AI from a theoretical conversation about what could be possible to a reality with practical applications in a broad array of sectors and industries.

AI can now be found in technologies across a wide range of sectors, including the health sciences, banking, finance, insurance, marketing, robotics, manufacturing, e-commerce, social media, and weapons manufacturing. In each sector, AI helps governments and businesses provide, and individuals access, less expensive and more tailored products and services. McKinsey & Company estimates that around 70 percent of companies will have adopted AI by 2030, contributing an additional 1.2 percent to global gross domestic product growth per year. This explosion of popularity reflects the advancements made in AI since the early 2010s.

In 2011, IBM’s Watson beat Ken Jennings and Brad Rutter in an episode of Jeopardy!. Since then, AI programs have surpassed humans at a variety of tasks. In the current era of ‘Big Data’, the digital footprints we leave behind on the internet are used by website and data owners for all kinds of purposes, including the accelerated training and improvement of AI programs. These developments can benefit society: advances in speech recognition, for example, create accessibility accommodations for anyone who is unable to text.

AI programs can also put the financial, data, physical, and national security of citizens at risk. AI is known to discriminate against individuals based on sex and race; biased résumé, mortgage, and credit card application processing programs directly affect individuals’ ability to secure employment, housing, or financial products. The AI programs behind facial recognition and data analytics technologies also raise data privacy concerns, as the images and information collected and processed by these programs are extracted without the full consent of the data’s creators. Facial recognition and other forms of computer vision, in which AI programs recognize images and video and then take what they judge to be the appropriate actions, can even endanger the physical safety of individuals by misidentifying them as the perpetrators of crimes, incorrectly deciding to fire AI-enabled weaponry, or causing autonomous vehicle accidents. Beyond physical security, the unreliability of AI-enabled weaponry can also threaten national security when used in military contexts, particularly in theatres of war.
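To show how the bias described above can be detected in practice, here is a short, entirely hypothetical sketch: an auditor compares an AI screening model’s approval rates across two demographic groups, and a large gap flags the system for review. The groups, decisions, and numbers are invented for illustration; real audits use far larger samples and several fairness metrics.

```python
# Hypothetical audit of an AI screening model for group-level bias:
# compare approval rates across demographic groups ("demographic parity").
# All records below are invented for illustration.

decisions = [
    # (applicant_group, model_approved)
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True), ("group_b", False),
]

def approval_rate(group: str) -> float:
    outcomes = [approved for g, approved in decisions if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = approval_rate("group_a")  # 75% in this toy data
rate_b = approval_rate("group_b")  # 25% in this toy data

print(f"group_a approval rate: {rate_a:.0%}")
print(f"group_b approval rate: {rate_b:.0%}")

# A wide gap between groups is a signal to pull the model for human review.
print(f"parity gap: {abs(rate_a - rate_b):.0%}")
```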

Questions of accountability also arise when AI is used in autonomous weapons or vehicles. Who is responsible if someone is injured or killed? The answer could be the person who wrote the code for the program, the technician who maintains the system, the owner of the AI-enabled system, or a combination of these actors. These ambiguities must be resolved if AI is to be used in an ethical and responsible manner.

Countless companies, along with UNESCO, China, the Council of Europe, the European Commission, Hong Kong, and Singapore, have created responsible AI frameworks to guide the use and development of AI. In April 2021, the European Commission proposed a framework called the “Artificial Intelligence Act.” The first act of its kind, it would directly regulate the use of AI across the EU and raise standards for responsible AI use around the world. In June 2022, Canada followed the EU’s lead by introducing the Artificial Intelligence and Data Act in the federal legislature, and in July 2022 the UK did as well, when the Data Protection and Digital Information Bill was proposed in Parliament.

If passed, the proposed bills and frameworks will go a long way towards protecting against bias, privacy violations, and the reckless use of AI. However, regulations can never foresee every future application of these technologies or their associated faults. Given this uncertainty, users must always remain critical of the technology they use. AI is not infallible, and its future applications will not be either. Only by using AI and emerging technologies responsibly and thoughtfully can we avoid the pitfalls of such new and unfamiliar territory.

Photo: Image of a robotic hand extending towards the viewer via Possessed Photography on Unsplash, 2018

Disclaimer: Any views or opinions expressed in articles are solely those of the authors and do not necessarily represent the views of the NATO Association of Canada.

Author

  • Kate E. Todd is a federal public servant and Naval Warfare Officer in the Royal Canadian Naval reserves. She was a Junior Research Fellow, Program Editor and Senior Editor at the NATO Association of Canada from 2022 to 2023. Currently, she is a fellow with Arctic360, the Canadian Global Affairs Institute and the North American and Arctic Defence and Security Network and on the Editorial Board of the Canadian Naval Review. Kate received her Master of Public Policy from the University of Toronto’s Munk School of Global Affairs and Public Policy in 2024 and Bachelor of Arts with Honours specializing in political science and minoring in public law from the University of Toronto in 2022. Kate’s research and publications focus on maritime, Arctic, economic and national defence and security as well as economic, infrastructure, and natural resource development.
