Bryan Roh | Cyber Security and Emerging Threats

The Need for Secure AI Design Frameworks and AI Model Security

AI technologies are becoming widely adopted in industries across the globe for their potential to revolutionize how organizations engage in business. For example, a report published in late 2020 by McKinsey suggests that organizations are increasingly viewing AI as a tool to generate value, and that the COVID-19 pandemic has further spurred business leaders to invest more in AI capabilities. As AI systems become more embedded in critical business operations across multiple sectors, it is vital that organizations ensure that their information security teams are prepared to defend their AI assets.

Businesses are aware of the added cybersecurity risk that comes with deploying AI systems. For instance, Deloitte’s 2020 “State of AI in the Enterprise” survey found that 62% of AI adopters saw cybersecurity vulnerabilities as their most concerning AI risk. Confidence in protecting those systems, however, lagged behind: only 39% of respondents felt well prepared to address such vulnerabilities.

According to the Cybersecurity Framework published by the National Institute of Standards and Technology (NIST), risk management is an integral component of any information security program. During a risk assessment, an organization identifies the threats and vulnerabilities that apply to the assets it intends to protect. Organizations that have adopted AI technologies will therefore need a clear understanding of which threats pose the most risk to their AI systems and of what vulnerabilities in their AI models adversaries can exploit. Only after completing a thorough risk assessment can security teams begin to deploy the security controls needed to mitigate those threats.

Although AI systems are vulnerable to the same kinds of conventional cyber attacks that threaten enterprise networks, they are also uniquely susceptible to other threats because of how they operate. As one expert put it, “by virtue of the way they learn, [AI systems] can be attacked and controlled by an adversary”. Cybersecurity professionals must therefore defend their organizations’ AI assets not just from traditional cyber threats, but also from new forms of attack that explicitly target the underlying algorithms.

A recently published joint report by Europol, UNICRI, and Trend Micro provides three recommendations for doing this:

First, organizations should promote frameworks that facilitate secure AI model development and closely abide by security-by-design principles. Organizations with the financial capacity and organizational maturity to invest heavily in AI are choosing to develop and manage their own AI solutions with internal data science and engineering teams. From a security perspective, this complex and expensive undertaking contains many stages in the AI development pipeline that threat actors could exploit: everything from the initial collection of training data to the model’s consumption of data in production environments, and the storage of that data, has the potential to be compromised. Securing the entire AI model development lifecycle is therefore an important part of an effective risk management strategy for protecting an organization’s AI assets from cyber threats.
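The report does not prescribe specific controls, but a minimal sketch of one security-by-design safeguard early in that pipeline is a pre-training integrity check of the dataset against a previously approved manifest. The file layout, manifest format, and function names below are hypothetical and chosen purely for illustration.

```python
import hashlib
import json
from pathlib import Path

# Hypothetical manifest mapping each training file to the SHA-256 digest
# recorded when the dataset was first approved for use.
MANIFEST_PATH = Path("training_data/manifest.json")


def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_training_data(manifest_path: Path = MANIFEST_PATH) -> bool:
    """Refuse to start a training run if any file is missing or altered."""
    if not manifest_path.exists():
        print("No approved manifest found; aborting training run.")
        return False
    manifest = json.loads(manifest_path.read_text())
    for filename, expected_digest in manifest.items():
        path = manifest_path.parent / filename
        if not path.exists() or sha256_of(path) != expected_digest:
            print(f"Integrity check failed for {filename}; aborting training run.")
            return False
    return True


if __name__ == "__main__":
    if verify_training_data():
        print("All training files match the approved manifest; safe to proceed.")
```

A check like this only covers one stage of the lifecycle, but it shows how security can be built into the pipeline rather than bolted on afterwards.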

Second, data protection frameworks aimed at securing the data used to develop, train, and operate AI systems should be integrated into organizations’ cybersecurity regimes. AI systems depend on the accuracy of the data they consume to perform the functions they were designed for, which gives threat actors a strong incentive to attack their data integrity. AI systems generally rely on three types of data: data used to create a predictive model; data used to test and assess the built model; and real-time or operational data fed to the finished model in production. All three are prone to integrity attacks and need to be secured. For example, threat actors could mount data poisoning attacks that deliberately alter or add malicious data to the datasets used to train AI models, corrupting the AI’s decision-making ability and potentially installing backdoors that can later be exploited. They could also use adversarial examples, inputs subtly tampered with to throw off a model’s classification ability and trick it into producing false results in production.
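As a rough illustration of how sensitive model quality is to data poisoning, the toy sketch below flips a fraction of training labels and compares the resulting classifier with one trained on clean labels. The synthetic dataset, logistic-regression model, and 30 percent flip rate are arbitrary assumptions chosen for demonstration, not details drawn from the report.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic binary classification data standing in for an organization's dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Simulated attacker flips the labels of 30% of training samples (label-flipping poisoning).
poisoned = y_train.copy()
idx = rng.choice(len(poisoned), size=int(0.3 * len(poisoned)), replace=False)
poisoned[idx] = 1 - poisoned[idx]

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)

print("Accuracy when trained on clean labels:   ", clean_model.score(X_test, y_test))
print("Accuracy when trained on poisoned labels:", poisoned_model.score(X_test, y_test))
```

Even this simple label-flipping attack degrades the poisoned model’s accuracy, which is why integrity controls over training and operational data matter.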

Third, the cybersecurity industry should put forward technical standards that promote best practices for the secure development and deployment of AI systems. The joint report points to the creation of the Industry Specification Group on Securing Artificial Intelligence (ISG SAI) at the European Telecommunications Standards Institute (ETSI) as a good example of how such standards are already taking shape. According to ETSI, the ISG SAI is “the first technology standardization group focusing on securing AI”, and it approaches AI security in three domains: using AI to enhance cybersecurity, protecting organizations from AI-augmented attacks, and securing AI systems themselves from attack.

AI is poised to revolutionize the world’s technological landscape. But as the world’s dependence on machine-learning algorithms to support and augment critical operations grows, so too will the cost and impact of cyber attacks against them. Incentives for malicious cyber actors to target organizations’ AI systems will likely continue to rise in parallel with the growing list of benefits those systems deliver. This reality is only exacerbated by the fact that cybercriminals are now starting to weaponize AI themselves.

Thus, there is much work to be done by industries and governments to protect the world’s AI systems from a constantly evolving digital threat landscape. As one researcher at Princeton University noted, “If machine learning is the software of the future, we’re at a very basic starting point for securing it”. Progress, however, is ongoing, and reports like the joint one by Europol, UNICRI, and Trend Micro are a prime example of how the public and private sectors are collaborating to tackle arguably one of the greatest technological challenges of the 21st century.

Photo: A brain thinking (2021), by BlenderTimer via Pixabay. Public Domain.

Disclaimer: Any views or opinions expressed in articles are solely those of the authors and do not necessarily represent the views of the NATO Association of Canada.

Bryan Roh
Bryan Roh is a Research Analyst for the Cybersecurity and Emerging Threats programme at the NATO Association of Canada. He received a Cybersecurity Analyst diploma from Willis College, and a Bachelor of Arts from the University of Toronto, where he specialized in security issues related to the Asia-Pacific region. He is a former Compliance Director for the G7 Research Group and frequently publishes research work online. Bryan is also a former reservist in the Canadian Armed Forces where he developed an interest in information and national security issues.