Synthetic video content has become a permanent feature of today’s digital ecosystem, subtly shaping perceptions and influencing political and social decisions. Alongside this, hybrid warfare, blending conventional military tactics with cyberattacks and disinformation, has become a key strategy for adversaries. By exploiting the openness and connectivity of modern societies, these methods destabilize governments and deepen internal divisions.
Recognizing the seriousness of this threat, global leaders have increasingly raised the alarm. International summits now feature high-level discussions on AI’s disruptive potential, with policymakers, media experts, and business leaders voicing concern. German Foreign Minister Annalena Baerbock’s stark warning at the Global Media Forum in Bonn encapsulates the issue: “Artificial intelligence makes disinformation cheaper, easier, and more effective.”
In this evolving threat landscape, NATO must work closely with its partners to bolster collective resilience against AI-driven disinformation. A coordinated, forward-looking strategy is essential to addressing these emerging security challenges while safeguarding the democratic principles that such tactics aim to undermine.
How Russia Uses AI to Shape the World
Russian President Vladimir Putin has repeatedly emphasized the strategic importance of artificial intelligence, famously declaring that the nation leading in AI will become “the master of the world.” As of 2025, the Ministry of Digital Development, Communications and Mass Media of the Russian Federation, headed by Minister Maksut Shadayev, has allocated 7.7 billion rubles (roughly C$100.1 million) for the implementation of the federal project ‘Artificial Intelligence,’ part of the National Strategy for the Development of Artificial Intelligence in Russia. This level of investment signals just how central AI has become to Moscow’s long-term strategy.
In line with this vision, Russia has long employed bot farms to amplify state-approved narratives abroad, shaping foreign public opinion and advancing its geopolitical objectives. At the heart of Russia’s propaganda machine is RT (formerly Russia Today), its state-run international media outlet, which plays a central role in disseminating Kremlin-approved narratives abroad. With backing from Russian intelligence services, RT’s influence operations have been supercharged by the integration of artificial intelligence. One of the most sophisticated tools in this arsenal is Meliorator, an AI-driven software package designed to mass-produce fake but ‘authentic’-looking online personas.
In July 2024, a joint cybersecurity advisory from the U.S. Federal Bureau of Investigation (FBI) and Cyber National Mission Force (CNMF), the Canadian Centre for Cyber Security (CCCS), and the Netherlands General Intelligence and Security Service (AIVD) confirmed that RT affiliates had used Meliorator to create fictitious profiles posing as citizens of countries such as Germany, Ukraine, and the United States. These personas were deployed across platforms like X to flood the information space with tailored content, false narratives, and targeted disinformation. The tool enabled Russian operatives to subtly manipulate international public opinion and sow discord across societies, aligning tightly with the Kremlin’s long-term vision for dominance in the information domain.
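The joint advisory does not detail Meliorator’s internals, but the detection problem such tools create can be illustrated in miniature. The Python sketch below is a hypothetical heuristic, not a description of any real pipeline: every account name, date, and threshold is invented for illustration. It flags groups of accounts that were registered on the same day and post on rigidly regular schedules, two signals commonly cited in bot-network research.

```python
from collections import defaultdict
from datetime import date
from statistics import pstdev

# Invented example data: account -> (registration date, posting hour for each
# of seven days). No real platform data is used here.
accounts = {
    "persona_01": (date(2024, 3, 1), [9, 9, 9, 9, 9, 9, 9]),
    "persona_02": (date(2024, 3, 1), [9, 10, 9, 9, 9, 9, 10]),
    "persona_03": (date(2024, 3, 2), [21, 21, 21, 22, 21, 21, 21]),
    "human_user": (date(2021, 7, 14), [8, 13, 21, 11, 18, 7, 23]),
}

def flag_coordinated(accounts, hour_stddev_max=1.0, min_cluster=2):
    """Return registration-date clusters of accounts with rigid posting schedules."""
    by_created = defaultdict(list)
    for name, (created, hours) in accounts.items():
        # A near-zero spread in posting hours is a crude automation signal.
        if pstdev(hours) <= hour_stddev_max:
            by_created[created].append(name)
    # A burst of same-day registrations compounds the suspicion.
    return {d: names for d, names in by_created.items() if len(names) >= min_cluster}

print(flag_coordinated(accounts))
# {datetime.date(2024, 3, 1): ['persona_01', 'persona_02']}
```

In practice, heuristics like this serve only as a first filter; operational systems combine them with content similarity, network structure, and platform metadata.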
Deepfakes: Russia’s AI-Powered Psychological Warfare in Ukraine
Russia’s use of deepfakes as a psychological weapon became impossible to ignore during the war in Ukraine. One of the earliest and most striking examples occurred on March 18, 2022, when a fabricated video of Ukrainian President Volodymyr Zelenskyy appeared online, falsely urging Ukrainian troops to surrender. AI was used to mimic both his appearance and voice, creating a chillingly realistic scene that was broadcast on a hacked Ukrainian TV channel and widely shared on social media, with one version receiving over 120,000 views on X. In the video, Zelenskyy addressed Ukrainian citizens, warning them: “It is only going to get worse, much worse. There will no longer be a tomorrow. I ask you to lay down your arms and return to your families and surrender to Russian forces.”
Though the exact source of the deepfake remains unclear, it was widely linked to pro-Russian actors aiming to undermine morale and destabilize Ukraine from within. Zelenskyy quickly countered the attempt with a live video filmed outside his office, debunking the fake and reassuring the public of his leadership and Ukraine’s continued resistance. Meanwhile, Ukraine’s land forces issued warnings, urging citizens to stay vigilant against manipulated videos designed to spread false calls for surrender.
Over a year later, in November 2023, a more sophisticated AI-generated video emerged, this time featuring Valerii Zaluzhnyi, the then Commander-in-Chief of Ukraine’s Armed Forces, allegedly announcing a ‘coup d’état’ following his supposed resignation. The video quickly gained traction on anonymous Russian Telegram channels and through Kremlin-aligned sources, who presented it as irrefutable evidence of a rift between Zaluzhnyi and President Zelenskyy. The intent was clear: to depict Zelenskyy as an internal threat unfit to lead and to destabilize cohesion within Ukraine.
Unlike the earlier Zelenskyy deepfake, this version was far more refined, with higher production quality that made it difficult for casual news consumers to spot inconsistencies. Though the video contained noticeable flaws, such as awkward transitions and lip-syncing errors, many viewers, especially those unfamiliar with the technology, missed these telltale signs. In an information-saturated environment, where few have the time or desire to examine content closely, the video’s message became far more compelling, especially when it resonated with pre-existing fears or biases.
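The telltale signs described above, abrupt transitions and lip-syncing errors, often register statistically as outlier jumps between consecutive frames. The Python sketch below is a deliberately crude illustration rather than a production detector: the file name “suspect.mp4” is a placeholder, the z-score cutoff of 4 is arbitrary, and it assumes the opencv-python and numpy packages are installed.

```python
import cv2  # assumes opencv-python is installed
import numpy as np

def frame_jump_scores(path):
    """Mean absolute pixel difference between consecutive grayscale frames.

    Splices and poorly blended face swaps often show up as outlier jumps;
    this is a toy signal, not a substitute for a trained detector.
    """
    cap = cv2.VideoCapture(path)
    scores, prev = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None:
            scores.append(float(np.mean(cv2.absdiff(gray, prev))))
        prev = gray
    cap.release()
    return scores

# "suspect.mp4" is a placeholder file name; the cutoff of 4 is arbitrary.
scores = np.array(frame_jump_scores("suspect.mp4"))
z = (scores - scores.mean()) / (scores.std() + 1e-9)
print("Frames with abnormal transitions:", np.flatnonzero(z > 4).tolist())
```

Real detectors rely on far richer cues, such as learned lip-sync models and frequency-domain artifacts, but the principle is the same: forgeries leave measurable statistical traces even when they fool casual viewers.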
The stark improvement in quality between the 2022 Zelenskyy deepfake and the far more convincing Zaluzhnyi video in 2023 underscores the Kremlin’s strategic investment in using AI as a tool of psychological warfare.
AI as a Double-Edged Sword: Ukraine’s Use of Deepfakes in Information Warfare
It is worth noting that in the realm of information warfare, AI is a double-edged sword: it is not exclusive to one side of the conflict. Ukraine, recognizing the power of AI, has mirrored Russia’s use of deepfake technology in its own strategic efforts. Far from being passive, Ukraine has deployed AI in support of its military operations, particularly in the early phases of the Kursk offensive. During this time, the Ukrainian military used AI technology to manipulate public sentiment and disrupt Russian defense strategies. For example, one deepfake featured a fabricated address from the governor of the Kursk region, supposedly urging men over 18 to report to military recruitment offices. This tactic was aimed at sowing confusion and heightening panic among local Russian populations, undermining both morale and the organization of defense efforts.
A Call to Action for NATO Nations in the Age of AI-Driven Disinformation
The evolving threat of AI-driven disinformation is one that Western democracies can no longer afford to ignore. While the challenge is immense, the true issue lies in the fact that democratic systems, with their complex and often slow decision-making processes, struggle to keep pace with these technological advancements.
However, it is important to note that Western democracies have not been entirely passive. In recent years, governments have increasingly recognized the need to address AI-driven disinformation. The European Union, for example, has moved forward with the AI Act, which regulates the use of artificial intelligence and includes provisions aimed at ensuring transparency and mitigating the risks posed by AI-generated disinformation. Similarly, the U.S. has implemented initiatives to bolster the cybersecurity of elections and set up frameworks for countering foreign influence operations, including those utilizing AI. NATO itself has also recognized the growing threat posed by disinformation, adopting its Artificial Intelligence Strategy in October 2021, a step aimed at addressing hybrid threats and the use of AI to manipulate information.
Despite growing awareness, NATO’s approach to information warfare has remained largely reactive. While the alliance has made strides in recognizing the need for preemptive strategies, its efforts often lag behind the evolving tactics of adversaries. For instance, NATO’s response to Russian disinformation only gained momentum after the 2014 annexation of Crimea, prompting the creation of initiatives like the Strategic Communications Centre of Excellence. Yet these measures have primarily addressed existing narratives rather than anticipating future campaigns.
To meet today’s challenges, NATO must move beyond damage control. A proactive posture, one that treats disinformation as a core national security threat, will be essential to defending democratic resilience and societal cohesion. The alliance must develop a comprehensive collective-defense doctrine that treats the information ecosystem as a critical front in the battle for global security. This means improving real-time detection systems, boosting cross-border collaboration, and developing offensive strategies, within ethical frameworks, that allow nations to counter disinformation with equal force.
The evolving nature of AI-driven disinformation forces a fundamental reevaluation of how we approach security in the 21st century. The growing sophistication of AI will continue to challenge public trust and political stability, and only those who can adapt quickly and decisively will be able to withstand this new era of information warfare. The stakes are incredibly high, and the real question now is not whether these tools will be used to undermine democracies, but how effectively nations will respond with their own tools to combat and defend against such threats.
Any views or opinions expressed in articles are solely those of the authors and do not necessarily represent the views of the NATO Association of Canada.
Photo credit: AI and Disinformation (2024) via Flickr. Licensed CC BY 2.0.