In today’s digital age, mis- and disinformation have become pervasive, fueled by advances in AI and deepfake technology, while the centralized nature of social media platforms accelerates their spread, amplifying false narratives to vast audiences with unprecedented speed. NATO recognized this growing threat to democracy at the July 2024 Washington Summit, emphasizing its critical impact in the information age. Russia in particular has earned infamy for funding and employing these tactics, most recently using disinformation to fragment democratic unity over NATO’s support for Ukraine.
In late 2024, the U.S. Treasury Department announced that Iran’s Islamic Revolutionary Guard Corps and a Moscow-based affiliate of Russia’s GRU, its military intelligence agency, had attempted to interfere in the 2024 U.S. election. Both the Iranian and Russian groups were sanctioned for their mis/disinformation efforts targeting electoral processes. Bradley T. Smith, the Treasury’s acting under secretary for Terrorism and Financial Intelligence, stated, “The governments of Iran and Russia have targeted election processes and institutions and sought to divide the American people through targeted mis/disinformation campaigns.” In response, Iran’s representative to the United Nations in New York and the Russian embassy denied any involvement in U.S. election interference.
For its part, the EU adopted its 16th sanctions package against Russia on February 24, 2025, further expanding restrictions to counter Moscow’s evolving tactics. The latest package includes additional measures against individuals and organizations actively engaged in spreading pro-Russian propaganda, both within Europe and globally. Among those sanctioned is the Groupe Panafricain pour le Commerce et l’Investissement, accused of orchestrating covert pro-Russian operations in the Central African Republic and Burkina Faso. An African news agency spreading Russian propaganda and its editor-in-chief, Artem Kureev, were also sanctioned for coordinating disinformation campaigns across Europe and Africa.
Mis/disinformation campaigns have evolved far beyond simple political interference. Today, they increasingly target societal trust, economic stability, and international relations, and their complexity and scale make them harder to trace and attribute, as they often involve a web of shadowy networks, third-party entities, and emerging technologies like AI. One of the most alarming of these developments is the “deepfake,” which the Government of Canada defines as “media manipulations based on advanced artificial intelligence (AI), where images, voices, videos, or text are digitally altered or fully generated by AI.” A notable example of the danger deepfakes pose occurred in the summer of 2024, when a fabricated video surfaced showing a U.S. State Department official falsely claiming that a Russian city was a target for Ukrainian strikes using American weapons. The video spread rapidly across Telegram channels, with Russian state media and government officials amplifying the misleading narrative. The incident highlights how easily false information can be disseminated when sophisticated AI tools are used to create highly convincing fake content.
The rise of AI presents a double-edged sword in the battle against disinformation. On one hand, AI can be harnessed to detect and combat false information; on the other, it can exacerbate the problem by making mis/disinformation campaigns more convincing and harder to detect. Companies like OpenAI, Meta, and Microsoft have acknowledged that their technologies are being weaponized by foreign governments to influence public opinion. Actors in countries such as China, Iran, and Israel have increasingly turned to AI to carry out covert influence campaigns aimed at manipulating perceptions and spreading propaganda. These mis/disinformation networks do not rely on deepfakes alone; they also use tools like ChatGPT to translate and adapt misleading content into different languages, expanding the reach of their narratives across borders. By leveraging AI in this way, they can target a broader international audience, fueling division and distrust among populations that might otherwise remain uninfluenced.
As mis/disinformation networks grow more sophisticated, third-party entities and shadow organizations are able to spread false narratives with ease, amplifying their reach and impact. One such organization, known as “Doppelganger,” has become notorious for its persistent efforts to push Russian-backed mis/disinformation. The group, first identified by the EU DisinfoLab in 2022, has created fake news websites that impersonate respected outlets such as The Guardian and The Washington Post, as well as institutions like NATO. By masquerading as legitimate sources, Doppelganger has been able to publish fabricated articles and influence national headlines, spreading false information designed to manipulate public opinion and sow division. Its operations show how mis/disinformation campaigns can infiltrate trusted spaces and undermine the credibility of established news organizations. As a result, Doppelganger has faced sanctions from both the United States and the European Union, signaling a growing recognition of the need to clamp down on such covert influence operations.
So, what can be done to counter the rise of mis/disinformation in this age of technological warfare? Experts and organizations are stepping up to confront the challenge. Marcus Kolga, a Canadian disinformation expert and human rights advocate, founded DisinfoWatch, an organization dedicated to monitoring and exposing foreign mis/disinformation and influence operations. Organizations like the Samara Centre for Democracy are also shaping the conversation about the health of Canada’s democracy: its SAMbot project tracks online abuse directed at candidates during Canadian elections, providing insight into how mis/disinformation affects democratic processes.
At the global level, multilateral efforts are increasingly important in addressing the scale of these threats. Canada’s leadership of the G7 Rapid Response Mechanism (RRM) shows how international cooperation can be mobilized to counter disinformation. The RRM works to rapidly identify and respond to harmful foreign influence operations across democratic countries, enabling a coordinated response that transcends borders. Organizations like the European Union and NATO are likewise developing frameworks to counter disinformation while strengthening democratic resilience. Through such initiatives, both local and global, we can build a more informed public, strengthen democratic institutions, and protect democracy from the effects of mis/disinformation.
Photo: Rahul Pandit, “Blue and Red Light from Computer” (2019), via Pexels. https://www.pexels.com/photo/blue-and-red-light-from-computer-1933900/
Disclaimer: Any views or opinions expressed in articles are solely those of the authors and do not necessarily represent the views of the NATO Association of Canada.