Centre For Disinformation Studies

How Fakes Become Facts In Three Steps

Disinformation does not always need trolls or hackers. Sometimes it only needs an algorithm.

Build fake sites, get them indexed, and boost them through social media and AI. That simple formula can turn fiction into something that looks like fact. “You can’t outshout disinformation. You have to outsmart it,” says Viktoriia Romaniuk, Deputy Editor of StopFake.org and Director of the Mohyla School of Journalism in Kyiv.

Romaniuk has spent years studying how false narratives take root and spread online. This article unpacks the three-step playbook behind that process: seeding, indexing, and amplification. It also explains how AI and search systems now accelerate each stage. The process starts long before a post goes viral, in the hidden spaces where content is first planted.

Step One: Seeding

Every influence campaign begins with content, and that often means creating dozens or even hundreds of websites that feel familiar and local. These sites publish articles on everyday topics such as city news, cultural events, or energy prices. Nothing dramatic, at least at first. The goal is to occupy space and gain a foothold in the search results before anyone notices. Researchers at Data & Society call this practice filling “data voids”. These are topics that have little reliable coverage, where the first voice to appear often becomes the dominant one. Russia’s CopyCop and Doppelgänger networks, identified by investigators, lean on this method. They imitate real outlets with near-identical layouts and headlines, then quietly twist the narrative. One article might cite a fabricated expert questioning NATO’s goals. Another might point to an invented poll suggesting Canadians are losing faith in Ukraine.
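To make the idea of a data void concrete, here is a deliberately simple sketch in Python. The trusted-domain list, the example search results, and the threshold are invented for illustration; real monitoring tools rely on far richer signals.

```python
# A toy illustration of the "data void" idea (the query results below are
# invented): a topic counts as a void when few established outlets have
# covered it, leaving room for the first coordinated voice to dominate.

TRUSTED_DOMAINS = {"cbc.ca", "reuters.com", "theglobeandmail.com"}  # example list only

def is_data_void(results: list[str], minimum_trusted: int = 3) -> bool:
    """results: domains returned for a search query, most relevant first."""
    trusted_hits = sum(domain in TRUSTED_DOMAINS for domain in results[:10])
    return trusted_hits < minimum_trusted

# A niche local topic with thin reliable coverage versus a heavily reported one.
print(is_data_void(["local-news-247.example", "energy-update.example", "cbc.ca"]))
print(is_data_void(["cbc.ca", "reuters.com", "theglobeandmail.com", "cbc.ca"]))
```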

It all looks ordinary until you look closer.

Step Two: Indexing

Once the content is out there, the machines get to work. Search engines and AI sweep up almost everything they can find. They try to order pages by relevance and by the web of links pointing to them. They do not verify the gathered information in any meaningful way. When many look-alike sites point to each other, the system reads that pattern as a signal of importance. Researchers have shown that this can lift misleading material simply because it is well connected or well optimized.
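To see why interlinking matters, consider a toy PageRank-style calculation over an invented link graph. It is not how any real search engine ranks pages, but it shows the mechanism: a planted article backed by a cluster of cross-linking look-alike sites can end up scoring higher than a genuine article that received only thin coverage.

```python
# A simplified sketch, not any real search engine's algorithm: a PageRank-style
# score over a small hypothetical link graph. One genuine outlet links to its
# article alongside other stories; five coordinated look-alike sites all link
# to each other and to the planted article, inflating its apparent importance.

def pagerank(links, damping=0.85, iterations=100):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:
                continue
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new_rank[target] += share
        rank = new_rank
    return rank

fakes = [f"lookalike_site_{i}" for i in range(1, 6)]
links = {
    # A genuine outlet covers the topic once, among other stories.
    "real_outlet": ["genuine_article", "other_story_1", "other_story_2", "other_story_3"],
    "genuine_article": ["real_outlet"],
    "other_story_1": ["real_outlet"],
    "other_story_2": ["real_outlet"],
    "other_story_3": ["real_outlet"],
    # The planted article links back into the cluster that promotes it.
    "planted_article": fakes,
}
for site in fakes:
    # Every look-alike site links to the planted article and to its peers.
    links[site] = ["planted_article"] + [s for s in fakes if s != site]

scores = pagerank(links)
for page in ("genuine_article", "planted_article"):
    print(f"{page}: {scores[page]:.3f}")
# The planted article scores higher, purely because of coordinated linking.
```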

Artificial intelligence tools that summarize text often learn from the same pool of material. Large language models that power search assistants and Q&A features are a good example. They rely on what has been collected and ranked. If tainted or biased pages sit in that pool, the model is likely to reflect them. Sometimes it will even quote them.

This is where trouble tends to begin. Once a claim shows up in both search results and an AI answer, it starts to feel familiar. People often read that familiarity as credibility. Ask an assistant about NATO deployments or refugee policy, and it may unknowingly echo a manipulated site. Viktoriia Romaniuk calls this “a new stage of interference that teaches AI systems to repeat and legitimize false narratives.”

The effect is a loop that feeds itself. People plant a story. Algorithms collect it. AI repeats it. Then people cite the AI because it sounds calm and neutral. By that point, the lie has a life of its own.

Step Three: Amplification

The final step is the easiest and the most algorithm-friendly. Social platforms reward engagement: clicks, comments, shares, and watch time are treated as signals of quality. Emotional posts, especially those that provoke anger or moral outrage, spark faster reactions, so sensational claims travel farther and faster than sober corrections.

Operators know how to work with this bias. They frame stories in moral or emotional terms, add a spark of novelty to catch attention, and rely on a handful of bots or fake accounts to create an illusion of early interest. Once engagement passes a certain threshold, the platform’s recommendation system takes over and begins showing the content to new audiences. The same emotional triggers drive more reactions, and the cycle sustains itself long after fact checks appear or moderators intervene.
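A rough simulation illustrates the dynamic. The engagement threshold, batch size, and probabilities below are invented, and no platform's recommender is this simple, but the pattern holds: a modest burst of fake early engagement keeps the promotion loop running for a post that would otherwise stall.

```python
import random

# A deliberately simplified simulation, not any platform's actual recommender:
# a post keeps getting promoted while its engagement rate stays above a
# threshold. A small burst of fake early engagement is enough to sustain the
# loop; an identical post without that push stalls after one batch.

def simulate(appeal, fake_engagements=0, threshold=0.08, batch=100, rounds=20, seed=1):
    """appeal: probability that a real viewer engages. Returns total impressions."""
    rng = random.Random(seed)
    impressions, engagements = batch, fake_engagements
    for _ in range(rounds):
        engagements += sum(rng.random() < appeal for _ in range(batch))
        if engagements / impressions < threshold:
            break                      # the recommender stops promoting the post
        impressions += batch           # otherwise it is shown to a new batch
    return impressions

print("organic post:", simulate(appeal=0.03))
print("same post with a small bot push:", simulate(appeal=0.03, fake_engagements=40))
```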

Large troll farms are no longer required. A small coordinated push can flood feeds with comments that sound human. “Not long ago, bots and trolls did the heavy lifting of pushing narratives,” Romaniuk said. “Now AI does it too, and it is much harder to tell whether a comment comes from a real person or a model. The problem is content that is managed by artificial intelligence.” In practice, volume begins to look like importance. Platforms read bursts of engagement as a quality signal. People interpret repetition as truth. Journalists see a crowded topic and assume public interest. Even AI systems retrieve what appears most referenced. Visibility sets the agenda and crowds out corrections.

A Playbook in Motion 

The Doppelgänger operation shows how all three steps connect. It began by cloning real outlets such as The Guardian and Deutsche Welle, changing a few sentences in each article to alter meaning. Those cloned pages linked to one another until search engines began ranking them as credible. The stories spread on Telegram and X, and soon AI summaries began to include them in “balanced” overviews of world events.

By mid-2025, researchers and authorities noted that Russia-aligned networks were broadening their reach to new audiences, and Canadian topics were appearing more often in related ecosystems. One planted story claimed that Alberta was considering joining the United States. It did not go viral, but it left traces that could resurface in search or AI results later. Persistence, not popularity, is the goal.

Why Canada Is Vulnerable

Canada’s information space is open, multilingual, and highly connected. Those qualities make it both admirable and easy to exploit. Journalists, teachers, and policymakers increasingly use AI tools to gather background information. If the sources feeding those tools are polluted, falsehoods can slip into reports or lessons without anyone realizing it. Both the Communications Security Establishment and the Foreign Interference Commission have warned that hostile actors are testing AI-assisted influence operations on Canadian audiences. Romaniuk noted that Canada has also appeared in Russian narratives, including attempts to exploit sensitive issues such as regional divisions and migration. “Disinformation always looks for a pressure point,” she said. “It finds what already hurts and presses harder. AI just makes that process faster.”

What Outsmarting Looks Like

If algorithms can be exploited, they can also be managed with more care.

Romaniuk believes countries like Canada should build small technical teams that can detect cloned websites and trace how AI models cite them. Cooperation with technology companies is essential. “We need to unite and negotiate with the owners of social networks and the companies developing artificial intelligence. We should look for ways to work with Meta and Google,” she said.
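What might such detection look like in practice? One common building block is near-duplicate text matching, sketched below with Python's standard library. The article snippets are invented placeholders; real systems compare full pages, layouts, and metadata at scale.

```python
from difflib import SequenceMatcher

# A minimal sketch of one technique a monitoring team might use: flagging pages
# whose text is a near-copy of a known outlet's article. The snippets below are
# invented placeholders, not real published text.

ORIGINAL = (
    "The summit ended with a joint statement reaffirming support for Ukraine "
    "and announcing new funding for air defence systems."
)
SUSPECT = (
    "The summit ended with a joint statement questioning support for Ukraine "
    "and announcing delays to funding for air defence systems."
)

def similarity(a: str, b: str) -> float:
    """Rough textual overlap between two articles, from 0.0 to 1.0."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

score = similarity(ORIGINAL, SUSPECT)
print(f"similarity: {score:.2f}")
if score > 0.8:
    # High overlap with small targeted edits is the Doppelgänger signature:
    # layout and most sentences match, but key claims are reversed.
    print("possible clone, review the differences")
```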

She also stresses the importance of supporting credible journalism. When trusted local media vanish, people turn to social networks, where manipulation is cheap and efficient. Digital literacy matters too. Teaching citizens to check a source or question an AI summary may sound basic, but it strengthens the first line of defence.

Romaniuk is not against removing dangerous content. “If there is information that poses a threat to the national security of a given country, we should not be afraid to remove it,” she said. The real challenge is to balance transparency with safety in an environment where lies move faster than truth.

A Subtle Battle

Algorithmic manipulation is no longer about loud propaganda or hacked databases. It works quietly, adjusting what people see, not just what they believe.

For Canada, the task is to secure the channels that deliver information before they are turned into tools of distortion. It may sound technical, but at its heart, it is about trust: who we believe and why.

Outsmarting does not mean building louder megaphones. It means designing systems and habits that resist easy manipulation. In the end, every algorithm still learns from us. What it learns next will depend on how seriously we take that responsibility.


Any views or opinions expressed in articles are solely those of the authors and do not necessarily represent the views of the NATO Association of Canada.

Photo retrieved from Pixabay.

Author

Mila Luhova is a Junior Research Fellow at the NATO Association of Canada, where she focuses on disinformation studies, hybrid warfare tactics, and democratic resilience. With a background in nonprofit leadership, journalism, and international advocacy, she has spent over a decade advancing civic empowerment and global cooperation.

Her work is rooted in lived experience. As founder and editor-in-chief of Покоління Ї (Generation Yi), Mila led a team of more than twenty journalists amplifying Ukrainian voices. In the U.S., she helped make history by integrating the Ukrainian language into the Cook County election system, giving thousands of voters the right to engage fully in democracy.

Over the past decade, she has managed humanitarian budgets and built partnerships with governments and international organizations.

Mila writes on complex geopolitical issues, including the use of frozen Russian assets to support Ukraine. She is passionate about foreign affairs, defense, and policies that strengthen democracy and international cooperation.
