
Truth Makes You Free: A Special Report on Countering the Threat of Disinformation

The demonstrated reach of misinformation provides clear evidence that strong countermeasures are needed. Researchers at MIT found that misinformation is more prominent and effective on Twitter than facts, with lies spreading faster than true stories. Vosoughi et al. (2018) used a data set of “rumour cascades” on Twitter between 2006 and 2017, comprising about 126,000 rumours spread by roughly 3 million people, and found that “the top 1% of false news cascades diffused to between 1000 and 100,000 people, whereas the truth rarely diffused to more than 1000 people.”
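
To make the cascade statistic concrete, the sketch below shows one way such figures might be tallied: count the unique users each rumour reaches, then inspect the upper tail of the false cascades. This is a minimal Python illustration with invented toy data, not the study’s actual pipeline; every identifier and value here is an assumption.

```python
from collections import defaultdict

# Toy (cascade_id, veracity, user_id) events standing in for the Twitter
# rumour-cascade data; the real study covered roughly 126,000 cascades.
events = (
    [("c1", "false", u) for u in range(1500)]
    + [("c2", "true", u) for u in range(800)]
    + [("c3", "false", u) for u in range(90000)]
)

reach = defaultdict(set)  # cascade_id -> unique users reached
veracity = {}             # cascade_id -> "true" or "false"
for cid, label, user in events:
    reach[cid].add(user)
    veracity[cid] = label

# Sort cascade sizes separately for true and false rumours.
sizes = {
    label: sorted(len(users) for cid, users in reach.items()
                  if veracity[cid] == label)
    for label in ("true", "false")
}

# The headline finding concerns the upper tail: the top 1% of false
# cascades reached between 1,000 and 100,000 people.
false_sizes = sizes["false"]
cutoff = false_sizes[int(len(false_sizes) * 0.99)]
print("false cascade sizes:", false_sizes, "| top-1% cutoff:", cutoff)
print("true cascade sizes:", sizes["true"])
```

Measured this way, a cascade’s “size” is simply its unique audience, which is how the 1,000-to-100,000 range above should be read.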


Recent examples provide further evidence of the problem misinformation poses, including the heated encounter between United States President Donald Trump and CNN reporter Jim Acosta at a press conference immediately after the 2018 U.S. Midterm Elections. Acosta’s questions resulted in his being temporarily banned from the White House, a ban lifted only after CNN threatened a lawsuit. White House Press Secretary Sarah Sanders attempted to justify the ban by sharing a video of the encounter on Twitter that experts have claimed was doctored, demonstrating how tampered footage can be used to justify the suppression of robust journalism.


The 2018 U.S. Midterm Elections in early November featured inaccurate and outright false information spread to influence voter opinion. Facebook blocked a number of accounts before the Midterms over concerns they were “engaged in coordinated inauthentic behaviour” by foreign proxies seeking to manipulate voter decisions. In one instance, Georgia gubernatorial candidate Brian Kemp claimed that members of the Black Panthers were intimidating voters at the polls, based on a misleading photo shared on Breitbart that had actually been taken a week earlier in a different context.


Another example during the Midterms was the false claim that George Soros was directly influencing vote counts through ownership of the voting machine company Smartmatic. Soros does not own the company; the only connection between the two is that Smartmatic’s chairman sits on the board of Soros’s Open Society Foundations. Smartmatic voting machines were used in neither the 2018 U.S. Midterm Elections nor the 2016 Presidential Election.


The prevalence of misinformation in recent years has been heavily discussed in contexts like the 2016 U.S. Presidential Election, the United Kingdom’s Brexit referendum, and the 2017 French Presidential Election. More recent examples include targeted efforts against the Canadian Armed Forces during NATO operations in Latvia and the 2018 referendum in Macedonia.


“Fake news” is often used interchangeably with similar but distinct concepts like misinformation and disinformation. Fake news generally refers to fabricated news stories lacking verifiable evidence in the form of facts, sources, quotes, or other journalistic elements. Misinformation is inaccurate or outright false information that is “mistakenly or inadvertently created or spread: the intent is not to deceive.” Disinformation is information deliberately falsified and spread “to influence public opinion or obscure the truth.” Differences of intent, medium, and objective are crucial for distinguishing these practices: they determine how deception serves an assailant’s advantage and whether that advantage was the intended end.


A Brief History of Deception


Although the novel mediums of digital platforms and social networks are the focus of most recent analyses of the spread of false information, the use of disinformation for strategic and political objectives has a long history. Indeed, the word “disinformation,” or “dezinformatsia,” was reportedly coined by Joseph Stalin, General Secretary of the Communist Party of the Soviet Union, who gave it a French-sounding form to support his claim that such deception tactics originated in the West.


Disinformation examples dating back to the 18th century include the Potemkin villages created by Grigory Potemkin for Catherine the Great to view during her 1787 tour of newly conquered Crimea. The structures were built to look like magnificent buildings and impress the Empress as she passed, though recent historical work argues the narrative was at least partially fabricated; the term has since gained a mythos of its own, referring to “any deceptive or false construct, conjured often by cruel regimes, to deceive both those within the land and those peering in from outside.”


More ancient examples of disinformation include one from the reign of the Egyptian pharaoh Ramses II, following a battle against the Hittites that ended in a stalemate. The lack of victory was problematic for the pharaoh because entry into the afterlife required being a great warrior who won battles, so he had his monument at Luxor inscribed with depictions of a great victory that never happened. This ancient example demonstrates deception aimed not at human foes but at the gods themselves, in an attempt to gain access to the afterlife.


Tackling Disinformation


The persistence of disinformation has led many NATO member states to act, in varying degrees, to counter the problem. Actions have ranged from launching dedicated websites and publishing reports to passing legislation.


Belgium has launched a webpage dedicated to informing the public and encouraging participation in countering the problem. Italy has likewise created a webpage fashioned as an online portal for reporting specific instances of disinformation. The Italian portal has been criticized for lacking specific definitions and for relying on official press releases targeting “false and tendentious news,” thereby granting police significant power to determine what content is acceptable online.


The European External Action Service’s East StratCom Task Force runs the “EU vs. Disinfo” webpage, dedicated to countermeasures addressing and responding specifically to pro-Kremlin disinformation. The webpage includes a weekly disinformation review that has catalogued more than 3,800 cases of disinformation since September 2015.


Germany, France, the U.S., and Canada have all pursued legislative responses to the problem. In Germany, a hate speech law requires the removal of “obviously illegal” online posts within 24 hours, with noncompliance punishable by fines of up to €50 million. The law has been criticized as a form of censorship after it led to the blocking of the Twitter account of a German satirical magazine, criticism that has prompted suggestions the law be revised.


Strong censorship is neither an appropriate nor an effective response to the spread of disinformation. Countermeasures must strike a balance that avoids extreme censorship and surveillance. Eroding free speech and mounting over-zealous surveillance can produce a so-called “chilling effect” on the dissemination of real news, a cost that would outweigh the proposed benefits of countering disinformation.


France’s new laws target the channels through which disinformation spreads, including social media networks and sharing platforms, and provide for regular meetings between government and platform leadership. Public education programs have also been expanded, and the Higher Audiovisual Council (CSA) has been given the power to prevent or terminate broadcasting services influenced by other states, such as RT and Sputnik.


The U.S. Federal Government has proposed the “Honest Ads Act,” which would require social networking websites like Facebook to keep records of the ads they run and to disclose information about each buyer, including who was targeted and what rates were charged. In addition to hearing testimony from social network leadership, the U.S. has also invested in assessing near-future threats such as deepfakes, which challenge consumers’ ability to distinguish real from fake communications online.
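
As a rough illustration of the kind of record keeping the Honest Ads Act envisions, the sketch below models a single ad-disclosure entry. The Act does not prescribe a schema; the class and field names here are hypothetical, chosen only to mirror the disclosures described above (buyer, targeting, and rates).

```python
from dataclasses import dataclass, asdict
from datetime import date

# Hypothetical disclosure record: the field names are assumptions, not
# the statute's language; they mirror the buyer, targeting, and rate
# information the Honest Ads Act would require platforms to release.
@dataclass
class PoliticalAdRecord:
    ad_id: str
    buyer: str                # who paid for the ad
    target_description: str   # the audience the ad was aimed at
    rate_charged_usd: float   # what the platform charged
    first_shown: date
    last_shown: date

record = PoliticalAdRecord(
    ad_id="ad-0001",
    buyer="Example PAC",
    target_description="voters aged 18-35 in swing districts",
    rate_charged_usd=1250.00,
    first_shown=date(2018, 10, 1),
    last_shown=date(2018, 11, 6),
)
print(asdict(record))  # what a public disclosure entry might expose
```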


The Canadian Response


The Liberal Government of Canada has proposed the Elections Modernization Act (Bill C-76) to amend the previous Conservative Government’s Fair Elections Act. Its central changes limit foreign spending on elections, boost accessibility and participation in democracy, and end restrictions previously placed on Elections Canada that discouraged voter turnout. These efforts also include reinstating voter information cards as valid identification.


The goal is to make it easier for Canadians to vote and harder for foreign entities to influence elections, especially through targeted campaign funding. These changes align with the findings of a report released by the Public Policy Forum, which recommended that third-party spending on electoral communications be reported, expanding current regulations that cover only advertising spending.


In Canadian election law, “third parties” are groups other than political parties and candidates, typically advocacy organizations, that Canadians donate to and whose funds are then used to influence voting. Third parties reportedly played a pivotal role in the Conservative defeat during the 2015 election campaign. Additionally, amendments to Bill C-76 require social network platforms to create a registry of digital advertisements by political parties and third parties so that the ads remain visible to the public for two years.
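
The registry’s two-year visibility requirement can be read as a simple retention rule, sketched below. The 730-day figure and the function name are illustrative assumptions rather than statutory text.

```python
from datetime import date, timedelta

# Two years approximated as 730 days; an assumption, not the bill's wording.
RETENTION = timedelta(days=730)

def must_remain_visible(published: date, today: date) -> bool:
    """True while an ad is still inside its public-retention window."""
    return today <= published + RETENTION

print(must_remain_visible(date(2019, 10, 1), date(2021, 5, 1)))  # True
print(must_remain_visible(date(2019, 10, 1), date(2022, 1, 1)))  # False
```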


The Communications Security Establishment (CSE) released a report in June 2017 on current disinformation threats, covering those faced during the 2015 Canadian Federal Election and those anticipated for the upcoming 2019 Federal Election. The report claims that Canada’s 2015 Federal Election was targeted by “low sophistication cyber threat activity” characterized by “single, simple cyber capabilities” that lacked a “lasting effect.” The CSE found no evidence of foreign state involvement in targeting Canada’s democratic process during the 2015 election; the hacktivist group Anonymous was identified as a likely adversary.


The CSE expects “multiple hacktivist groups” to deploy cyber capabilities against the 2019 Canadian Federal Election to influence the democratic process, with most attempts amounting to “low-level sophistication,” though it concedes some “will be well-planned and target more than one aspect of democratic process.” Because Canadian elections rely predominantly on paper-based processes, the CSE concludes that political parties, politicians, and the media are “more vulnerable to cyber threats and related influence operations than the elections activities themselves.” Nevertheless, democratic processes are likely to be targeted through suppressing voter turnout, tampering with election results, stealing voter information, manipulating political parties and candidates, and using various media platforms to “spread disinformation and propaganda, and to shape the opinions of voters.”


Canadian Defence Minister Harjit Sajjan has outlined the need to “further educate our citizens about the impact of fake news,” claiming Canadian voters will be targeted by fake news and online cyberattacks because Canada stands up for human rights and an international rules-based order. Minister Sajjan also proposed the formation of the Canadian Centre for Cyber Security within the CSE, the agency responsible for foreign signals intelligence collection.


From the Ground Up


Responses to disinformation do not come only from government: civil society has played an important role in the spread of this problem, and perhaps has one in its solution. Grassroots organizations and individual activists have used direct action to counter the influence of disinformation. Examples include Bellingcat, which applies open-source and social media investigation to global cases and assembles educational materials for people pursuing their own investigations.


StopFake is a journalistic organization formed in March 2014 to refute misinformation spread about the crisis in Ukraine; it later became a hub of information analyzing Kremlin propaganda. StopFake has adapted the concept of fact-checking to counter foreign propaganda campaigns. The organization aims to discredit false claims made by online sources by systematically “testing falsification claims made in news reports” with techniques that “cannot prove a story true but might, in the view of StopFake, prove it fake.”


These organizations are part of a broader trend of incorporating real-time fact-checking into political discourse, with media organizations relying on in-house fact-checking professionals who take it upon themselves to correct and repel the spread of misinformation. Notable examples include PolitiFact, FactCheck.org, and Snopes.


However, an important distinction between such fact-checking organizations and StopFake is that “American fact-checking was designed to keep politicians honest, not to counter the systemic and coordinated work of a state-backed propaganda machine,” which StopFake counters by evaluating journalistic works and looking “for misleading stories based on fabricated evidence.”


A study investigating the most effective measures for countering misinformation recommended creating conditions that favour scrutiny and counterarguments, together with supplying new, detailed information. The overall effectiveness of such efforts is difficult to determine, especially since fake news has proved a highly profitable business: some purveyors made significant financial gains during the 2016 U.S. Presidential Election, and reports allege they are gearing up for the 2020 election.


Crucially, governments must work with civil society and media organizations, including both traditional outlets and social network platforms, towards interconnected countermeasures. The rise of these grassroots movements demonstrates that actors are willing to take up the fight against disinformation. Coalitions between governments, media organizations, civil society, and independent actors must be encouraged and supported. Together, these distinct nodes can resist power-seekers who abuse digital interfaces to sculpt a reality of manufactured narratives.


Meeting the so-called post-fact challenge requires emphasizing banal truth over sensational or profitable fiction. The challenge is not new. Some have argued that what we call fake news bears a striking resemblance to historical mythologies that provided our ancestors with collective meaning in an otherwise cruel and unforgiving world. According to historian Yuval Noah Harari, “since the Stone Age, self-reinforcing myths have served to unite human collectives,” and humans “conquered this planet thanks above all to the unique human ability to create and spread fictions.” Cooperation has thus always rested on a delicate balance between truths and falsities. The challenge is determining how much fiction should be tolerated in the modern age, however cathartic or comforting it may be.


Attractive myths have always offered the comfort to which John F. Kennedy alluded when he said that “the great enemy of truth is very often not the lie – deliberate, contrived, and dishonest – but the myth – persistent, persuasive, and unrealistic,” which lets us enjoy “the comfort of opinion without the discomfort of thought.” Opposing fictions with reality requires determination and support for those committed to the principles of truth and fact-based knowledge.


Governments must work with civil society, media organizations, and other similarly invested entities. Investigative journalism, grassroots fact-checking, and independent anti-disinformation movements are where such support should be directed. No matter the profitability of disinformation, whether political, financial, or otherwise, the focus must be on ensuring that the embers of truth are never extinguished.


Featured Photo: “Graphic Fake News Website” (2016) by VOA News via Wikimedia Commons. Public Domain.


Disclaimer: Any views or opinions expressed in articles are solely those of the authors and do not necessarily represent the views of the NATO Association of Canada.


Author

  • Ryan Atkinson

    Ryan Atkinson is Program Editor for Cyber Security and Information Warfare at the NATO Association of Canada. Ryan completed a Master of Arts in Political Science from the University of Toronto, where his Major Research Project focused on Russia’s utilization of information and cyber strategies during military operations in the war in Ukraine. He worked as a Research Assistant for a professor at the University of Toronto focusing on the local nature of Canadian electoral politics. He graduated with an Honours Bachelor of Arts in Political Science and Philosophy from the University of Toronto. Ryan conducted research for Political Science faculty that analyzed recruitment methods used by Canadian political parties. Ryan’s research interests include: Cyber Threat Intelligence, Information Security, Financial Vulnerability Management, Disinformation, Information Warfare, and NATO’s Role in Global Affairs.
