
Sorting Fact From Fiction: Trolls, Bots, and the Erosion of Informed Debate

You read an article and agree with the author’s argument. Scrolling down, you read the comments and connect with one that exposes several logical holes in the article. Ideas you had not previously considered change your perspective. Did a human influence your thinking? The quickening pace of technological change is making the difference between humans and Internet bots ever harder to discern, and the two will only blur further as technology continues to progress.

The Internet Research Agency (IRA) is a Russian company based in St. Petersburg that has engaged in political influence operations, propagating perspectives online to shape international opinion. Its targets have included the 2016 U.S. Presidential Election, the U.K. Brexit referendum, and the 2017 French Presidential Election.

A former IRA employee described working for the organization as being like a character in George Orwell’s 1984: “a place where you have to write that white is black and black is white […] in some kind of factory that turned lying, telling untruths, into an industrial assembly line.” He worked in the “commenting department” and failed to transfer to the “Facebook department,” which required “you to know English perfectly” in order to “show that you can represent yourself as an American.”

Employees were given lists of topics, and three of them would work together on a single news item, coordinating their posts to give the illusion of discussion or debate and “to make it look like we were not trolls but real people. One of the three trolls would write something negative about the news, the other two would respond, ‘You are wrong,’ and post links and such. And the negative one would eventually act convinced.” Documents show the IRA expected employees to manage 10 Twitter accounts, each gaining at least 2,000 followers and tweeting at least 50 times a day.

Twitter has reported that over 50,000 Russia-linked accounts were used to tweet automated material about the 2016 U.S. election. Initial reports claimed that 677,775 Americans had interacted with these accounts, a figure later revised upward to 1.4 million people who unsuspectingly interacted with accounts associated with Russian proxies. Twitter also announced it had identified 3,813 accounts associated with the IRA that posted 175,993 tweets during the U.S. Presidential Election campaign.

Individuals posing as Americans or citizens of other targeted states worked to sculpt public opinion on specific events in line with Kremlin perspectives. Networks of automated Internet bots (“bot” being short for software robot, a program that runs tasks online) operated with a similar objective: influencing, shaping, and changing opinions to align with the Kremlin’s. Social bots are a subcategory of Internet bots: accounts controlled by software that uses algorithms to generate content and communicate with human accounts.

Not all bots are malicious; some serve useful purposes, such as spreading news, coordinating volunteer activities, and helping volunteers edit Wikipedia pages. The malicious use of bots has nonetheless increased in recent years, to “manufacture fake grassroots political support […] promote terrorist propaganda and recruitment […] manipulate the stock market […] and disseminate rumours of conspiracy theories.” Notably, it is already becoming difficult to tell the difference between human and bot activity online, and the distinction will only blur further with technological advances such as the application of artificial intelligence and deep learning to automated online communication.

“Political bots” are a further subcategory of social bots, deployed to influence political discussion on social media through algorithms. Samuel C. Woolley and Philip N. Howard of the Oxford Internet Institute argue they are “written to learn from and mimic real people so as to manipulate public opinion across a diverse range of social media and device networks.” Woolley and Howard use the term “computational propaganda” to describe the “use of algorithms, automation, and human curation to purposefully distribute misleading information over social media networks.” Alessandro Bessi and Emilio Ferrara use specific indicators to differentiate humans from bots, including default account settings, missing geographical metadata, retweeting more than generating original content, and having “less followers and more followees (sic).” A simple heuristic combining these indicators is sketched below.
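To make these indicators concrete, here is a minimal Python sketch of how they might be combined into a crude bot score. The account fields (default_profile, has_geo_metadata, and so on) are hypothetical illustrations, not drawn from any particular platform’s API, and real detection systems rely on far richer features.

# Illustrative heuristic bot scoring based on the indicators described above.
# Field names are hypothetical and not tied to any real platform API.
from dataclasses import dataclass

@dataclass
class Account:
    default_profile: bool   # account still uses default settings
    has_geo_metadata: bool  # posts carry any geographical metadata
    tweets: int             # original tweets authored
    retweets: int           # retweets of others' content
    followers: int
    followees: int          # accounts this account follows

def bot_score(acct: Account) -> float:
    """Return a crude 0..1 score; higher means more bot-like."""
    signals = [
        acct.default_profile,               # default account settings
        not acct.has_geo_metadata,          # no geographical metadata
        acct.retweets > acct.tweets,        # amplifies more than it creates
        acct.followees > acct.followers,    # fewer followers than followees
    ]
    return sum(signals) / len(signals)

suspect = Account(default_profile=True, has_geo_metadata=False,
                  tweets=40, retweets=960, followers=150, followees=2000)
print(f"Bot score: {bot_score(suspect):.2f}")  # prints 1.00 for this example

An account scoring high on all four signals is not proof of automation, only a candidate for closer inspection.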

There are also clear distinctions between human accounts and bot networks in their posting schedules, with “repetitive and nonsensical” content “published at non-human speeds,” specifically “to amplify divisive content produced by human-curated accounts, state-controlled media (e.g. RT), or other proxies, and to attack specific individuals or groups.” Discerning such distinctions will become more difficult as bots incorporate more advanced technologies, especially as artificial intelligence and machine learning enable bot activity to “adapt to new contexts, suggest relevant original content, interact more sensibly with humans in proscribed contexts, and predict human emotional responses to that content.” A simple timing check along these lines is sketched below.
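As an illustration of what “non-human speeds” can look like in practice, the following Python sketch flags a posting history whose gaps between posts are implausibly short or implausibly regular. The thresholds and timestamps are invented for the example; any serious analysis would calibrate them against real data.

# Illustrative cadence check: flag posting histories whose inter-post gaps
# are implausibly fast or implausibly regular. All data here is invented.
from datetime import datetime, timedelta
from statistics import mean, pstdev

def cadence_flags(timestamps, min_gap_s=5.0, min_jitter_s=2.0):
    gaps = [(b - a).total_seconds() for a, b in zip(timestamps, timestamps[1:])]
    return {
        "mean_gap_s": round(mean(gaps), 1),
        "gap_jitter_s": round(pstdev(gaps), 1),
        "too_fast": mean(gaps) < min_gap_s,          # faster than a person plausibly types
        "too_regular": pstdev(gaps) < min_jitter_s,  # near-identical spacing suggests scheduling
    }

start = datetime(2016, 10, 1, 12, 0, 0)
posts = [start + timedelta(seconds=3 * i) for i in range(50)]  # one post every 3 seconds
print(cadence_flags(posts))  # -> too_fast and too_regular both True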

Samuel C. Woolley and Douglas R. Guilbeault of the Oxford Internet Institute have found that bots achieved “positions of high centrality within the retweet network, as evidence of their capacity to control the flow of information during the election.” Such central positions were reached through “retweeting others and being retweeted,” thereby attaining “positions of measurable influence during the 2016 U.S. election.” Because people retweeted posts made by bots, bots were able to shape “meaningful political discussion over Twitter, where pro-Trump bots garnered the most attention and influence among human users.” The notion of centrality in a retweet network is illustrated below.
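To give a sense of what “centrality within the retweet network” means, here is a toy Python sketch using the networkx library: each edge records that one account retweeted another, and accounts are ranked by how widely they are retweeted. The edge list is invented example data; the studies cited above use far larger graphs and richer centrality measures.

# Toy retweet network: an edge u -> v means account u retweeted account v.
# In-degree centrality then approximates how widely an account is amplified.
import networkx as nx

retweets = [
    ("human_1", "bot_A"), ("human_2", "bot_A"), ("human_3", "bot_A"),
    ("bot_B", "bot_A"), ("bot_A", "human_4"), ("human_4", "human_1"),
]

G = nx.DiGraph()
G.add_edges_from(retweets)

centrality = nx.in_degree_centrality(G)  # fraction of other accounts retweeting each node
for account, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{account:8s} {score:.2f}")   # bot_A tops the ranking in this toy example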

A major feature of the problem is that these tactics have been shown to manipulate “public opinion, choke off debate, and muddy political issues,” specifically targeting “sensitive political movements when public opinion is polarized.” In political campaigning, bots “have become an acceptable tool of campaigners,” according to Woolley and Guilbeault.

Dr. Howard adds that “the political consultants who work [on U.S. campaigns] go off to London and Ottawa and Canberra, and they ply their trade there.” According to Howard, innovations arise through experimentation in earlier campaigns, such as Ukraine in 2014 and the U.K. and U.S. in 2016, and then “carry over into other democracies. Very soon we’ll see the same kind of techniques in the next Canadian election. We’ve already seen them in other democracies.”

Studying political influence operations requires understanding how the technologies involved are evolving and rapidly changing the ways publics can be manipulated to serve foreign interests. The use of AI will intensify this trend, as will so-called “deep fakes,” which use “facial mapping and artificial intelligence to produce videos that appear so genuine it’s hard to spot the phonies.” A computer is fed large quantities of images and audio of a person, and an algorithm learns “how to mimic the person’s facial expressions, mannerisms, voice and inflections. If you have enough video and audio of someone, you can combine a fake video of the person with a fake audio and get them to say anything you want.”

Future research must aim to understand modern political influence operations, basing policy development both on the technologies already applied and on how advancing technologies will be incorporated into future operations. To counter operations in which discerning fact from fiction may become nearly impossible, governments and information providers must strengthen press freedoms, independent fact-checking, and anti-disinformation institutions within civil society. Fighting propaganda with counter-propaganda must be avoided, as it only intensifies the informational security dilemma.

Featured Image: Calm Man is Reading Fire Newspaper | March 22, 2018 By Elijah O’Donell on Unsplash.


Disclaimer: Any views or opinions expressed in articles are solely those of the authors and do not necessarily represent the views of the NATO Association of Canada.

Ryan Atkinson
Ryan Atkinson is Program Editor for Cyber Security and Information Warfare at the NATO Association of Canada. Ryan completed a Master of Arts in Political Science from the University of Toronto, where his Major Research Project focused on Russia’s utilization of information and cyber strategies during military operations in the war in Ukraine. He worked as a Research Assistant for a professor at the University of Toronto focusing on the local nature of Canadian electoral politics. He graduated with an Honours Bachelor of Arts in Political Science and Philosophy from the University of Toronto. Ryan conducted research for Political Science faculty that analyzed recruitment methods used by Canadian political parties. Ryan’s research interests include: Cyber Threat Intelligence, Information Security, Financial Vulnerability Management, Disinformation, Information Warfare, and NATO’s Role in Global Affairs.