Procurement

A Propaganda Machine for the Internet Age, Part IV: The Online Army

“The internet is among the few things that humans have built that they don’t understand.” It is “the largest experiment involving anarchy in history. Hundreds of millions of people are, each minute, creating and consuming an untold amount of digital content in an online world that is not truly bound by terrestrial laws.” One might think, at first glance, that this quote comes from a staunch ideologue who wants nothing more than to dismantle the internet and continue living with a tinfoil hat affixed permanently to their head, but it was written by none other than Eric Schmidt, Google’s executive chairman, and Jared Cohen, in the opening lines of their book, “The New Digital Age.”


Parts one, two, and three of this series covered how a group of conservative investors successfully leveraged pioneering psychographic research to build a computer model that used Facebook and other metadata sources to map out personality profiles for every voting-age American. They formed a company called Cambridge Analytica, headed by Alexander Nix and funded primarily by Robert Mercer, and set out to influence the outcomes of the Brexit referendum and the 2016 U.S. presidential election. Cambridge Analytica gave each of these campaigns the ability to disseminate political messages through so-called Facebook “dark posts” tailored to appeal to each recipient’s personality profile. However, these were not the only tools available to the Brexit and Trump campaigns.


Samuel Woolley, Director of Research at the University of Oxford’s Computational Propaganda Project, has dedicated his career to studying the role of bots in online political organization. The average bot, according to Woolley, is a mindless computer script controlling a Twitter account, often called a “Twitter egg” because it retains the platform’s default egg profile picture. These Twitter eggs are programmed to retweet accounts that promote a particular political viewpoint. They also auto-respond to Twitter users who use certain keywords or hashtags, bombarding them with a combination of pre-written slurs, insults, and threats.
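
To make the mechanics concrete, here is a minimal sketch of the kind of low-end bot Woolley describes, assuming a hypothetical social-media client: the `Client` class stands in for a real platform API wrapper, and the account names, trigger terms, and canned replies are invented.

```python
import random
import time

# Illustrative sketch only: "Client" is a hypothetical stand-in for a real
# social-media API wrapper, and all names and phrases below are invented.
TARGET_ACCOUNTS = ["@partisan_outlet", "@campaign_hq"]  # accounts to amplify
TRIGGER_TERMS = ["#Election2016", "rigged"]             # terms to pounce on
CANNED_REPLIES = ["Wake up!", "Fake news.", "Pathetic."]

class Client:
    """Hypothetical API client; a real bot would call a platform's REST API."""
    def timeline(self, account):   # latest posts published by an account
        return []
    def search(self, term):        # recent posts matching a keyword or hashtag
        return []
    def retweet(self, post):
        print(f"RT: {post}")
    def reply(self, post, text):
        print(f"reply to {post}: {text}")

def run_bot(client):
    while True:
        # 1. Amplify: retweet everything the target accounts publish.
        for account in TARGET_ACCOUNTS:
            for post in client.timeline(account):
                client.retweet(post)
        # 2. Harass: auto-reply to anyone using a trigger keyword or hashtag
        #    with a pre-written insult, as Woolley describes.
        for term in TRIGGER_TERMS:
            for post in client.search(term):
                client.reply(post, random.choice(CANNED_REPLIES))
        time.sleep(60)  # naive fixed schedule; real bots randomize timing

if __name__ == "__main__":
    run_bot(Client())
```

Even this crude loop captures the two behaviours Woolley identifies: amplification of friendly accounts, and automated harassment keyed to keywords and hashtags.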


High-end bots, on the other hand, are operated by real people who take on multiple false online personas, each with its own unique identifiers, personality traits, and circle of friends or followers (who may or may not be bots themselves). These accounts respond to other social media users in an attempt to change their opinions. Woolley estimates that a single person working full-time could build and maintain up to 400 fake accounts on Twitter, or 10-20 accounts on Facebook. These personas are so well maintained that they are difficult to distinguish from genuine users; even Facebook and Twitter struggle to detect them.


Large networks of high-end and low-end bots are called botnets, and many are recycled from one political campaign to the next, for the right price. For example, during the Brexit campaign, Woolley’s researchers noticed that a network of bots formerly used to manipulate the dialogue around the Israeli-Palestinian conflict was being redeployed to support the Leave.EU campaign. Russia’s bot army has come under particular scrutiny since a CIA special report revealed that Russia had been working to influence the election in Trump’s favour. Philip Howard of the University of Washington co-authored a 2015 paper on the use of political bots in Venezuela. In it, he and his co-authors wrote that “the Chinese, Iranian, and Russian governments employ their own social-media experts and pay small amounts of money to large numbers of people to generate pro-government messages.”


Woolley’s research has shown that Trump’s campaign relied heavily on bots to spread its political messages across social media. “The use of automated accounts was deliberate and strategic throughout the election, most clearly with pro-Trump campaigners and programmers who carefully adjusted the timing of content production during the debates, strategically colonized pro-Clinton hashtags, and then disabled [their] activities.” This was especially evident on election day, when Trump’s bots outnumbered Clinton’s five to one.


Woolley is deeply troubled by the prevalence of these bots: “In Western democracies, bots have often been bought or built by subcontractors of main digital contractor teams because there is less necessity to report these deeper layers of campaign satellite workers to election commissions.” His research has revealed an international network of governments, consulting firms (often with close ties to government), and individuals responsible for building and maintaining these bots, which are used to amplify the political messages favoured by their employer, undercut political opponents, and silence dissenting views.


What all of this ultimately means is that the future of politics will no longer be about who has the better candidate or the deepest pockets; it will be about which campaign can better leverage the big data it has access to. In effect, it will be a “battle of automated behaviour change” waged entirely over the internet. It will also mean the continued propagation of online political “echo chambers” and increased polarization among voters, as there is less and less dialogue between groups. Furthermore, as bots grow more prevalent and sophisticated, it is entirely possible that a person perusing their favourite political website or Facebook group could be completely unaware that it has no other human members. Instead, it would be filled with dozens or even hundreds of bots designed to make the user feel at home and their opinions validated. All the while, the user would be trapped in their own “ideological matrix.”


According to research conducted by Jonathan Albright, assistant professor of communications at Elon University, fake news websites are not owned or operated by any single entity. What they have in common is that each has successfully gamed Google’s search-ranking algorithms through search engine optimization. As a result, these fake news sites surface more prominently whenever an internet user Googles an election-related term.
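
Google’s actual ranking algorithm is not public, but its published 1998 precursor, PageRank, illustrates why the tactic works: a cluster of sites that link densely to one another lifts every member’s score. The toy computation below, with invented site names, is a sketch of that intuition rather than of Google’s real system.

```python
# Toy PageRank over an invented link graph: three fake-news sites link to one
# another, while a lone blog links out but receives no links in return.
links = {
    "fake-news-a": ["fake-news-b", "fake-news-c"],
    "fake-news-b": ["fake-news-a", "fake-news-c"],
    "fake-news-c": ["fake-news-a", "fake-news-b"],
    "lone-blog":   ["fake-news-a"],
}

def pagerank(links, damping=0.85, iterations=50):
    n = len(links)
    ranks = {page: 1.0 / n for page in links}  # start with equal scores
    for _ in range(iterations):
        new_ranks = {}
        for page in links:
            # Sum the rank flowing in from every page that links here,
            # split evenly across that page's outbound links.
            inbound = sum(ranks[src] / len(outs)
                          for src, outs in links.items() if page in outs)
            new_ranks[page] = (1 - damping) / n + damping * inbound
        ranks = new_ranks
    return ranks

for page, score in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
    print(f"{page:<12} {score:.3f}")
```

Running it, the three interlinked sites all outrank the isolated blog: the cluster’s members manufacture one another’s apparent importance.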


Albright also reveals how this network of websites is being used by Cambridge Analytica to track voters’ online movements and refine its existing personality-targeting models: “I scraped the trackers on these sites and I was absolutely dumbfounded. Every time someone likes one of these posts on Facebook or visits one of these websites, the scripts are then following you around the web. And this enables… companies like Cambridge Analytica to precisely target individuals… and to send them highly personalized political messages.” He maintains that fake-news networks, Facebook, and personality profiling will only become more tightly integrated.
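
As a rough illustration of the kind of tracker audit Albright describes, the sketch below fetches a page and lists the third-party domains whose scripts it loads, each of which can observe the visit. It assumes the `requests` and `beautifulsoup4` packages; the URL is a placeholder, not one of the sites Albright studied.

```python
# List the third-party domains whose scripts a page loads: a crude proxy for
# "who gets told" when someone visits. Placeholder URL; illustrative only.
from urllib.parse import urlparse

import requests
from bs4 import BeautifulSoup

def third_party_scripts(url):
    page_domain = urlparse(url).netloc
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    trackers = set()
    for tag in soup.find_all("script", src=True):
        src_domain = urlparse(tag["src"]).netloc
        # A script served from another domain can report the visit elsewhere.
        if src_domain and src_domain != page_domain:
            trackers.add(src_domain)
    return sorted(trackers)

if __name__ == "__main__":
    for domain in third_party_scripts("https://example.com/"):
        print(domain)
```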


More troubling, however, is Albright’s research into automated AI scripts that create and distribute large quantities of YouTube videos about current news events. These programs react to topics trending on Facebook and Twitter by pairing images relevant to the topic with a computer-generated voiceover, and can create up to 80,000 videos across 19 channels in a matter of days.
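
The article does not name the software involved, but the overall shape of such a pipeline can be sketched with placeholders. Every helper below is hypothetical, standing in for real trend-monitoring, image-search, and text-to-speech services.

```python
# Hypothetical sketch of an automated topic-to-video pipeline; every helper
# is a placeholder, not a real API.
import itertools

def trending_topics():
    """Placeholder: a real script would poll trend endpoints on Facebook/Twitter."""
    return ["topic-placeholder-1", "topic-placeholder-2"]

def fetch_images(topic, count=5):
    """Placeholder: image search for pictures loosely related to the topic."""
    return [f"{topic}-img-{i}.jpg" for i in range(count)]

def synthesize_voiceover(text):
    """Placeholder: a text-to-speech engine reading a templated script."""
    return f"voiceover({text}).mp3"

def render_video(images, audio, out_path):
    """Placeholder: slideshow assembly with a video-editing library."""
    print(f"rendered {out_path}: {len(images)} stills + {audio}")

CHANNELS = [f"channel-{i}" for i in range(19)]  # 19 channels, as in the article

def run_once():
    # Pair each trending topic with stock images and a synthetic narration,
    # then publish the same formulaic video across every channel.
    for topic, channel in itertools.product(trending_topics(), CHANNELS):
        images = fetch_images(topic)
        audio = synthesize_voiceover(f"Breaking news about {topic}.")
        render_video(images, audio, f"{channel}/{topic}.mp4")

if __name__ == "__main__":
    run_once()
```

Run continuously against live trend feeds, a loop this simple scales to the volumes Albright reports, because nothing in it requires human judgment.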


Facebook and Google also share part of the blame for these new and potentially dangerous uses of the internet and of the metadata available to companies like Cambridge Analytica, because they do not make their algorithms public, and thus the algorithms are not open to scrutiny from legislators or public watchdogs. There is a glaring need for “algorithmic accountability”: regular audits of these systems to ensure that they are not disseminating false information or hate speech, and are not encoding bias. After all, these systems are built by humans, and so they inherit human flaws; no one is completely free from bias, and everyone injects a piece of themselves and their worldview into their work.
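
What might such an audit look like in practice? One narrow slice, sketched below with invented data, is to check whether a moderation classifier flags content from different groups at markedly different rates. A large gap is a signal to investigate, not proof of bias.

```python
# Toy fairness audit: compare flag rates across groups in a decision log.
# The audit data here are invented for illustration.
from collections import defaultdict

audit_log = [
    # (group the author belongs to, was the post flagged as "misinformation"?)
    ("group_a", True), ("group_a", False), ("group_a", False), ("group_a", False),
    ("group_b", True), ("group_b", True), ("group_b", True), ("group_b", False),
]

def flag_rates(log):
    totals, flagged = defaultdict(int), defaultdict(int)
    for group, was_flagged in log:
        totals[group] += 1
        flagged[group] += was_flagged
    return {g: flagged[g] / totals[g] for g in totals}

rates = flag_rates(audit_log)
gap = max(rates.values()) - min(rates.values())
print(rates)                      # {'group_a': 0.25, 'group_b': 0.75}
print(f"parity gap: {gap:.2f}")   # 0.50: flag rates diverge sharply by group
```

Regular checks of this kind, run by an independent auditor, are one concrete form “algorithmic accountability” could take without requiring the full algorithm to be published.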


Damien Tambini, associate professor at the London School of Economics, focuses on media regulation. He says that there is no framework in place to deal with the impact that Google or Facebook may have on the democratic process; nothing holds them accountable or obliges them to disclose anything to the public. This stands in stark contrast to the restrictions placed on large media companies, which are subject to competition laws.


John Naughton, professor of the Public Understanding of Technology at the Open University, says “Politicians don’t think long term. And corporations don’t think long term because they’re focused on the next quarterly results and that’s what makes Google and Facebook interesting and different. They are absolutely thinking long term. They have the resources, the money, and the ambition to do whatever they want… They want to digitise every book in the world: they do it. They want to build a self-driving car: they do it. The fact that people are reading about these fake news stories and realising that this could have an effect on politics and elections, it’s like, ‘Which planet have you been living on?’ For Christ’s sake, this is obvious.”


Photo: “Buten Hacker” (2017), by Buten, via Wikimedia. Licensed under CC BY-SA 4.0.


Disclaimer: Any views or opinions expressed in articles are solely those of the authors and do not necessarily represent the views of the NATO Association of Canada.

Alexander Sawicki
Alexander Sawicki works for the NATO Association of Canada as the Program Editor of the Procurement section. He graduated with an Honours Bachelor’s Degree in History from Ryerson University in 2016, and throughout his university career has been heavily involved in campus groups concerned with international affairs. Alexander also had the opportunity to work for the Canadian Border Services Agency (CBSA) as a Student Border Services Officer, and gained a wealth of firsthand experience defending the safety and security of Canadians at the first point of entry in the travel and trade continuum. Whenever he gets a chance, he likes to unwind by curling up with a good book and some herbal tea. In the future, Alexander plans on pursuing a Master’s Degree in History.
http://natoassociation.ca/about-us/alexander-sawicki/