Centre For Disinformation Studies | Cyber Security and Emerging Threats | Tiffany Kwok

MADCOMs: Human or Machine?

The world of artificial intelligence is undeniably becoming more responsive, more personalized, and more advanced in its machine learning. Machine-driven communications tools, or MADCOMs, take the concept of personalized artificial intelligence to another level, infiltrating the everyday social interactions of ordinary citizens. Although research on MADCOMs is still in its infancy, a report published by the Atlantic Council, referenced frequently in this article, provides important insights into this new technology. MADCOMs are a relatively new application of AI in which systems and communications tools for computational propaganda are used to “influence and persuade people” in a personalized manner, based on an individual’s own personality and background.

In simpler terms, the data that giant companies have long collected with ease is now being mobilized to infer “personality, political preferences, religious affiliation, demographic data, and interests.” By collecting and inferring this basic information about millions of social media users, marketing companies can tailor advertisements to specific individuals, while the constant improvement of chatbots is creating an increasingly volatile and unpredictable environment for web users everywhere.
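As a rough illustration of the simplest form such inference might take, consider tallying the broad categories of pages a user has liked. The page names, categories, and mapping below are invented for illustration; real profiling systems are far more sophisticated, but the underlying idea of turning innocuous signals into a profile is the same.

```python
from collections import Counter

# Hypothetical mapping from liked pages to broad interest categories.
# Both the page names and the categories here are invented examples.
PAGE_CATEGORIES = {
    "City Cycling Club": "fitness",
    "Trail Runners": "fitness",
    "Budget Travel Tips": "travel",
    "Campaign for Candidate X": "politics",
}

def infer_interests(liked_pages, top_n=2):
    """Count the categories of a user's liked pages and return the most common."""
    counts = Counter(
        PAGE_CATEGORIES[page] for page in liked_pages if page in PAGE_CATEGORIES
    )
    return [category for category, _ in counts.most_common(top_n)]

likes = ["City Cycling Club", "Trail Runners", "Campaign for Candidate X"]
print(infer_interests(likes))  # ['fitness', 'politics']
```

Even this toy example shows how a handful of likes can be converted into labels such as “fitness enthusiast” or “politically engaged,” which is precisely the kind of profile a MADCOM could use to tailor its persuasion.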

“You won’t be able to tell if it is machine or human.”

This phrase is often used to describe not only the advanced nature of MADCOM technology but also the reality of a quickly approaching future. Like many other AI technologies, a MADCOM is a learning machine: it grows smarter with frequent use. Once a MADCOM can fully comprehend the data it has collected and use it to inform its chatbot replies, it will be able to generate content of its own, from responses to news articles to messages in personal forums. This creates ample opportunity for disinformation to run rampant, and leaves technology users susceptible to having their opinions and behaviour shaped under the influence of MADCOMs.


While the wave of artificial intelligence is proving hard to keep up with, researchers already predict that this rapidly changing technology will be capable of handling real-world context. Whereas most current technology relies on natural language processing (NLP), the next wave will be capable of natural language understanding (NLU). This means artificial intelligence will be able to adapt to context, equipping the technology with increasingly humanlike abilities and tendencies.

Further, the capabilities of machine learning carry heavy implications: responses no longer need to be programmed by hand. Instead, machines, like humans, can learn through continual trial and use.


Bots are already being used by a wide variety of actors, including but not limited to “NGOs, nations, corporations, politicians, hackers, and terrorist organizations to influence conversations online.” These include propaganda bots, used to spread political messages; follower bots, which generate likes and followers; and roadblock bots, which divert conversations and are often insidious. New bots performing ever more varied functions are constantly being developed, with companies and governmental bodies engaged in a quiet competition to build increasingly advanced ones.

One chatbot active today is “Xiaoice”. ‘She’ has garnered many users and received an outpouring of positive feedback, with reviews calling her ‘an available friend’. While she currently requires a team of engineers to run, and is deployed only in corporate and state settings, her engineering team sees room for further advancement in the near future.

As MADCOMs evolve, the question becomes whether journalists, publications, and news forums will be able to compete with machines capable of interpreting and reacting to news faster than is humanly possible. The prospect of human-generated speech and online communication being overwhelmed is becoming a foreseeable reality.


According to 2018 reports on online disinformation, the Canadian government has voiced concern over Russian and Saudi Arabian attempts to exacerbate “separatist sentiments in Quebec”. This attempt to weaken Canadian democracy is one of many disinformation attacks that Canada is slowly gearing itself up to defend against, with the upcoming federal election in mind. Concerns about Russian troll farms, “messaging emanating out of Russia”, and the broader threat of MADCOMs have led to calls for increased awareness across Canadian society.


As always, the defence against MADCOMs and disinformation is not a battle to be fought by a single actor alone. To combat MADCOM attacks and related disinformation attempts, governments, the technology sector, academia, and all citizens should be proactive in detecting and preventing the spread of false information. Governments should look more closely at tracking cybersecurity threats, applying diplomatic pressure and sanctions to malign actors, and equipping citizens to be smarter consumers of information. Monitoring the technology sector, keeping up with the latest advances in machine learning, and producing innovative technologies to counter these attacks are equally crucial.

For academia, it is more important than ever to research ahead of the curve and to produce content that is accessible not only to fellow academics but also to the layperson, in order to ensure the equal education of all. Research centres like the University of Toronto’s Citizen Lab and the Max Bell School of Public Policy at McGill have already made important strides in this regard and can serve as a template for greater public engagement.

Finally, for the average citizen, staying aware of the latest technological advancements such as MADCOMs will raise the level of awareness across society and reduce the opportunities for disinformation attacks. Practically speaking, being a smart and discerning web user, conscious of what information you release to your social media accounts, is itself a step in the right direction. Being proactive and keeping an eye on what personal information is already out on the web will help you preserve control over your own privacy and security.

Featured Image: “Chatbot, Chat, Application”, via PublicDomainPictures.net. Licensed under CC0 1.0.

Disclaimer: Any views or opinions expressed in articles are solely those of the authors and do not necessarily represent the views of the NATO Association of Canada.

Tiffany Kwok
Tiffany Kwok is an incoming third-year student at the University of Toronto pursuing a Double Major in Political Science and Urban Studies, with a Minor in French language. She is passionate about human rights, refugee movements, and women in politics. In 2018, she spent a month at the University of Oxford completing a course on Human Rights and International Relations. Tiffany finds passion in starting and continuing conversations on topics ranging from human rights protections to multiculturalism. In 2019, she attended the New York United Nations Youth Assembly, was chosen for the UofT Women in House program, represented the riding of Spadina–Fort York at Daughters of the Vote in Parliament, and participated in the Prime Minister’s Canada Youth Summit in early May. She is also the incoming President of the UofT Chapter of RefugeAid, an NGO committed to standing up for refugees arriving in Canada and raising money and awareness on their behalf. Post-internship, Tiffany is striving towards a career at the United Nations and within Canadian politics.