Increasingly, NATO countries have begun to describe digital media disinformation not merely as a communications problem between formal and informal state actors, but as a risk to national security. This shift is reflected in the Cyber Threat Assessment for Canada 2025, which notes that foreign state actors are using artificial intelligence (AI) to manipulate information and thereby undermine democratic trust. Although the perception of digital disinformation as a security threat is evolving, policy responses continue to prioritize countering hostile state actors and removing harmful content rather than addressing how corporate structures enable these campaigns to operate effectively.
This article maintains that it is possible to regulate these corporate structures; however, regulation should focus on corporate risk and liability rather than on speech or algorithms themselves. To advance this approach, the article proposes a corporate disclosure-based regulatory model, the Algorithmic Profit Disclosure Regulation (APDR), grounded in Canadian corporate and securities law. This model would require corporations to disclose when algorithmically amplified content generates revenue that creates foreseeable risk. Notably, this approach shifts responsibility away from foreign actors alone and toward corporate governance and profit-driven decision-making as a root source of disinformation.
In this context, the primary policy concern among democratic governments is not that technology companies simply provide a hosting platform for the circulation of disinformation. Rather, the underlying concern is a business model that incentivizes engagement through outrage, conflict, and political polarization. This model relies heavily on emotionally charged content because heightened emotional responses keep users on platforms longer and increase their exposure to targeted advertisements. The advertising revenue generated by these engagement patterns provides technology companies with the majority of their profits. Foreign actors exploit this design in their campaigns by targeting algorithms that reward high engagement. As a result, foreign interference campaigns can succeed because of the engagement the algorithms produce, not because the content of the campaign is particularly persuasive. The security risk associated with foreign interference is therefore not incidental; it is the direct result of business decisions by private companies that embed profit incentives into platform design.
This dynamic is concerning for NATO, as hostile state and non-state actors conduct information operations that exploit the open, interconnected information environments of member countries. According to the Communications Security Establishment (CSE), Russia, China, and Iran use AI-generated content and social botnets to flood social media during elections. These techniques are inexpensive, scalable, and difficult to trace. For citizens, they create confusion, suspicion, and fatigue with the political system. For NATO countries, these effects weaken democracy and open a window of opportunity in which adversaries can operate.
Many governments around the world have sought to address disinformation by regulating online content and digital platforms, yet most of these attempts have faced significant limitations. In Canada, hearings held by the House of Commons have shown that major social media platforms respond aggressively to any form of content regulation through lobbying, legal threats, and, in some cases, withholding services from media organizations. This pattern reflects a familiar form of corporate behaviour: when a proposed regulation threatens a company’s primary source of revenue, the company treats it as a threat to its very survival. As a result, regulatory approaches that directly target content or algorithms often provoke resistance rather than compliance.
Collectively, these dynamics raise a central question: how can governments regulate corporate reliance on engagement algorithms when platforms depend on those algorithms for profitability? Despite this dependence, meaningful regulatory intervention remains possible. It cannot, however, take the form of rules that directly govern algorithmic systems; such direct intervention is technically and legally complex, underscoring the limits of traditional regulatory approaches. Instead, governance tools that operate through corporate accountability, such as disclosure and fiduciary duties, offer a more effective course. Under section 122(1)(b) of the Canada Business Corporations Act and BCE Inc. v. 1976 Debentureholders, directors may consider societal impacts that affect long-term corporate interests, making disclosure an effective tool.
Canada’s existing securities law framework offers a stronger foundation for addressing algorithm-driven disinformation that generates regulatory, reputational, and social harm. Under the continuous disclosure requirements of Canadian securities legislation, public companies must disclose material risks, meaning risks that a reasonable investor would consider likely to affect the company’s value or operations. Where algorithmically amplified disinformation generates revenue while creating regulatory, reputational, or social harm, that exposure may constitute a material risk because it could lead to legal review, public backlash, or future regulatory intervention. Building on this framework, the proposed Algorithmic Profit Disclosure Regulation would require large social media platforms operating in Canada to disclose their algorithmic profits: that is, to disclose publicly whether specific categories of algorithmically amplified content, such as outrage-driven material or clearly false political content, generate potentially material advertising revenue, and whether reliance on that revenue creates foreseeable corporate risk.
The strength of the proposed regulation lies in both what it regulates and what it does not. It does not limit free speech or require social media companies to change their algorithms. Instead, it treats disinformation-driven revenue as a material risk that must be disclosed and addressed by corporate leadership. The House of Commons report shows that social media companies amplify polarizing content to increase engagement and revenue, while the Communications Security Establishment finds that the same amplification practices are exploited by foreign governments to interfere in democratic processes.
The importance of this shift is that it reassigns responsibility. The approach does not treat disinformation solely as a foreign problem; it assigns responsibility to the corporation’s governance systems, specifically the senior officers who knowingly profit from the irresponsible amplification of content. This shift preserves citizens’ freedom of expression and increases transparency in how platforms generate revenue. Although the average user may not care how platforms earn their money, platforms must still disclose this information; because disclosure is directed toward potential investors, it carries greater significance for regulators and other oversight bodies than for users. NATO countries can thereby reduce democratic vulnerability without provoking platform retaliation. Whereas companies may respond to content laws by blocking news content, they could not safely exit a disclosure regime without undermining investor confidence and access to capital markets. The risks associated with disinformation would therefore be disclosed and accounted for through obligations to investors rather than through appeals to public concern.
The main adjustment is conceptual. Instead of asking how to manage algorithms, the question should be why corporate boards have allowed a profit model that creates a foreseeable security risk. Disinformation is not only an information problem; it is also a governance failure rooted in the profit incentives that shape corporate values. By targeting these incentives through algorithmic profit disclosure, Canada can offer NATO countries a working model that manages the risk rather than the code itself.
Any views or opinions expressed in articles are solely those of the authors and do not necessarily represent the views of the NATO Association of Canada.
Photo retrieved from Raise.




