Canada regards universal healthcare as a national value and a point of sovereign pride, yet the data that power it (the diagnoses, treatment notes, and intimate clinical conversations recorded by AI scribes) are currently processed through servers that fall under American law, with no binding Canadian standard for how those data are stored or sold. The EU solved this problem with the 2024 AI Act and the 2025 European Health Data Space, but Canada’s answer, fourteen months after Bill C-27 died, is a ministerial task force and a voluntary code of conduct.
Without coordinated action combining regulatory reform and strategic investment, aimed at reducing foreign dependency while building Canadian capability, Canadian medtech investment will leave, and the health AI ecosystem the country could otherwise own will be built elsewhere. As a result, Canada will not just continue to fall behind in the global technology race; it will quietly surrender the infrastructure of its own healthcare sovereignty.
Somewhere in a clinic in Ontario right now, a doctor is using an AI scribe to document a patient visit. The tool listens, transcribes, and summarizes, reducing administrative burden, improving efficiency, and delivering exactly the kind of clinical AI transformation the World Economic Forum’s 2025 ‘Future of AI-Enabled Health’ White Paper argued was essential for healthcare systems that want to remain competitive and functional. What the doctor may not know, however, is that the transcript of that conversation, everything from the patient’s diagnosis and medication history to their most intimate health disclosures, is flowing into a server Canada can only protect through a federal privacy statute written more than two decades ago, long before the rise of AI.
What makes this an even greater security risk for Canadians is that in 2018, the United States passed the CLOUD Act, which grants American law enforcement the power to compel US-based technology companies to produce any data they control, regardless of where that data is physically stored. In practice, this means that a server located in Toronto, but owned and operated by Microsoft or Amazon, is still subject to American law, not Canadian law. With Canada’s hospital data management dominated by three American companies, Epic, Oracle Health (formerly Cerner), and MEDITECH, and the cloud infrastructure holding that data primarily owned by Microsoft Azure, Amazon Web Services, and Google Cloud, this legal exposure applies to any Canadian patient whose records sit on one of these servers.
This dependency did not emerge overnight; it developed across two decades of procurement decisions made province to province and hospital to hospital, in the absence of any federal standard requiring Canadian ownership or control. US companies now provide services for over 60% of Canada’s cloud market. By the time anyone thought to ask who governed the infrastructure, the reliance had already become structurally irreversible.
Critics spent years pointing out the vagueness of Bill C-27 and its Artificial Intelligence and Data Act (AIDA), citing the bill’s narrow scope and lack of robust, inclusive stakeholder engagement. Yet it still deserves recognition as Canada’s first and only attempt at comprehensive federal AI legislation. What is often lost in the critique is how specifically and consequentially it would have changed the health AI landscape, had it passed.
Under AIDA, healthcare and emergency services were explicitly classified as high-impact AI systems, placing them in the same regulatory tier as law enforcement AI and biometric identification systems. Most critically, the Bill’s supply chain accountability framework would have distributed legal responsibility across the chain, from the developers of machine-learning models to those who implement or sell these systems to users. Foreign platforms managing Canadian health data would therefore have been legally required to demonstrate compliance with Canadian standards, not simply rely on American ones.
In contrast, the European Union’s AI Act adopted a tiered, risk-based approach, establishing four levels of risk for AI systems, with obligations applying to the first three: unacceptable risk, high risk, limited risk, and minimal or no risk. The Act entered into force in August 2024; by February 2025, prohibited AI practices had to be removed from the market, and penalties for non-compliance came into force by August 2025. Simultaneously, the EU launched the European Health Data Space, giving European citizens stronger rights to access, control, and share their personal electronic health data, including across borders.
Together, these two frameworks treat infrastructure and governance as a single coordinated act, not as sequential problems to be solved one at a time. The success is measurable: Europe’s digital health market is currently valued at $96.68 billion and is projected to reach $222.22 billion by 2030. This is growth driven by binding regulation, not despite it. Together they form a piece of sovereign infrastructure that Canada has no equivalent of, and currently no plan to build.
In December 2024, the federal government launched the Canadian Sovereign AI Compute Strategy to address the root of the issue: even if hospitals would like to secure their data using Canadian services, the country lacks adequate computing infrastructure to support this. The strategy proposed up to $705 million for a new AI supercomputing system through its AI Sovereign Compute Infrastructure Program, along with an additional $300 million to augment existing public compute infrastructure and address immediate needs. It also included a $30 million investment specifically for the Canadian-led and Canadian-controlled VITAL health data platform, to pilot a secure digital AI infrastructure that leverages Canadian health data. Although positive, this is only a model of what the broader ecosystem should look like, as VITAL currently covers only the research side of health data, not the clinical layer.
Canada is attempting to build the physical infrastructure of digital sovereignty while continuing to defer the legal and regulatory infrastructure that would give that physical investment its significance. A sovereign data centre could still run Epic, yet be governed by American terms of service and subject to the CLOUD Act. For this reason, Canada’s health data sovereignty currently rests on a contractual promise from an American corporation, not on Canadian law. The 2025 World Economic Forum white paper warned that healthcare systems failing to act on fragmented governance and regulatory inaction risk not reaching their full potential and falling behind the international standard. Canada is not just falling behind; it is funding the tools of its own dependency, and putting its citizens’ lives at risk.
The case for urgent AI regulation in Canada is typically framed as a patient safety argument; however, it is also, and perhaps more urgently, an economic one. According to McKinsey & Company analysis conducted in 2024, if AI is used at scale, the country could save a net $14 billion to $26 billion per year. For a country that spent 12.7% of its GDP, or $399 billion, on healthcare in 2025, that number represents not just the opportunity for improved efficiency, but the fiscal headroom to reinvest in the very system Canadians celebrate.
Image credit: “Servers illuminate a futuristic cityscape with a data center” (published 17 July 2025), by Markus Stickling via Unsplash. Licensed under the Unsplash License.
Disclaimer: Any views or opinions expressed in articles are solely those of the author and do not necessarily represent the views of the NATO Association of Canada.