Canadian AI Regulation Should Deliver on Freedom of Thought

Protecting freedom of thought requires more than pointers in a non-binding document.

July 13, 2023
Robots dance at the Mintasca intelligence technology company booth during the opening of the 2023 World Artificial Intelligence Conference in Shanghai on July 6, 2023. (Ying Tang/NurPhoto via REUTERS)

Debates about artificial intelligence (AI) tend to focus on equality, privacy and freedom of expression, which is understandable. But these systems impact many other rights as well — including our collective right to freedom of thought. That part of the conversation has been particularly thin. Canada’s proposed Artificial Intelligence and Data Act (AIDA) is a case in point.

The draft legislation, contained within Bill C-27, purports to uphold Canadian norms and values, in line with the principles of international human rights law. And indeed, the government is on the right track in flagging certain AI systems as potentially high-risk, a designation that would trigger increased obligations on the entities making and deploying such systems. A non-binding companion document to the bill presciently notes that “biometric systems used for identification and inference” (that is, systems making predictions about individuals’ psychology) could impact their mental health and autonomy, as could content recommendation systems that influence behaviour at scale. Such systems could certainly interfere with the inner space of one’s mind, something freedom of thought unconditionally protects. It is encouraging to see the government alert to some of the freedom-of-thought considerations that AI systems raise.

That said, it’s not enough. Protecting freedom of thought requires more than pointers in a non-binding document. As I explained with my colleague Florian Martin-Bariteau, AIDA remains too vague. The government or the Minister of Innovation, Science and Industry will decide key components of the legislative framework later, in subsequent rules (confusingly called “regulations” in Canadian legislative parlance), once the law has passed. This leaves Canadians in the dark about whether AIDA would adequately protect them against manipulative, distorting or invasive systems that may interfere with their mental integrity or their ability to think for themselves.

The federal government’s rationale for this approach has been unpersuasive. The companion document invokes precision, interoperability and future-proofing the law to justify postponing key definitions essential to making AIDA operative. These are valid and difficult challenges, but burying pivotal aspects of the legal framework in opaque and brittle subsequent rules won’t solve them. Although the draft legislation consists of only 40 provisions, it points to future rules more than 15 times. Among the rules still “to be announced”: what kind of explanations Canadians will get for high-impact systems (s. 11), who will qualify to audit AI systems (s. 15(2)) and what constitutes an acceptable justification for biased AI-generated outputs (s. 36(a)).

AIDA needs to set out clear expectations for people making and using AI systems. Doing so is vital to protecting all Canadians’ fundamental rights, including their freedom of thought. A good starting point would be to explicitly consider negative effects on human rights when determining whether an AI system qualifies as “high-impact,” the designation that triggers more stringent obligations on the companies and people making and using such systems. AIDA’s companion document does suggest adverse human rights impacts as a potentially relevant factor. That is an encouraging first step. Now policy makers need to deliver on their high-level commitment to human rights by protecting those rights in the law itself.

Canadian AI regulation is being developed against a backdrop of accelerating digital regulation worldwide. The European Union is forging ahead. In June 2023, the European Parliament agreed to ban AI systems that subliminally manipulate people’s behaviour or impair their ability to make informed decisions. Further, unlike AIDA, Europe’s proposed AI Act explicitly lists adverse impacts on fundamental rights as a factor for holding AI companies to higher standards. What’s more, the European Union is coordinating its interventions across different pieces of legislation: its Digital Services Act complements the AI Act with obligations specific to online platforms and search engines, requiring the biggest players to proactively map and mitigate impacts on fundamental rights and mental well-being. Canada’s efforts look tentative and vague by comparison. Absent a swift and decisive course correction, Canada will miss the mark.

AIDA has some laudable goals. But it falls well short of delivering a predictable legal environment. Without hard obligations to back up its aspirational goals, Canadians’ freedom of thought is at risk. Indeed, inadequate legislation is worse than no legislation: while many entities deploying AI systems may have the best of intentions, regulation must account for the less scrupulous players.

As drafted, this legislation may be so permissive as to entrench the status quo and, even worse, impede recourse for people harmed by AI systems.

Time is running out to get AI legislation right in Canada. AIDA must lay out actionable obligations to uphold human rights. Among other positive effects, such requirements would level the playing field for companies already trying to do the right thing.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Maroussia Lévesque is a CIGI senior fellow, a doctoral candidate at Harvard Law School, an affiliate at the Berkman Klein Center for Internet & Society, and a member of the Indigenous Protocol and Artificial Intelligence working group.