Canada’s Draft AI Legislation Needs Important Revisions

The proposed law provides little detail about what technologies it will govern, or how.

August 16, 2023
Photo illustration by Jonathan Raa/NurPhoto via REUTERS.

AI technology evolves so rapidly it requires an agile regulatory response. This may partly explain why Canada’s proposed Artificial Intelligence and Data Act (AIDA) appeared so suddenly in Part 3 of Bill C-27, the Digital Charter Implementation Act, a bill otherwise aimed at reforming Canada’s outdated private sector data protection law.

Clearly, governing artificial intelligence (AI) is important. With AIDA, however, Canadians are being asked to choose between a rushed and problematic law and nothing at all — at least in the near term. It’s not a fair choice, and there is a better way.

The proposed legislation was introduced without prior public consultation. It provides little detail about what AI technologies it will govern and how. Those details are left to regulations to be drafted later, once Bill C-27 is passed. AIDA would have greatly benefited from consultation, feedback and revision, and it has generated considerable criticism since its introduction. Bill C-27 is now before the Standing Committee on Industry and Technology of the House of Commons, which will consider it this fall.

The AIDA draft has received so much pushback that, in the spring, supporters rallied to publish a letter defending it, citing the urgent need for action as a reason to move forward with a less-than-perfect solution.

What follows are five critiques of AIDA. All can be addressed in a thorough revision.

Agility, Regulations and Standards

A core challenge is that AI continues to rapidly evolve. The emergence of tools such as ChatGPT has demonstrated that AI innovation can make leaps that change how it will be used, by whom and for what purposes. As a result, we need adaptable regulatory approaches.

The government describes its approach to AI regulation with AIDA as “agile” and one that will not “stifle responsible innovation,” because it leaves most of its normative content (including the definition of the “high-impact AI” it will govern) to future regulations. But leaving so much of the law to be articulated in regulations is not agile. Regulations often take longer than anticipated to develop (in this case, the government has set an ambitious two-year timeline), and some never materialize at all.

The “private right of action” (meant to allow individuals to sue for damages if harmed by a breach) in Canada’s anti-spam legislation is an example of a statutory provision that, 13 years after enactment, still lacks the regulations needed to give it effect.

Supposedly, AIDA’s regulations will be lent agility by incorporating AI standards issued by national or international standards-setting bodies. There may be trade-offs here between speed, democratic engagement and even sovereignty. This is particularly the case where standards stray from technical specifications to address privacy, ethics or human rights.

Other elements of agility are absent from the draft law. Agile regulation should be iterative and data-driven, with clear processes to measure impacts and recalibrate approaches. It is not obvious that AIDA will have these.

What Will AIDA Regulate?

AIDA is meant to regulate high-impact AI systems. But the definition of high-impact is left to future regulations. The companion document to AIDA, published nine months after Bill C-27 was introduced, provides some clues as to what the government contemplates. It suggests that some technologies may be excluded from the definition of high-impact AI if they are adequately addressed by other regimes.

While there is nothing wrong with a multi-regulator approach, surely it’s important that legislators and the public know and understand the approach before a vote on the bill. What types of AI will be excluded from the definition? Systems used in the financial sector? Self-driving vehicles? Medical devices? Is there a plan to update other statutes and to better empower other regulators to deal with AI? Nobody has yet said.

A Broader Concept of Harm

High-impact AI will likely be defined, at least in part, in terms of the harm it causes. In AIDA, harm is defined largely as quantifiable harm to individuals. That’s too narrow a definition.

It’s clear that AI can have impacts, such as environmental harm and systemic discrimination, that reach far beyond individuals to communities. Further, some harms will not be easily quantifiable. Consider, for example, AI used to manipulate a population and change the outcome of an election. The democratic harm is serious, but it is neither easily quantifiable nor individual.

AIDA rightly recognizes biased output as a form of harm and proposes to monitor and limit it. However, AIDA defines biased output as output that is discriminatory according to the terms of the Canadian Human Rights Act. The concept of bias in AI is broader than discrimination: by focusing only on bias that is discriminatory, rather than treating biased output as the overall issue and discrimination as one particular harm, AIDA muddies the waters.

Independent Governance and Oversight

Governance and oversight are key problems with this legislation, as has been noted elsewhere. AIDA is risk regulation: its goal is not to provide recourse for those harmed by AI systems. Rather, it’s intended to prevent harm by governing the design, development and commercialization of AI technologies in Canada.

Yet no independent regulator is to be designated or created to ensure that obligations are met. Instead, it will be the minister responsible for supporting the AI industry in Canada (the minister of innovation, science and economic development) who will be charged with determining whether a company has failed to meet its obligations — and what the consequences of such failure might be. The AI and data commissioner role created under AIDA is to be filled by a subordinate of the minister, and thus also lacks the appropriate independence.

An Overall Regulatory Vision

If there is an overall vision for AI governance, of which AIDA is only a part, it is not yet discernible. The companion document (a poor substitute for a white paper and consultation) suggests the missing piece is risk regulation, which AIDA provides. It further suggests that remedies for any AI harms are already available — through privacy or human rights commissions, through the Competition Bureau, or even through the courts.

This fails to acknowledge that these entities may lack the resources and expertise — not to mention updated legislation — to investigate and address AI-related harms. There are significant unresolved questions about how to assign liability for AI failures, and whether new laws are needed to address them. Issues around evidence, trade secrecy and the sheer cost for individuals of litigating complex AI technologies also go unaddressed. Although not all of these issues fall under federal jurisdiction, some do.

In short, Canada has yet to produce a blueprint for AI regulation that is clear about needs, gaps and objectives. One is urgently needed. The blueprint should identify which existing regulators will play roles and how they will be better enabled to do so. Government needs to engage the public on AI regulation — not just to benefit from diverse perspectives and to gain legitimacy for its plan, but to launch the conversation and help to build AI literacy. All of this work calls for a data-driven approach that can identify and assess regulatory gaps and inefficiencies.

Failing that, AIDA will at best be a hurried patch job — and one that risks checking off the “AI regulation” box when, clearly, so much more is needed. If the government has an overall vision for AI governance, it would be good to see it — and to engage the public on what will be a defining approach to our technological future.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Teresa Scassa is a CIGI senior fellow. She is also the Canada Research Chair in Information Law and Policy and a full professor at the University of Ottawa’s Law Faculty, where her groundbreaking research explores issues of data ownership and control.