Proposed Changes to Canada’s Bill C-27 Do Little to Mitigate AI Harms

The governance model outlined in this document isn’t significantly different from the bill’s original vision.

November 9, 2023

On September 28, 2023, Canada’s Standing Committee on Industry and Technology (INDU) passed a motion requesting Minister of Innovation, Science and Industry François-Philippe Champagne to release the proposed government amendments to Bill C-27, which the minister had mentioned in his testimony. In response, Minister Champagne sent correspondence to INDU with the list of proposed amendments to the Consumer Privacy Protection Act (CPPA) and the Artificial Intelligence and Data Act (AIDA).

These amendments represent an important step toward aligning Canada’s privacy and artificial intelligence (AI) legislation with the European Union’s AI Act and the Organisation for Economic Co-operation and Development (OECD) AI framework. Specifically, the amendments recognize privacy as a fundamental human right, strengthen children’s data protection rights, propose new fining powers for the Office of the Privacy Commissioner of Canada (OPC), introduce new transparency requirements for businesses across the AI value chain and clarify the concept of the high-impact AI system.

Yet the governance model outlined in this document isn’t significantly different from the bill’s original vision, essentially that AI vendors should be allowed to police themselves. The amendments provide no mechanisms of redress for citizens whose rights and lives will be affected by AI technologies. The new office of the AI and data commissioner is limited to functioning as a liaison between Innovation, Science and Economic Development Canada (ISED) and the AI industry. Given that the industry is extremely concentrated, as well as inscrutable, and predictably prioritizes profits over the public interest, the amendments fail to address the bill’s biggest issue — ensuring Canadians are safe and have a voice in the new digital landscape.

ISED has drawn criticism for the lack of proper public consultation on Bill C-27, and for its numerous blind spots. AIDA, specifically, has been described by some as a way to keep both legislative and executive powers within ISED. Another essential issue is that Bill C-27 fails to provide remedies for AI harms.

In this context, the amendments published by ISED last week come as a pleasant surprise. They are admittedly derived from feedback provided by the privacy commissioner of Canada, Philippe Dufresne, and by several members of Parliament during the September INDU hearings. It makes one wonder how much stronger the bill would have been if it had not been drafted behind closed doors.

The document submitted by Minister Champagne to INDU contains two parts: the amendments to the CPPA and the amendments to AIDA.

The CPPA section begins by recognizing a fundamental right to privacy for Canadians. Yet apart from the promise to include the reference to privacy in the preamble and the body of the law, it’s not clear what consequences this change might have, if any. For Canadians’ right to privacy to be respected, the AI industry might need to rethink the entire business of algorithm training, which often proceeds without the consent of the people whose data ends up in the training sets. Another issue is the surveillance of employees in the workplace. In the United Kingdom, several academic researchers and employment lawyers have been invited to help the government update national labour laws in light of the increasing use of AI. Clearly, Bill C-27 should address this issue.

The CPPA section of the amendments further offers extended privacy and data protections for children. Namely, Bill C-27 recognizes children’s information as sensitive and requires technology vendors to exercise caution when working with information provided by or about minors. A child’s data cannot be collected without the consent of their guardian, cannot be retained indefinitely and must be removed at the request of the guardian. While mandating safety protections for children is a good and timely measure, the bill only covers personal information. This means children’s data collected and processed in aggregated, anonymized or synthetic formats will not be granted similar protections. This provision will likely become a loophole for AI vendors. The bill is notably silent on the matter of digital advertisements directed at minors.

The third and last amendment to the CPPA gives the OPC the right to levy fines on non-compliant organizations, without the need to engage the Personal Information and Data Protection Tribunal or court. If implemented, this provision could become a game changer for Canada. New enforcement powers would ensure better compliance with OPC requirements, and the fines could help support further investigations into the tech vendors’ wrongdoings. However, as University of Ottawa law professor and CIGI Senior Fellow Teresa Scassa correctly points out, we need to take a closer look at the operational resources of the OPC and other organizations that will be tasked with the enforcement of the new regulation.

The second section of the amendments, on AIDA, appears to be more elaborate than the first. The first and key part of the amendments is a classification of high-impact AI systems. ISED lists seven areas where the use of algorithmic systems will be classified as “high-impact”: matters related to hiring, recruitment and promotion; decisions related to the provision and cost of services to individuals; processing of biometric information and of human emotions and behaviour; content moderation; delivery of health care and emergency services; use of AI by courts and administrative bodies; and use of AI in law enforcement. It is a positive sign that the bill covers most of the areas where AI harms have been documented, from algorithmic bias in hiring to emotion recognition software to automated health care.

The second provision aligns AIDA with the European Union’s AI Act and the OECD AI framework. Specifically, the bill recognizes the distribution of responsibilities across the AI supply chain, from the creators of machine-learning models for a high-impact system, to those who implement, sell, modify or make these systems available to users. All these actors will need to prepare accountability frameworks and implement guidelines for responding to serious incidents; these documents will be provided to the AI and data commissioner on request.

The document further calls for more accountability across the AI value chain. Specifically, businesses developing and using AI will be required to perform impact assessments, conduct testing, mitigate the risks of biased outputs and provide human oversight over the high-impact algorithms. The AI developers and providers of services are expected to prepare ample documentation to prove to the AI and data commissioner that they have made some effort to ensure the safety of their high-impact systems. Notably, the requirement for AI vendors to monitor all downstream uses of their technologies comes directly from the EU AI Act, but it’s not clear how this could be done in practice.

The amendments also mention “general-purpose AI systems.” All technologies that were not classified by the government as “high impact” fall within this category. The same compliance mechanisms are envisioned here: businesses are expected to manage their data responsibly, conduct impact assessments, report serious incidents to the AI and data commissioner, and watermark the content created by automated technologies so it is not mistaken for human-generated content.

Regrettably, the amendments provide little information about the role of the AI and data commissioner. Based on the original text of Bill C-27 and a few mentions in the text of the amendments, the commissioner’s functions appear rather limited, ranging from monitoring the compliance of AI businesses to helping companies improve their high-impact systems. Unlike all other commissioners within the Government of Canada, this role comes without a mandate, a budget or sufficient operational resources. In short, the commissioner exists to help AI businesses thrive in the new regulatory environment.

Surprisingly, the amendments assign no power to Canadians, the future subjects of these high-impact systems. Nothing is said about the systems that will need to be taken down. To take a real-life example, a growing number of schools in Canada and abroad have implemented controversial facial recognition technologies to monitor the behaviour and emotional well-being of their students. Companies compliant with the new legislation will have proper documentation to prove to the commissioner that their data and algorithms have been checked for security risks and bias. But what if students and their families still feel uncomfortable around these systems? What if other risks are identified? The law provides neither an opt-out option for citizens nor procedures to mitigate harm.

The bigger question is whether we, as a society, welcome the proliferation of algorithmic systems that do not require consent from individuals whose information is being used to create them and whose behaviour they will subsequently monitor. Until recently, AI has existed in a moral and legal vacuum, and this may be the right moment to ensure our core values are protected. Consent is an important issue here.

Another issue is that privacy is just one right affected by AI. We need to be concerned about losing access to democratic governance and oversight. Implementing AI in the public sector often leads to the black-boxing of vital government services; implementing AI in social services, for example, often works against citizens, either by mistake or by design.

It is not too late to get things right. Clearly, ISED is not well-placed to act as the sole regulator of AI in Canada. There are many ways interdepartmental collaboration could help strengthen Bill C-27: the OPC should be further consulted on AIDA; the Canadian Human Rights Commission should take part in defining the core principles of the law; and the Minister of Labour should weigh in on the labour protections in AIDA. The proposed role of the AI and data commissioner should be strengthened: it should be separate from ISED and equipped with the staff to help shape further regulations as we learn more about emerging AI tools.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Anna Artyushina is a post-doctoral researcher in digital governance in the School of Urban and Regional Planning at Toronto Metropolitan University.