Voluntary AI Guardrails Risk Placing Corporate Interests Over Public Good

A rushed consultation on an already drafted code is not a recipe for success.

September 13, 2023

On August 16, 2023, Canada’s federal department of Innovation, Science and Economic Development (ISED) announced it would create a voluntary code of practice (or “guardrails,” in industry terms) to govern generative artificial intelligence (AI). Generative AI refers to systems that draw from large data sets of text, images or audio to produce new content. The release of OpenAI’s ChatGPT in November 2022, for example, sparked headlines expressing excitement and concern as people used the technology to churn out instant books, write submissions to courts and cheat on exams. Visual artists, meanwhile, criticized OpenAI’s AI-powered image generator DALL-E 2, which people used to mimic artwork in the style of artists who had neither granted permission nor stood to benefit from the resulting AI-generated images.

In developing a voluntary code, Canada is following the lead of the US government, which in July 2023 announced a voluntary agreement on generative AI among seven key companies — Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI — with the goal of moving “toward safe, secure, and transparent development of AI technology.” ISED’s stated objective is similar: establish a “sufficiently robust” code for industry to ensure those designing and using generative AI can “avoid harmful impacts [and] build trust in their systems.” Both the Canadian and American governments have said that the voluntary agreements will be followed by the introduction of binding requirements. In Canada’s case, that would mean a complement to the Artificial Intelligence and Data Act (AIDA), which is currently before the House of Commons as part of Bill C-27. ISED’s just-released code of practice, itself crafted with input from “a broad cross-section of stakeholders,” was to serve as the basis for a brief consultation during the summer of 2023 through virtual and hybrid round tables, and then later be reviewed by the federal government’s Advisory Council on AI.

This rushed consultation on an already drafted code is not a recipe for success. And it is, unfortunately, part of a pattern. I’ve previously written about the Canadian government’s poor track record of unduly narrow public consultations on digital policy issues. AI experts have criticized the government’s process of establishing AIDA as well as the contents of the act itself.

The Canadian government is gambling that a voluntary code of practice with AI companies is the most effective way to prepare for binding requirements. But before we commit to voluntary regulation, it’s important to understand some of its underlying assumptions and challenges, especially when applied to a controversial and rapidly evolving technology.

As a scholar of technology governance, I have studied how states and technology companies use voluntary agreements, which tend to be non-legally binding industry codes of conduct or best practices. In my book Chokepoints: Global Private Regulation on the Internet, I examine eight voluntary enforcement agreements struck between governments and large technology firms, including Google, PayPal and eBay, to target the illicit online trade in counterfeit goods. I found that these voluntary agreements tend to serve governments and industry well, and the public interest less well. That’s because they can be undertaken rapidly, quietly, and with little oversight or accountability.

To be sure, voluntary regulatory agreements with industry are not novel. They are often a key element of corporate social responsibility programs, in which companies may adopt non-legally binding measures to reduce, for instance, environmental waste. For more than a decade, states have encouraged — and, in the cases I’ve studied, sometimes coerced — technology companies to adopt non-binding regulatory agreements. The US government, for example, has called on technology companies to counter botnets: networks of malware-infected computers that perform illicit tasks, such as distributing spam, without their owners’ knowledge. The UK government, meanwhile, has urged payment providers to withdraw their payment services from websites selling child sexual abuse content.

Voluntary agreements may offer certain advantages, especially when regulating fast-evolving technology such as AI, as signatories can rapidly amend these agreements in response to changes in technology or circumstances, a stark contrast to slow-moving legislation. As other scholars have noted, states may perceive — rightly or wrongly — corporate actors as more technologically savvy, cost-effective and efficient regulators than government agencies.

Efficacy and accuracy, however, should be proven, not assumed. States may also strategically use private actors to reach beyond their traditional jurisdictional boundaries or to regulate at a scale unfeasible for government agencies. For states looking to regulate generative AI systems that are created, trained on large data sets and operated for a worldwide audience, rules that extend transnationally or are consistent with those of other countries are particularly attractive.

For industry, voluntary agreements may be a viable alternative to states threatening legislation. Importantly, being at the table to help draft non-legally binding measures can offer companies an opportunity to influence regulatory strategies or outcomes.

The problem is that this process tends to advantage corporate interests over the public interest. This is especially the case when industry plays a primary role, not only in defining and prioritizing problems but also in setting out solutions.

The Canadian code, for example, calls on industry to “implement measures” to “mitigate risk of biased output.” This sounds reasonable until one recalls that Google fired two key members of its AI ethics team, Timnit Gebru and Margaret Mitchell, in 2020 and 2021 respectively, after they raised concerns about potential harms from Google’s AI tools. (In Gebru’s case this pertained to the risk of bias to marginalized communities from large language models.) Google maintains that Gebru was not fired, and apologized in February 2021 for the way her “exit,” as the company put it, was handled. Google let Mitchell go for allegedly violating its code of conduct and security policies.

The larger lesson from the Google case is the impossibility of industry policing itself, whether via internal ethics teams or non-legally binding codes of practice. Rhetoric aside, corporations will not act against their own commercial interests to address problems relating to generative AI. Once the ground rules are set out as agreed-on facts, as in ISED’s code of practice, it will be difficult for anyone else to introduce alternative perspectives or concerns. That is by design.

ISED’s and the American voluntary codes, for example, portray the risk of biased data as a technical problem that can be largely addressed with better data sets or monitoring practices. Technology scholars, by contrast, point out that bias such as anti-Black racism is entrenched within the design and operation of technologies that reflect broader socio-economic practices, thereby making mere technical fixes impossible. Also absent from the American code and the draft Canadian code are commitments to address generative AI’s massive, unsustainable energy and water consumption. This is a shocking omission in the current climate crisis.

The AI industry is already playing an outsized role in efforts to regulate generative AI because we tend to accord outsized social, economic and political value to those who create tools to capture and interpret data. This means society and policy makers typically valorize engineers and software designers as possessing the most valuable knowledge, as I explain in my new book with co-author Blayne Haggart, The New Knowledge: Information, Data and the Remaking of Global Power. As a result, AI and other issues such as smart cities or platform governance too often become narrowly cast as “technical” problems about which engineers and software designers possess the most authoritative knowledge. What’s lost are the broader social, economic, political and environmental implications.

The Government of Canada needs to get serious about regulating generative AI, and AI more broadly, and it must do so in the public interest. It should cancel plans for a voluntary code of practice and instead work with critics to improve the troubled AIDA bill. Teresa Scassa, a University of Ottawa law professor and a member of ISED’s Advisory Council on AI, has, for example, called for AIDA to be thoroughly revised.

As things stand, ISED is attempting the impossible: working to commercialize AI while also working to regulate it. As public technology advocate Bianca Wylie notes, in both efforts “it’s all the same team here. A closed circuit.” Break this circuit: regulate in the public interest.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Natasha Tusikov is an associate professor of criminology in the Department of Social Science at York University and a visiting fellow with the School of Regulation and Global Governance (RegNet) at the Australian National University.