AIDA’s “Consultation Theatre” Highlights Flaws in a So-called Agile Approach to AI Governance

Although legislation is urgent, Canada should not rush to pass the ill-conceived AIDA.

November 6, 2023

The following article is an edited version of the author’s submission to Innovation, Science and Economic Development Canada’s consultation on the development of a Canadian code of practice for generative artificial intelligence (AI) systems.


The high-profile release of ChatGPT, the first major generative AI service to go public, has prompted urgent calls for governments to step in with regulation. But when some of the loudest demands for government intervention are coming from Elon Musk, Mark Zuckerberg and other AI tech giants better known for moving fast and breaking things, we should proceed carefully, even if in haste. Canada is among the countries heeding these calls for regulation. But it is doing so undemocratically, at least with regard to how the federal department of Innovation, Science and Economic Development Canada (ISED) is promoting the Artificial Intelligence and Data Act (AIDA), currently before Parliament.

Responding to industry demands for more immediate action on generative AI, the Government of Canada recently launched a Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems for organizations to adopt as an interim measure until the AIDA comes into force. What’s missing is a serious government-led public process of deliberation that can address essential questions. For example, what actually is AI? What issues does it raise? How should it be governed?

The upshot is that, unless Canada improves the governance process significantly, the country risks creating a dangerous AI regulatory regime — one that neither reflects Canadian values nor avoids international reputational harm, given that other states are handling these questions significantly better. Indeed, the democratic deliberative process around the AIDA has already been severely compromised.

The rapid, continuing digitalization of our economy, society, culture and politics is having significant consequences across all dimensions of contemporary life. Legislation that directly addresses the fundamental data and algorithmic aspects of societal digitalization, and that reflects the interests of those most affected, is long overdue. So, the introduction of the AIDA last year represented an important opportunity for legislation to catch up with a rapidly changing world. However, the legislative substance — especially the process of its development — has been flawed.

We expect governments to develop legislative or regulatory regimes through a process that is open, transparent, accountable, well-informed, fair, respectful of rights, and inclusive. When this process is well-executed, all participants can understand the rationale for the decisions ultimately made and reasonably feel that their views have been adequately heard.

Why Are Democratic Processes So Important around AI Law?

Compliance with these widely shared governance expectations is especially important in the case of the AIDA, for several reasons:

  • Hype, misinformation and mystification, driven most vigorously by those with a stake in promoting the AI field, have long characterized public discussion of these developments and their policy implications. There is much confusion among citizens and lawmakers alike about the nature of AI, what opportunities and challenges it presents, and what the policy options are. Legislating AI therefore needs to involve more than the usual degree of in-depth research and public education, so that decisions can be well-informed.
  • Because the technical demands for designing, training and widely deploying the current wave of AI systems are so great, the leading companies are tech giants that command vast digital infrastructures and have market valuations in the trillions. They wield enormous power, rivalling that of democratically elected governments. Recently, Google/Alphabet and Meta offered an illustration of such power by threatening not to comply with the pending Online News Act (Bill C-18), forcing the federal government to back down. While Canadians appreciate the valuable services these tech giants provide, many are already wary of their motives and disproportionate influence. Canadians rightfully expect their representatives to demonstrably protect their interests over those of the giants dominating the expanding AI frontier.
  • The promises and perils of AI cover a broad spectrum of social, economic, cultural and political life. They implicate rights, livelihoods and the flourishing of an exceptionally wide range of stakeholders, who should have a say. An inclusive, deliberative process is therefore vital. This calls for the active participation of a comparably broad range of federal government agencies, as well as civil society organizations.

The official documents intended to guide AI policy making nationally and globally espouse these familiar norms of good democratic governance. Consider, as just one recent example, the references to AI governance in the “G7 Hiroshima Leaders’ Communiqué” from May 2023. In it, the national leaders of the Group of Seven (G7) commit to “advancing multi-stakeholder approaches to the development of standards for AI, respectful of legally binding frameworks, and recognize the importance of procedures that advance transparency, openness, fair processes, impartiality, privacy and inclusiveness to promote responsible AI [emphasis added].”

These ideals reflect the Canadian values that the AIDA is claimed to advance and protect. Taken seriously, they provide good general guidance, as well as a basis for assessing practice. But if not implemented, such statements become misleading platitudes, inviting cynicism and further eroding confidence in democratic governance, already under threat.

The AIDA is unusually vague, in that key terms, guiding principles and enforcement mechanisms have been left undefined, deferred until after the bill’s passage. This departure from democratic norms puts parliamentarians and Canadians in the untenable position of being asked to pass a law without being able to assess its substance and likely consequences.

Prime Minister Justin Trudeau’s 2021 mandate letter to the minister of innovation, science and industry instructs him to “advance the Pan-Canadian Artificial Intelligence Strategy [PCAIS].” Launched in 2017, the primary objective of the PCAIS is “to drive the adoption of artificial intelligence across Canada’s economy and society.” With more than $1 billion in federal government funding for AI awarded by 2021, much is at stake. However, the PCAIS is oddly silent on protecting Canadians from the potential harms AI can bring.

The AIDA belatedly addresses these concerns but leaves key details to yet-to-be-specified regulations. Even the anticipated AI and Data Commissioner, whose main role is to enforce the regulations, is not to be an independent regulator, but would report directly to the minister. This is an obvious departure from the well-established principle of regulatory independence. As noted in The Governance of Regulators, a publication in the 2014 Organisation for Economic Co-operation and Development (OECD) Best Practice Principles for Regulatory Policy series, “The assignment to a regulator of both industry development and regulatory functions…can reduce the regulator's effectiveness in one or both functions and can also fail to engender public confidence. Such conflicting functions can impair a regulator's clear role and they do not contribute to effective performance. For this reason, this combination should be avoided.”

In short, the AIDA asks Canadians to give the minister of innovation, science and industry virtually a blank cheque, with no apparent mitigation of the potential conflicts of interest arising from ISED’s prime AI mandate, which is to advance Canada’s industry. This requires an extraordinary degree of trust in the minister, who will need, more than usual, to earn that trust by demonstrating exemplary governance practice throughout the legislative process.

ISED, among many others, argues reasonably that the complexity of AI techniques and rapid pace of their deployment mean that policy development and implementation need to be more “agile” than usual — to move more quickly and flexibly than is typical. This is understandable, given that even under normal circumstances and with the best of intentions, regulatory regimes can be overly rigid and dysfunctional. The digital turn can exacerbate these problems.

But unfortunately, at least in the case of AI, ISED appears to treat the need for agility as justification for short-circuiting vital features of democratic process: consulting selectively with stakeholders in closed sessions, allowing minimal open and broader public involvement, and consolidating control almost exclusively within the minister’s office.

An agile approach is worth considering, but one well-aligned with the norms of democratic governance would look quite different — in particular, open, inclusive, well-informed public education and engagement should be accelerated rather than impeded or postponed. While varied in their analyses and prescriptions, many relevant materials on AI governance are already available for the government to draw on to help people understand the state of AI development, the wide range of issues at stake and possible alternative regulatory approaches.

It is also important that we not accept at face value the apparent inevitability of rapid AI development and the more extreme calls for its immediate regulation. The scale and pace of investments in the technology are largely driven by intense commercial and geopolitical competition, with key decisions made by remarkably few individuals. When those sounding the alarm and uncharacteristically calling so urgently for regulation are at the same time racing for supremacy in the emerging AI economy, it suggests their goal is to establish weak rules that serve their ambitions before others can join the debate.

In short, a rushed approach to regulation driven by those already in the field risks prematurely installing a regime mainly favourable to the AI industry, putting those affected at a disadvantage. Good governance may operate more slowly than the tech industry would like but ultimately better serves the public interest.

With these ideals in mind, let’s consider more deeply the deliberative processes around the AIDA, taking the recent AI code of practice consultation exercise as an illustrative case.

Does the AIDA’s Development Comply with Norms of Democratic Governance?

When ISED introduced the AIDA in June 2022, it did so without prior public notice or consultation. The first formal opening for Canadians to present their views came 14 months later, with the department’s consultation on the potential elements of a code of practice for generative AI systems. The government envisioned the code being adopted voluntarily by Canadian firms ahead of the AIDA’s coming into force.

As described in the consultation discussion document or “scene setter,” “Canadian Guardrails for Generative AI — Code of Practice,” the code was intended to serve several purposes, including that it be “sufficiently robust to ensure that developers, deployers, and operators of generative AI systems are able to avoid harmful impacts, build trust in their systems, and transition smoothly to compliance with Canada’s forthcoming regulatory regime. The code will also serve to reinforce Canada’s contributions to active international deliberations on proposals to address the risks of generative AI, including at the G7 and amongst like-minded partners.”

More specifically, ISED claimed it wanted the code of practice to be ready for the Standing Committee on Industry and Technology (INDU) hearings on the AIDA commencing this fall and for the G7 ministerial meeting in December as part of the Hiroshima AI process. The choice of these two venues alone underscores that good governance processes are at least as important as the content of the code.

The timing of this consultation is inconsistent with an inclusive, properly informed deliberative process. ISED announced the round table sessions in the dog days of summer, which, apart from Christmas week, is the time of year when people are least able to contribute to policy development.

The invitation email I saw was dated late in the day on August 4, with the first round table scheduled just three business days later. The draft code was only sent to attendees the day beforehand. Even parliamentarians were caught by surprise, including, most notably, the all-party, cross-chamber Parliamentary Caucus on Emerging Technology (PCET). The consultation period on the “Consulting with Canadians” webpage is specified as August 4, 2023, to September 14, 2023. This short period was further abbreviated because the discussion document was not posted to the site until nearly two weeks after the period’s starting date.

It is hard to understand the justification for curtailing the opportunity for deliberation. Why was the consultation period on the code elements not extended until at least after Japan, as the G7 chair, had presented its own draft code of conduct to G7 digital ministers in September, a draft the government could build on? The G7 draft promised to contain stronger and more widely accepted protective measures than Canada’s draft code of practice. Waiting until then and providing more opportunity for public feedback would have helped avoid the appearance of what analyst Michael Geist has described, in previous similar situations, as “consultation theatre.”

The concerns raised by the consultation’s timing are exacerbated when one attempts to answer other fundamental questions about the history of the draft code: for example, what policy analysis process informed ISED? Did ISED make any of the code’s provisions requirements for the funding it provided to AI companies? If so, how effective were they? What resources did ISED draw on? What role did the many other relevant government agencies and external actors, especially the AI industry, play (or not) in creating the code? How wide a range of relevant stakeholders, in particular those representing human and civil rights, labour, artists and creators, educators, children, immigrants, and other underrepresented communities that might be adversely affected by AI applications, has ISED reached out to? How have their views been, and how will they be, reflected in the code? What is the process for finalizing the code going forward? The answers to these questions are important for bringing participants up to speed on the work to date and for helping them learn how they can contribute effectively.

The public record provides little insight into any of these questions. Claims in the consultation discussion document that “the Government has engaged extensively with stakeholders on AIDA,” and that the draft code elements were “based on the inputs received to date from a broad cross-section of stakeholders,” would benefit from specificity. It is discouraging that one of the most obviously relevant government agencies, the Office of the Privacy Commissioner, reports that ISED did not contact it in relation to the code. To whom did the government reach out, how, and what was the nature of the input provided?

The prime minister’s mandate letter calls on the minister of innovation, science and industry, along with other ministers, “to include and collaborate with various communities, and actively seek out and incorporate in [the minister’s] work, the diverse views of Canadians.” However, consistent with the AIDA bill and its subsequent companion document, both the draft code and its consultation process give the impression of being primarily shaped by and designed to serve the business interests of the AI industry — reassuring Canadians about selected aspects of generative AI, while placing minimal burdens on AI companies.

The first round table reinforced this impression of favouring AI industry actors. One of the five questions used to guide the discussions identified the initial value chain targets as “developers, deployers and operators,” with no mention of those who contributed to the training data, nor of users and others potentially at risk. Another question asked whether the code would be practical for these particular target groups, while a third enquired, “Is the proposed code as written implementable by many different kinds of AI companies and technologies?” These are all legitimate questions. But none were directly oriented to the many other kinds of AI stakeholders. The short duration, opacity, unbalanced participant list, and structure of the consultation process put these other potential contributors at a clear disadvantage.

ISED’s desire for haste and its primary constituency became even clearer within two weeks of the closing of the consultation period. On September 26, Minister of Innovation, Science and Industry François-Philippe Champagne appeared before INDU as the first witness in its hearings on Bill C-27, the omnibus bill containing the AIDA. Given that the ISED official leading the first round table back in August had stated that the tight timeline for the consultation was to have the code ready for these hearings, it was surprising that the minister made no mention of it. However, the next day, at the All In AI conference in Montreal, billed as “the most important event dedicated to Canadian AI,” Champagne made the first public announcement of the outcome of the consultation: the Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems. Several leading AI companies had already signed. Some INDU members of Parliament were not amused, with one commenting that, since the minister officially reports to Parliament, his failure to inform them when he appeared before them a day earlier was “pretty hard to accept.”

The implemented code leaves most of the current public controversies unaddressed — among them misinformation, unreliability, explainability, privacy, intellectual property and other rights; liability; prohibitions or moratoria on especially high-risk deployments; and the transparency, accountability, accuracy and other responsibilities of the AI companies themselves.

As the consultation discussion document makes clear, a principal aim of the exercise was “to foster widespread AI support.” Even leaving aside other more important aims that a proper public consultation should pursue, fostering widespread support for AI business interests can only be achieved legitimately by proactively welcoming “the diverse views of Canadians” and providing them a genuine opportunity to weigh in knowledgeably.

Unfortunately, the cavalier approach to governance exhibited in ISED’s development of the AI code of practice was not an anomaly, but consistent with its treatment of the AIDA from the beginning. A brief sketch of this trajectory gives a good sense of how the department put its mandate to advance the AI industry in Canada ahead of espoused norms of good digital governance.

When ISED added the AIDA to Bill C-27, for example, it did so not only without prior public consultation or even prior notice, but also without any of the usually expected background materials justifying this highly consequential and potentially controversial legislation: no white paper, no policy analysis, no explanation of any kind. Accurately characterized in the Commons as “more of a draft than a law,” the AIDA assigns key aspects to the regulatory discretion of the minister.

It was eight months before ISED provided further insight into its thinking, in the unusual form of an AIDA “companion document.” Unfortunately, it fails to cite any publicly available ISED-developed background materials that informed the drafting of the bill. It does mention the report of the Public Awareness Working Group of the Advisory Council on AI; however, the report appears to have influenced the legislation very little, since ISED ignored its clear call for a genuine consultative process.

From well before it introduced the AIDA, ISED has guided and sought advice from the federal government’s Advisory Council on Artificial Intelligence. The council is mandated to help build the AI industry in Canada, with no consideration of the wider implications of AI development and deployment. In April, the minister went so far as to enlist the council behind the scenes to promote an open letter titled “Support the Artificial Intelligence and Data Act (AIDA)” that called for the urgent passage of the legislation.

It is clear that no stage of the AIDA development process, from inception to the recent consultation on the draft code of practice, lives up to the espoused goals of reflecting Canadian values around democratic governance, earning Canadians’ trust and reinforcing Canada’s contributions to an international AI governance regime. Relative to the major investments the federal government has made so far in promoting the AI research and business sectors, there is too little on display in terms of AI governance innovation. While Canada has a well-earned reputation internationally for the technical advances that underlie the current wave of AI excitement, it lags well behind its usual comparator states, notably Australia, the European Union, Japan, the United Kingdom and the United States, when it comes to policy development to grapple with the societal implications of population-wide AI deployments.

Process Recommendations

Strengthening the emerging AI governance regime in Canada and earning the necessary legitimacy will require significant improvements to the AI policy development process. Familiar basic principles of democratic governance as well as the OECD’s Recommendation of the Council for Agile Regulatory Governance to Harness Innovation provide valuable guidance for making the corrections required.

Implementing AI systems at scale can have wide societal consequences well beyond the scope of ISED’s mandate. This implies other government ministries and agencies also need to play a formative role in crafting the AIDA legislation. Such government-wide collaboration can build on ISED’s current work with Justice Canada, Global Affairs Canada and the Treasury Board Secretariat in Canada’s negotiations with the Council of Europe (COE) to develop a treaty on AI that prominently values human rights, democracy and the rule of law. The COE’s Consolidated Working Draft of the Framework Convention on AI, human rights, democracy and the rule of law provides useful material for Canada’s own AI regime. Not only is Canada’s AI regulatory regime expected to conform to the convention once ratified, but its general provisions, obligations and principles are better aligned with the goals of avoiding harm, building trust and advancing the public interest than anything ISED has made public so far. Other ministries with obvious contributions to make include Employment and Social Development Canada (labour), Public Safety Canada (cybersecurity) and Canadian Heritage (content creators and artists). The Office of the Privacy Commissioner also has an important, but so far neglected, role to play.

ISED has a mandate to promote the AI industry in Canada and the federal government has made major financial investments in support of this. To avoid conflicts of interest and, just as importantly, the appearance of such conflict, a well-regarded agency at arm’s length from government should lead the public deliberation process, including future consultations to redraft the AIDA and create its regulations.

Consultations, advisory sessions and other forms of stakeholder engagement should be conducted as transparently as possible. Notice of meetings, the documentation provided, a list of participants and records of views expressed should be made public in a timely manner. Exceptions to this would be permissible when explicitly justified; for example, to promote candid discussion of controversial issues, the Chatham House Rule may be appropriate.

To ensure an appropriate range of stakeholder perspectives is brought to bear, proactive outreach will be necessary in many cases. Identifying such stakeholders may require a prior impact assessment exercise.

Sound AI policy formulation calls for “evidence-based decision-making,” a point reiterated in the prime ministerial mandate letter. It should be based on in-depth, independent, expert studies of AI development; realistic assessments of actual and potential benefits and risks; and the existing and proposed regulatory measures of other nations, as well as emerging international agreements. Policy-oriented studies should identify the measures most promising for adoption in Canada’s current circumstances.

So far, there is scant public evidence of any such studies. If they have not been conducted, as appears likely, the government should commission them immediately. Australia’s chief scientist has recently shown that governments can start catching up quickly, even in this fast-changing arena, if they make doing so a priority. In any case, ISED should make reports of these studies publicly available as soon as possible. Canadians should not have to rely on slow access-to-information processes when the need is this important and urgent.

The current AIDA governance process should be revised to incorporate the significant improvements outlined here. To achieve the desired agility, it is worth considering innovative approaches drawn from system development, which in recent decades has moved beyond the familiar “waterfall” model, in which standard activities such as analysis, design and implementation occur in successive stages, with formal sign-offs between them.

Once broad foundational premises have been settled, these activities run in parallel, with frequent communication between the personnel involved. Short cycles of iterative refinement enable convergence on shared objectives, mutual alignment of actors and problem-solution fit more rapidly and reliably than conventional methods allow. While democratic law making has some obvious unique aspects that need to be accounted for, such an approach can be applied to the AIDA. For instance, the foundational premises might draw from the pending COE convention (definitions, general obligations and principles), and include the enrolment of key stakeholders and assurance of the resources needed to support active participation throughout the process. Then public education and consultation, research, and the drafting of legislation, regulations, codes and standards can begin in earnest. Formal parliamentary approval of a new law would come toward the end of the process, but the law could then be implemented quickly. If begun soon, such an agile approach wouldn’t have to extend the overall multi-year AIDA process beyond its current schedule.

While the current trajectory of the AIDA’s development is flawed, the federal government still has a chance to craft legislation, regulation and codes of practice that deserve popular support, provide the basis for an AI industry well-aligned with Canadian values, and make a worthy contribution to AI governance internationally. It can do this by reaffirming well-established norms of good governance and adapting them to the current circumstances.

About the Author

Andrew Clement is professor emeritus in the Faculty of Information at the University of Toronto. His research and teaching interests are in the social implications of information/communications technology and human-centred systems development.