Artificial Governance: AIDA Repeats the Failed Patterns of Digital Regulation

The draft legislation is poorly conceived and rushed; a fundamental rethink is needed.

December 18, 2023

In Canada, the Artificial Intelligence and Data Act (AIDA), a bill purporting to regulate the development of artificial intelligence (AI) in the private sector, is in the latter stages of consideration by the House of Commons Standing Committee on Industry and Technology. The bill is a poorly conceived and rushed piece of legislation. As many have argued, AIDA is so weak, both because of the process used to create it and because of its governance constructs, that it should be stopped entirely.

Stopping the bill would enable a fundamental rethink, one that starts with clear answers to the question of “what” we’re trying to accomplish by regulating AI before jumping into an undeveloped approach to “how.” As things stand, the ongoing lack of broad-based participation by various publics in the governance of AI continues to allow exclusion to be automated and accelerated through technology. AIDA is fundamentally anti-democratic.

What is being referred to as “AI” for regulatory purposes in AIDA, as well as in EU governance, is mostly a set of relationships, infrastructures and, at best, advanced statistics: an exceptionally sophisticated pattern-matching tool built to achieve an objective function, an outcome. Here, Canadians might take a page from AI’s playbook and ask what objective function the federal government is trying to achieve by regulating information processing and pattern matching.

The leadership of Innovation, Science and Economic Development Canada suggests the objective function is economic development, as displayed through its persistent signalling to the nascent sector. But when the government turns to regulation, the objective function swirls into myriad things: upholding human rights, mitigating risk, addressing novel threats — outcomes the government isn’t programming for as it creates this legislation. While minimizing human harm is the public story behind the rush to regulate, as seen in the amendments presented by Minister François-Philippe Champagne on November 28, that story does not track with the provisions of, or the actions suggested by, AIDA. The public story builds support for the law, but the government’s actions belie that intent. AIDA is about accelerating the uptake and use of AI across all sectors of the Canadian (and international) economies.

But even if the Government of Canada did have a clear, focused objective, people — much like data and, yes, AI — don’t have just one objective function. Whether we act as individuals, collectives or cities, there is no single universal problem set, or pattern set, that drives our choices, lives, policies and the like. There is no one goal any of us is trying to achieve to the exclusion of all others. Our goals are always rife with complexities and trade-offs. That complexity is good and important; it is what governance is designed to protect, and it is not something AI can “solve for.”

Governance is how we balance that complexity — known and unknown, predictable and utterly confounding. We design governance to balance the conflicts within and between cultures, interests and any number of people. Its role is to give us adaptive methods and processes for realizing our many complex, often conflicting desires for the information, interests and resources that can help us discover and achieve things far more complicated than a paycheque. Within the construct of governance, we also have a range of different potential methods for engagement and participation. Building a process to support and expand the ongoing participation of various publics in the governance of AI, and of other technologies, is the approach needed to meet the current moment. It’s an approach Canada is not taking with AIDA.

AIDA Is the Outcome of a Deeply Anti-democratic Pattern

The bill’s appearance last summer took many by surprise. The federal government’s lack of groundwork to tie the bill’s mandate to existing regulatory bodies and mechanisms is exceptional and troubling. It has rationalized the omission of this critical work by saying that it’s focused on harmonizing Canada’s regulatory approach with those of other international actors, particularly the European Union.

This approach to regulation — a techno-centric, market-forward approach in the guise of harm prevention, premised on individual rights — isn’t new. Indeed, it’s a pattern in technology regulation. The national and international history of data regulation is material to this moment, not only because AI itself is a product of the data on which it’s trained, but also because AI regulation in Canada appears to have failed to learn from this pattern in digital governance.

AIDA’s focus on “AI” as a technology, rather than on the processes through which it is put to use in context, is one of the clearest signals that these regulatory undertakings are performative. The global regulatory community is too far from the practical use of these technologies in situ, in applied operational contexts, to get at the problems it says it’s trying to fix. Regulators continue to call on experts with generic technical specializations while actively avoiding the people who could walk them through the use of this technology in any range of practical situations. Built on such a flawed and narrowly expert-led consultative process, the regulation will actively undermine solutions that might work.

Canada’s stated rationale for AI regulatory harmonization closely mirrors the data harmonization thinking that informed Canada’s existing private sector privacy and data law, the Personal Information Protection and Electronic Documents Act (PIPEDA), back in the early 2000s. That law was created to support unimpeded international data flows, in much the same way that AIDA and other global regulations are being created to support the unimpeded automation and use of AI. In the spirit of refusing the erasure of close-to-hand and helpful information, such as the lessons learned from data governance over the past two decades, let’s take a beat and see how data governance has been going, globally.

International Track Record on Data Governance to Date

At the international level, data governance has been a source of division, entrenching existing geopolitical divides. If there’s been one overarching and uniting theme to data regulation, it has been the primacy of corporate interests over individual rights and needs: ensuring, essentially, that meaningful governance doesn’t in any way upset the self-interested-economic-growth-at-all-costs mentality of both technology companies and, more concerningly, governments.

Of course, data governance issues are never fully public or private. Increasingly aggressive interventions in technology platforms by the United States and China, justified in part by concerns over political speech, have not only fed an escalating trade war; they’ve also put large companies under pressure, with ByteDance facing demands to divest its American operations and Apple facing tightening restrictions in China’s consumer market, even as its manufacturing there remains.

Data governance has not only accelerated tensions between competitors; it has also introduced faults into some of the world’s strongest international relationships. Governments have been loath to cede even the most minor of potential advantages to other countries, including historically aligned partners, leading traditional trading partners to let key relationships languish when they’re not actively trying to exploit one another.

The United States and the European Union, for example, have historically had one of the strongest bilateral trade relationships in the world. But they have yet to establish a bilateral data governance agreement that stands up to judicial scrutiny. In other words, two of the world’s largest and most closely aligned economic powers have not, with trillions of dollars’ worth of incentives and after more than 20 years of attempts, been able to come to an agreement about how to govern data. This is extraordinary.

Even within the European Union, the General Data Protection Regulation (GDPR), which began under the auspices of creating a “one-stop shop” approach to data regulation, has resulted in patchwork implementation and, notably, in Ireland exploiting its status as lead regulator for many of the largest technology firms so egregiously that its data protection authority has been repeatedly overruled by the European Data Protection Board and is now challenging the board in court. While both examples hold too much complexity and nuance to fully explore here, it’s clear that closed-door trade negotiations and courtrooms aren’t the best fora in which to establish core governance policies and infrastructure.

This dynamic is nigh universal: badly designed data governance, rushed into practice on the basis of a (performative) urgent need to mitigate harm, results in polarization, chaos and exploitation. In the United States, the federal government has been so ineffectual on data governance that individual states have created the country’s only meaningful data regulations — and they conflict with each other, on both substance and procedure.

Some states have focused on privacy and notice-and-consent frameworks, while others have moved toward data protection. This has engendered a regulatory environment that is incoherent for any organization hoping to offer uniform services nationally. India, meanwhile, has forced public digital transformation through service infrastructure even more than through policy, in ways that have resulted in marginalization and death, not to mention countless scams, security breaches and abuses.

In Canada, the distinction between private and public sector data laws, while generally beneficial, is getting increasingly hard to manage. That’s because the private infrastructures used in contexts such as policing, border control and immigration rely on data products and processes not accessible to the broader public sector and civil society. In effect, digital transformation has obscured the mechanics of governance, making it harder for people to participate in shaping the systems on which they rely.

Just as AI is formed by the data on which it’s trained, the gaping cracks in data’s regulatory foundation risk swallowing whatever good we hope AI will do overall.

Contextual Governance Provides an Alternative Approach to Regulation Grounded in Reality and Democracy

In a democracy, we are responsible for the outcomes we create for each other. This is what governance is about. The treatment of AI as an object, rather than as a set of contextual relationships, inherently refuses the complexities that create danger and harm for people. As Lucy Suchman explains, AI should not be simplistically accepted as an object, or a “thing,” at all. And while it is true that various forms of theft, from labour expertise to artistic outputs, often fuel the infrastructures sloppily referred to as AI, there is a well-known upper limit to our capacity to use ownership of data inputs to govern their use. The pressing issues we’re seeking to address about AI, as a society, only make sense when viewed through the lens of contextual and relational impacts.

Canada’s regulators need to approach AI as such. Well-designed governance assumes those relationships and contexts — and that people will have enough knowledge of the processes involved to effectively represent their own interests. Regulated ineffectually, AI hides process behind computation and, often, trade secret protections, not only limiting participation but making it impossible to understand how an “AI outcome” is delivered. This technology repeats historical patterns rather than opening the space we desperately need to break free of them.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Authors

Bianca Wylie is a CIGI senior fellow. Her main areas of interest are procurement and public sector technology. She focuses on examining Canadian data and technology policy decisions and their alignment with democratically informed policy and consumer protection.

Sean Martin McDonald is a CIGI senior fellow and the co-founder of Digital Public, which builds legal trusts to protect and govern digital assets.