Canada Needs an Artificial Intelligence Agency

Generative AI applications are just the tip of the iceberg.

October 5, 2023
Dr. Richard Chazal, the medical director of the Heart and Vascular Institute at Lee Health, is framed by a CT scanner at HealthPark Medical Center in Fort Myers, Florida, on September 14, 2023. He is using Cleerly AI technology to test for plaque buildup that can lead to a heart attack. (REUTERS)

If 2022 was the year artificial intelligence (AI) got smart, 2023 is the year it is creating a splash — and by “splash” we mean the kind that sends tsunamis ripping through economic and social systems. Generative AI platforms such as the large language model ChatGPT and image-generating applications such as DALL-E and Midjourney have created a sensation and much consternation among professional workers, and provoked hurried regulatory reactions. But these represent only the tip of the iceberg of AI applications now in the industrial pipeline.

Indeed, a new “hockey stick” curve is shaping up: it’s in AI patents.

We have seen such curves before — in industrial production at the dawn of the Industrial Revolution in the early 1800s; in intellectual property creation at the dawn of the knowledge-based economy in the early 1980s; and in data flows at the dawn of the data-driven economy era, circa 2010. In each case, economies were transformed as a new form of productive capital came into widespread use.

As we argue in a recent study, the upturn in the pace of AI patents signifies the arrival of yet another new economic age, based on yet another evolution — the era of “machine knowledge capital.” This evolution will bring automation of many tasks currently carried out by humans, with profound impacts across the waterfront of human industry and activity.

As with previous technological revolutions, the age of machine knowledge capital is being built on a wave of technological breakthroughs that have combined to increase the power of AI systems by orders of magnitude, and by the development of business models that support the rapid and pervasive application of these new capital assets.

As we write, the largest corporations the world has ever known, working with the most advanced technology the world has ever seen, are leveraging the greatest store of data the world has ever gathered, to develop new applications. Millions of developers working in hundreds of thousands of smaller firms and universities are working on powerful development platforms to deploy AI in areas ranging from astrophysics, to infectious diseases, to pharmacology, to military technologies. The newest trillion-dollar firm on the block, Nvidia, is marketing desktop AI workstations preloaded with leading AI development tools such as TensorFlow and PyTorch, at prices starting in the US$4,000 range. We have entered the garage band era of AI development.

There are at least three major implications for Canadian policy.

First, machine knowledge capital will impact every sector of the country’s economy, raising issues across every dimension of public concern.

Second, while Canada can boast of being an AI player with 20 public research labs, 75 AI incubators and accelerators, 60 groups of AI investors, and more than 850 AI-related start-ups, according to the federal government, the reality is that virtually all the AI assets that will be deployed in Canada will be imported. Moreover, virtually every product that Canada exports will be competing in a global market in which the conditions of competition will be shaped by machine knowledge capital assets developed abroad and by the rules of the game adopted by the major AI jurisdictions. Canada will therefore be a standards taker, not a standards maker.

Third, we’ll face an acute shortage of talent: addressing the breadth and particularity of issues raised as machine knowledge capital transforms our economy and society will require skills and knowledge that are in exceedingly short supply.

These three implications point to a fourth: Canada will inevitably require an AI agency to provide coherent governance. It will need to be staffed with the country’s best and brightest, and work on it should begin now.

Such an agency will have its hands full.

Globally, AI regulatory frameworks are developing quickly. On May 11, European Parliament committees approved a draft negotiating mandate on the AI Act, Europe’s flagship legislation that aims to regulate AI based on its potential harms, with a primary focus on high-impact AI systems. The draft contains many strong provisions, including mandatory human rights–based impact assessments, AI watermarks, and a requirement that technology vendors monitor downstream uses of their products. But what exactly constitutes a high-impact AI system remains to be defined.

The Biden administration’s Blueprint for an AI Bill of Rights also adopts a human rights–based approach to the governance of AI. It is structured around five principles: safety, protection against algorithmic discrimination, data privacy, explainability, and fallback on human intervention — meaning a human backstop. The blueprint is meant to support the development of technical standards and practices for particular sectors and contexts, but does not detail what these domain-specific standards might be. That work remains to be done.

Similarly, Canada’s proposed AI law — the Artificial Intelligence and Data Act (AIDA) — sets out measures regarding the use of anonymized data in AI systems; addresses the design, development and deployment of AI systems; and distinguishes between general and high-impact AI systems. However, like the European legislation, it does not define the scope of high-impact systems. Moreover, it places the burden of evaluation, monitoring and record-keeping on the developers of AI systems. AIDA’s brevity reflects the fact that it leaves much to regulations that have yet to be developed.

The AI agency will not have a quiet job.

On the domestic front, societal resistance to the disruption wrought by machine knowledge capital is already flaring. The Hollywood writers’ strike is but one example of a professional community seeking to legally protect its jobs from AI systems. Canadian novelist Margaret Atwood has joined other prominent writers in a public call to put a stop to the unauthorized use of literary texts in AI training. In January, a group of US artists filed a lawsuit against four AI software companies: Stability AI Ltd., Stability AI Inc., Midjourney Inc. and DeviantArt Inc. The suit claims the companies infringed on artists’ rights by feeding their work into algorithms that created images stylistically close to the originals. Several Canadian artists are waging their own legal battle with AI companies, arguing that these companies’ products will lead to the loss of livelihood for professionals in design, illustration and video game development. The Canadian government has already faced a public backlash over biased, discriminatory and ineffective algorithms. This is just the beginning.

Internationally, Canada may yet be able to influence the path of AI regulation through its own measures. For example, when the Italian Data Protection Authority temporarily banned ChatGPT on March 31 on the grounds of unlawful access to personal data, OpenAI responded by creating a privacy-mindful version of the app that allowed it to return to the Italian market the following month; OpenAI’s competitors quickly followed suit. In general, however, Canada will need to provide expert participation in the networks forming around AI regulation if the country is to influence the framing of a rules system from the perspective of a small, open, rules-taking economy.

In this regard, in a few short years Canada will participate in the review of the Canada-US-Mexico Agreement, which contains strong measures on data flows and protects source code and the algorithms expressed in it. In the latter case, there is an exception for a “specific investigation, inspection, examination, enforcement action or judicial proceeding.” Canada will require a competent, trusted and respected counterpart to safeguard our trade interests and establish our standing in the international regulatory field. We will be trading AI, after all.

In short, Canada will need an AI agency to support the development and implementation of regulations, provide after-market oversight, and represent Canada in international fora. AIDA does not provide for such an agency, and that is a gap that should be filled.

Where should this agency sit? As University of Ottawa scholar and CIGI Senior Fellow Teresa Scassa has noted, it may not be wise to situate the regulatory and oversight responsibilities in a federal agency that promotes commercial exploitation of AI systems. A new economic age demands new governance structures. To that end, a new AI agency should by rights sit alongside the new national security council — the creation of which has been motivated in part by the need to address the new security environment created by the digital transformation — and a new economic council, which will be needed to help organize the country’s adaptation to this transformation.

Canada, and the world, face tumultuous years ahead. The age of machine knowledge capital is here. It’s imperative we begin building a framework for governing it immediately. Canada was slow to address the governance issues of the knowledge-based economy, and as a result our innovation performance fell in the Organisation for Economic Co-operation and Development rankings. The age of the data-driven economy came and went in the blink of a decade and Canada mostly missed out. The pace of change today is unprecedented — the tsunami of AI apps is rising. AIDA is a start, but we are already playing catch-up. In point of fact, we need the AI agency yesterday.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Authors

Dan Ciuriak is a senior fellow at CIGI, where he is exploring the interface between Canada’s domestic innovation and international trade and investment. He is the director and principal of Ciuriak Consulting, Inc.

Anna Artyushina is a post-doctoral researcher in digital governance in the School of Urban and Regional Planning at Toronto Metropolitan University.