The Best Way to Govern AI? Emulate It

Before we launch a new agency or introduce a new law, let's use the tools we already have.

May 8, 2023
Scientists at the University of Texas at Austin have used AI to decode people's thoughts from a brain scan. The new system, called a semantic decoder, can translate a person's brain activity, while they listen to a story or silently imagine telling one, into a continuous stream of text. (University of Texas at Austin via REUTERS)

In a few short months, generative artificial intelligence (AI) tools such as OpenAI’s ChatGPT have spread like wildfire, spawning an entirely new market of products and services that leverage the technology. At the same time, high-profile AI researchers continue to caution against the speed of these commercial deployments, citing a wide range of unchecked risks to people in the present and future. Those future risks, in particular, recently prompted a number of AI researchers, academics and tech leaders, such as Elon Musk and Steve Wozniak, to sign a controversial letter calling for a “pause” on the development of more powerful AI systems.

Its signatories advocate for “new and capable regulatory authorities dedicated to AI” and “well-resourced institutions for coping with the dramatic economic and political disruptions…that AI will cause.” In a recent article in The Economist, Professor Gary Marcus, a leading AI critic who also signed the letter, similarly calls for “the immediate development of a global, neutral, non-profit International Agency for AI (IAAI), with guidance and buy-in from governments, large technology companies, non-profits, academia and society at large, aimed at collaboratively finding governance and technical solutions to promote safe, secure and peaceful AI technologies.”

Even those skeptical of a “pause” seem to be pushing for a similar strategy. For example, Dr. Rumman Chowdhury, a data scientist and former lead of Twitter’s ethical machine-learning team, has openly criticized the letter. She recently penned an opinion piece in support of a new “generative AI global governance body [to be] funded via unrestricted funds” from tech companies, citing the International Atomic Energy Agency and Meta’s Oversight Board as precedents. And former Federal Communications Commission chair and Brookings fellow Tom Wheeler has proposed a “specialized and focused federal agency staffed by appropriately compensated experts.”

In addition to calls for new agencies and institutions, there are also demands for new laws and regulations to govern AI. The most ambitious effort is under way in Europe, where a proposed Artificial Intelligence Act and corresponding AI Liability Directive will introduce stringent, risk-based requirements for AI systems, including general-purpose systems such as ChatGPT, and impose liability for damage to consumers caused by AI products and services. Multiple proposals have also been made in Washington, DC, including a bill from Representative Ted Lieu written by ChatGPT. Even industry seems to be on board, with Google’s Sundar Pichai, OpenAI’s Sam Altman and Microsoft’s Brad Smith all calling for new AI regulations (while simultaneously gutting their internal AI ethics teams).

Requesting new laws and regulations, while continuing to ignore or violate existing ones, is a long-running strategy for the tech sector. And the purported special or novel nature of digital technologies is often a device used to distract from, delay and defer attempts to regulate them.

But all digital technologies consist of the same three building blocks — data, people and corporations — and all three are already subject to a broad array of existing laws and regulations. For example, data protection and privacy authorities worldwide are working to reconcile new AI with existing laws, and AI tools such as ChatGPT and the “virtual friend” Replika have already been subject to investigations and enforcement actions for potential violations of the General Data Protection Regulation (GDPR). Meanwhile, GDPR enforcement of earlier technologies already suffers from a severe lack of resources.

In the United States, four federal agencies recently issued a joint statement to remind companies that there is no “AI exemption to the laws on the books.” Similarly, Federal Trade Commission (FTC) chair Lina Khan argued in a recent op-ed that although AI tools are novel, they are not exempt from existing rules, adding, “the FTC is well equipped with legal jurisdiction to handle the issues brought to the fore by the rapidly developing A.I. sector, including collusion, monopolization, mergers, price discrimination and unfair methods of competition.” Unfortunately, the FTC and other federal agencies are even more thinly staffed and resourced than their European counterparts, and face additional cuts to government spending.

So, it’s not necessarily the case that new technologies are outpacing existing laws and regulations, rendering them obsolete. Rather, it’s that technological advancements demand additional resources for their governance. And with each new legal framework or governance mechanism we introduce, the resource problem only gets worse. We also risk losing important institutional knowledge and perspective about how these laws and regulations were applied to earlier technologies. That can encourage a myopic view based merely on the technology du jour.

In other words, with each technology hype cycle, there are calls for new laws, regulations and institutions, despite authorities not having adequately enforced existing laws or invested sufficient resources in established institutions. What makes us think that the new mechanisms will fare any better? Might they just further subdivide limited public resources across a more complicated landscape of actors and rules? This may be the fastest way to ensure that our existing governance infrastructure becomes obsolete.

In general, we can respond to new technologies in three ways. We can apply the letter of existing laws “as is,” effectively ensuring that new technologies remain out of scope (the “law of the horse” problem). We can start from scratch and craft new, technology-specific laws each time, which would effectively require an unlimited supply of resources. Or, we can apply existing laws in new ways, adapting them to fit the spirit of the law and its objectives. Although many existing laws, including the GDPR, are meant to be technology-neutral — meaning that the rules should apply equally irrespective of how a technology works or operates — in practice we often fail to adhere to this principle.

Generative AI tools are built on the shoulders of giants — specifically, we the people. This seemingly powerful or even magical technology was built not from scratch but by harvesting a vast corpus of human-generated data from the historical web and continually retraining and fine-tuning it over time. It is able to advance quickly by continually building on what is already there. We could stand to learn from this when it comes to the governance of AI or any other technology.

If we fear that AI will replace human labour, we can invest in continually reskilling people. If we worry about technology obsolescence, we can demand a right to repair. By the same token, new governance mechanisms risk making existing ones obsolete unless we learn to adapt and repurpose them. This constant starting over is unsustainable. Instead, let’s take a page out of AI’s book. Before we launch a new agency or introduce a new law, let’s use the tools we already have, at least in the first instance, and learn to fine-tune them as we go. This may be our best hope yet of keeping pace with the rapid advancement of AI and the next technology to follow.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Elizabeth M. Renieris is a CIGI senior fellow, lawyer, researcher and author focused on the ethical and rights implications of new technologies. Her latest book is Beyond Data: Reclaiming Human Rights at the Dawn of the Metaverse.