Here’s a Radical Idea: The Doomsaying about AI Governance Could Be Wrong

The tone of anxious pessimism that pervades the discourse around AI governance is unwarranted.

November 21, 2022
Ai-Da Robot, a “robot artist,” made history as the first robot to address Britain’s House of Lords on October 11, 2022. The robot took questions about whether creativity is at risk from the rise of artificial intelligence. (Elliott Franks/Cover Images via REUTERS)

In recent months, laments about the parlous state of governance as it pertains to artificial intelligence (AI) have risen to a crescendo. Some critics point to business leaders who enthusiastically embrace AI while paying far less attention to regulation. Others take inadequate governance as a given. There have been numerous urgent warnings about the need for more and better regulation of this rapidly advancing technology.

That this angst is peaking now is unsurprising. AI capabilities have progressed tremendously in the past decade and continue to grow year over year. The technology can now generate realistic images from text, predict the structures of previously unknown proteins, write code and author marketing campaigns. AI governance mechanisms are, of course, essential to ensuring that AI benefits rather than harms humanity. Set aside the possibility of “the singularity,” a point where technology evolves beyond humanity’s capacity to understand or control it. Right now, as AI becomes more economically entrenched, it will be important to have, for example, robots that enhance and ease jobs rather than replace them, and facial recognition systems free of racial bias. Regulation is critical for just outcomes.

That said, the tone of anxious pessimism that pervades the discourse around AI governance is unwarranted. Governance will come and is indeed already on the way. History — specifically, the story of America’s transition from the excesses of the Gilded Age to the regulatory spirit of the Progressive Era — offers a model through which AI’s current explosion and pending regulation can be better understood.

The Gilded Age began roughly with the end of the US Civil War in 1865 and extended into the early twentieth century. It was a time of extraordinary wealth and burgeoning inequality, underpinned by transformative technological change. Steam engines and railways burst onto the scene, as did lightbulbs, telephones and the Bessemer process. These technologies came alongside monopolistic companies, new forms of economic organization, robber barons and a fundamental reshaping of American life.

Eventually, however, millions of Americans began waking up to the reality that these advances, while transformative, were not necessarily all good. Muckraking journalists such as Ida Tarbell exposed the ruthlessness of companies such as Standard Oil, while Upton Sinclair laid bare the unregulated meat-packing industry in his novel The Jungle.

The new public consciousness inspired by such exposés led to meaningful political change during the presidential administrations of Theodore Roosevelt, William Howard Taft and Woodrow Wilson. Monopolies were broken up, new regulations and regulatory bodies were established, and public sentiment shifted toward a tacit agreement that it was time to rein in some of the detrimental effects of new industrial technologies. Admittedly, it took the United States a couple of decades, but in the end technological governance won.

A similar cyclical pattern of technological rise, awareness of problems and corresponding regulation is unfolding now with AI. As the technology has become increasingly embedded in our economy, its shortcomings have come under greater critical scrutiny: bias in facial recognition systems, the potential harms of ever-larger language models and the dangers of industry-dominated AI research, to name just a few.

However, unlike in the Progressive Era, when tangible change took decades to manifest, beginning in earnest only in the early 1900s, new legislation is already on the horizon. In Canada, in October of this year, the House of Commons Standing Committee on Access to Information, Privacy and Ethics released a formal report recommending the regulation of facial recognition technology and further study of AI. The European Commission has proposed an AI Act that would govern a wide range of AI applications and is now working its way through the European Union’s legislative process. Not long ago, the White House released a “Blueprint for an AI Bill of Rights” outlining a set of principles that should govern AI development.

In fact, AI and tech governance seems to be one of the few issues in America on which there is bipartisan political consensus. All of this has come within a decade of the 2012 release of AlexNet, a seminal AI model that many have argued inaugurated the recent deep-learning revolution.

There is, of course, much more important AI governance work ahead. The US “Blueprint for an AI Bill of Rights” is, after all, only a blueprint: a nonbinding statement of principles rather than enforceable law. But positive steps have been taken, and that is cause for optimism. History and self-interest suggest that human societies will grapple with this technological revolution as they have with previous ones. Rather than cry that the sky is falling, we should engage in this work.

The views expressed in this article are the author’s alone, and not representative of those of either the Stanford Institute for Human-Centered Artificial Intelligence or the AI Index.


About the Author

Nestor Maslej is a CIGI fellow and research manager at the Institute for Human-Centered Artificial Intelligence at Stanford University, where he manages the AI Index and Global AI Vibrancy Tool.