Real Harms, Not Doomsday Robots, Should Be the Focus of AI Regulation

An international treaty is likely required, which will take goodwill and effort.

July 31, 2023
Japan Airlines’ remote-controlled robot “Jet” chats with a boy at the terminal of Haneda Airport in Tokyo on Wednesday, April 24, 2019. (Yoshio Tsunoda/AFLO via REUTERS)

Generative artificial intelligence (AI) has taken the world by storm, and by surprise. What was once confined to a specialized domain now permeates the daily lives of ordinary citizens. Yet there are no coherent guardrails in place, no user-friendly manuals and no national or global standards or regulations.

Perhaps this was one reason why the most recent Munk Debate, which took place in Toronto on June 22, focused on whether AI poses an existential risk to humankind — presumably by becoming sentient and destroying its creators. What faith we have in our inventions!

Is this really the question we should be debating now?

To begin, there are many varieties and classifications of AI, and few of these systems could conceivably ever lead to machine sentience. Further, future sentience is not the issue. AI-driven social media platforms have already been complicit in horrible outcomes, with one implicated in genocide. You don’t need sentience within the algorithm for that; the sentience resides with the human controller.

Second, the claim made in the debate was that “we” (whether this was a collective “we” or a specific entity was not clear) would never unleash technologies that would cause harm; they would first be vetted for safety.

As mentioned above, that clearly has not been the case to date. Moreover, if new technologies are to be vetted for safety before release, why wouldn’t corporations rely on academics and other independent professionals to do the vetting? As the industry now operates, how do consumers even know what vetting is taking place? Why must we rely on whistle-blowers, such as former Meta employee Frances Haugen, to obtain insights?

A third question arises: Is the level of risk important? It seems so. Most countries are moving toward risk-based frameworks for AI governance, recognizing that not all such systems have the same impact and so require different levels and types of regulation.

So, where does the above lead us?

In a word, governance. Technology does not determine what technology should do; people do. We therefore require a clear set of rules for the development and use of these technologies. Rather than indulge in hyperbole about AI potentially becoming some type of archfiend or deity, we should regulate it and ensure that its uses conform to our values.

Clearly, the popular notion that technology moves too fast to be controlled must be dispelled. We absolutely can control it. We can ensure it isn’t released before we understand its impact. We can impose a duty of care on developers, and implement sandboxes to test new technologies before release; we can ensure that developers take ethics courses and embed human-rights principles in their designs; we can ensure that AI systems presenting high risks, however defined, are very tightly controlled and barred from use in some situations.

It’s important to note here that the view expressed by some technologists, that firms require new regulations before they can do the right thing, is nonsense. Guidelines for responsible business conduct already exist.

Where regulators can help is in looking at the bigger picture: ensuring that regulations account for the many interwoven areas such as data governance, privacy, competition and consumer protection, and that they are up to date and fit for purpose.

We must avoid the old argument that regulation stifles innovation. Rather, let’s acknowledge that abusing human rights stifles innovation! Appropriate regulation instills trust and supports creative enterprise.

There is a legitimate concern about rogue actors or nations using AI in nefarious ways, which reinforces the need for global frameworks. (The need for collective action was evident in the recent Group of Seven (G7) Hiroshima Leaders’ Communiqué.) But the development of those frameworks needs to go beyond the G7 to ensure adequate representation of all people. AI, especially generative AI, is now in the hands of billions.

An international treaty is likely required, and all of this will take a great deal of effort and goodwill. A Digital Stability Board would be one way to begin dealing with interconnected digital issues at the global level.

But please, let’s stop waving our hands and shouting that a doomsday robot is coming to get us. There are real harms at issue. They are already evident. Let’s deal with those, and by dealing with them we also deal with existential risks. And let’s leave the science fiction to the movies.

This article first appeared in the Toronto Star.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Robert (Bob) Fay is a CIGI senior fellow and an expert in the field of digital economy research.