Generative AI Is Here to Stay: Its Users Should Be Accountable First

The genie’s out of the bottle, and the organizations deploying this technology need to get serious about its governance.

May 4, 2023
Generative AI, such as ChatGPT, is a general-purpose technology with a wide range of possible uses, including some not foreseen by its developers. (REUTERS)

Does generative artificial intelligence (AI) pose a threat to society and humanity? In the wake of ChatGPT’s stunning release, many have been asking this question. On March 22, a group of prominent tech leaders and researchers, organized by the Future of Life Institute (FLI), called for a temporary pause in the development of all systems more powerful than GPT-4 (Generative Pre-trained Transformer 4).

The open letter, signed by tech luminaries Elon Musk and Steve Wozniak among thousands of others, cites an absence of careful planning and management. “Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control,” the letter states.

But this argument misses a critical point: the genie is already out of the bottle. ChatGPT is estimated to have reached 100 million monthly active users as of January 2023, and its website already draws one billion visits per month. Beyond its record-breaking status as the fastest-growing consumer application in history, OpenAI’s ChatGPT has transformed the AI landscape. That can’t be undone.

That compels us to look for ways to prevent harms beyond asking for a pause or for government action. Rather than focusing exclusively on the admittedly important roles of AI legislators, labs and developers, much greater emphasis could and should be placed on the urgent need for the organizations deploying this technology to get serious about its governance. The users of generative AI need to be accountable.

In contrast with narrow-use machine learning systems, such as the algorithms banks use to detect fraud, generative AI is a general-purpose technology with a wide range of possible uses, including some not foreseen by its developers. The FLI’s call for a development pause, which now has more than 26,000 signatures, understandably reflects broad public concern about the magnitude of generative AI’s potential impacts. But interestingly, the furor around ChatGPT has not sparked a conversation about the role of the many organizations already using AI. This is a mistake.

In the new world of general-purpose AI, including generative AI, a key responsibility for governance should surely fall on the enterprise proposing to put an AI system to use. That organization should evaluate whether the benefits justify the risks and potential negative impacts in the context of a specific use case. And it should be accountable, whether to a board or to shareholders, for those decisions.

Some experts agree that the principal burden of responsibility should fall on the organizations that deploy, rather than those that develop, generative AI. It’s not a loud chorus, but that can and should change. Given the general-purpose nature of generative AI, it is impractical to expect developers to anticipate and mitigate every risk. In the European Union, for example, the proposed Artificial Intelligence Act calls for a comprehensive approach to risk management for high-risk systems. But how could a developer of generative AI ever anticipate the many high-risk applications in which its system might eventually be used? All the more reason why users are integral to mitigating risk.

Why is the public finger pointed almost exclusively at AI developers rather than users? Perhaps because it’s easier and less costly for users to let somebody else worry about the problem. A 2022 study found that AI adoption more than doubled between 2017 and 2022, with 50 percent of companies using the technology in at least one business area. How many of them have implemented effective governance? Good governance involves a process and practice for asking good questions, getting answers and making judgments. It does not rely on passing the buck. It is neither easy nor simple. But it is necessary.

The time is now for organizations deploying AI to implement robust governance, starting with education and the establishment of guardrails, including measures on trust factors such as explainability, fairness, privacy and ethics. An assessment of AI’s trustworthiness needs to be accompanied by an accountability scheme outlining the roles and responsibilities of the deploying organization.

It is this composite of principles, processes, measures and management that can bring a functional, effective methodology for trustworthy AI to life.

The imperatives of governing AI hold for all forms of this technology. But the special challenges of generative AI complicate the process, owing to the opaque nature of the models and the complexity of a value chain with multiple players. Over time, new governance approaches will emerge, including regulatory instruments, legislation, software-based tools and crowdsourcing techniques. For now, however, what matters most is an organization’s ability to impose transparent standards on its own use of the technology. Organizations that use AI, in any form, should see to this without delay.

This article is co-published in the National Post.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Authors

Niraj Bhargava is the CEO and lead faculty at NuEnergy.ai and an expert on artificial intelligence governance.

Mardi Witzel is the Chief Operating Officer of Polyalgorithmic Machine Learning Inc. and an expert in AI governance.