Boardroom Drama at OpenAI Portends a Looming AI Monoculture

Microsoft’s de facto swallowing of OpenAI is not an isolated occurrence.

November 29, 2023
Sam Altman, CEO of OpenAI, attends the Asia-Pacific Economic Cooperation CEO Summit in San Francisco on November 16, 2023. (Carlos Barria/REUTERS)

In late November 2023, the palace intrigue over control of OpenAI boiled over. But behind the drama lurks a deeper problem: the tremendous concentration of power in the artificial intelligence (AI) industry. A handful of companies are building AI products and also deciding what rules should bind them. The fox is guarding the henhouse. Exhibit A: In 2019, a corporate restructuring essentially turned OpenAI into Microsoft’s research arm. Since then, the corporate giant has invested more than US$10 billion in the nominally non-profit AI startup, much of it in the form of the cloud-computing resources that products like ChatGPT need to run.

Microsoft’s de facto swallowing of OpenAI is not an isolated occurrence. Driving the trend is an ever-expanding appetite for computing power. Training and running AI systems is extremely resource-intensive: these products corral large amounts of data and computing power, and leave a toxic environmental footprint. Even after OpenAI got its hands on Microsoft Azure cloud computing, it had to restrict access to its latest GPT-4 model due to insufficient capacity.

The need to secure critical computing power has led to another partnership, this one between Anthropic and Amazon. Anthropic was founded by former OpenAI employees who left the company over AI safety concerns. Exhibit B: In return for a minority stake in Anthropic, Amazon provides it with computing power. The two companies make strange bedfellows: Anthropic is in the vanguard of responsible AI, pioneering ways to embed human rights and even non-Western perspectives in AI systems. Amazon, for its part, has a track record of facilitating problematic facial recognition technology, not to mention deploying AI to micromanage its workers and bust unions. Although Amazon is nominally providing just the plumbing for Anthropic’s operations, it would be naive to believe Anthropic’s culture, norms and values will remain intact.

AI’s need for computing power favours incumbents, but it doesn’t have to be this way.

Prior to partnering with Microsoft, OpenAI sought public funding to ensure viable infrastructure, but, according to CEO Sam Altman, “There was no interest.” In hindsight, this was a missed chance to counterbalance the AI monoculture led by a few already dominant technology corporations. It was a golden opportunity to instill public and civic values into what is arguably the most consequential technology of this era.

Not only did the US government fail to take a meaningful oversight role, but Sen. John Neely Kennedy went so far as to try to recruit Altman to chair a potential AI regulatory agency. Even as the government contemplates stepping up AI oversight, it cannot help but perpetuate the concentration of power in the hands of those already at the helm. Thankfully, the bipartisan framework on AI legislation addresses this point, promising conflict-of-interest rules for staffing a potential AI regulatory agency.

Once the dust settles at OpenAI, we will be left with weakened corporate governance of today’s most important AI company. The new board, composed of Altman supporters, sends the message that any criticism of the company’s commercialization is out of bounds.

Although many factors likely ignited the crisis, one seems to have been Altman’s objection to board member Helen Toner’s co-authoring a paper critical of OpenAI’s “copyright issues, labor conditions for data annotators, and the susceptibility of their products to ‘jailbreaks’ that allow users to bypass safety controls.”

Although the twists and turns of this saga indicate that Altman’s total victory was far from a foregone conclusion, its net effect suggests that a board cannot effectively check risks that, by the company’s own admission, it cannot adequately mitigate.

But this doesn’t have to be business as usual. Instead, let it be a wake-up call for governments to actively curb the consolidation of AI power in the hands of a few corporate overlords. The Biden administration’s recent executive order coordinating federal agencies toward safe and responsible AI is directionally sound. But it’s just a start. Now is the time for specific, binding regulation to ensure that the AI industry, like any other, is accountable to the public.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Maroussia Lévesque is a CIGI senior fellow, a doctoral candidate at Harvard Law School, an affiliate of the Berkman Klein Center for Internet & Society, and a member of the Indigenous Protocol and Artificial Intelligence working group.