With AI Evolving at Lightning Speed, It’s Time to Take Stock

The AI Index tracks trends in research and development, technical performance, ethics, economics, policy, public opinion, and education.

May 17, 2023
IBM’s Chief Privacy & Trust Officer Christina Montgomery, New York University Professor Emeritus Gary Marcus and OpenAI’s CEO Samuel Altman testify before the Senate Committee on the Judiciary Subcommittee on Privacy, Technology, and the Law hearing on artificial intelligence in Washington, DC, May 16, 2023. (Jack Gruber/USA TODAY via REUTERS)

In the past few months, artificial intelligence (AI) has been all over the news. In November 2022, ChatGPT launched and soon became the fastest-growing consumer application in history. A few months later, in March, OpenAI released GPT-4, an even stronger large language model capable of scoring in the ninetieth percentile on the SAT (Scholastic Aptitude Test) and of scoring high enough on the Uniform Bar Exam to qualify for a law license in every US state that uses the exam.

At the same time, the AI discussion has extended into policy-making and industry circles. In late March this year, Italian regulators banned ChatGPT, citing privacy and data concerns, before reversing the ban in late April. Meanwhile, a recently published report from Goldman Sachs estimates that the equivalent of up to 300 million full-time jobs could be exposed to automation by AI.

It’s clear that AI is becoming enmeshed in our society at an alarmingly rapid rate. As such, it is more important than ever that we pause and take stock of where exactly we stand with this technology. The AI Index at the Stanford Institute for Human-Centered Artificial Intelligence (HAI) does just that: it tracks progress in AI through various lenses, including trends in research and development, technical performance, ethics, economics, policy, public opinion, and education. This year’s report suggests that three dominant trends emerged in the world of AI in 2022.

First, generative AI has officially arrived. In 2022 we saw the release of text-to-image models such as DALL-E 2 and Midjourney, text-to-video systems such as Make-A-Video, and language chatbots, most notably ChatGPT. Moreover, AI systems are now furthering science: the technology has recently been deployed to control hydrogen fusion, discover faster matrix-multiplication algorithms and design new antibodies.

As one would expect, the new AI models have been subject to criticism. Text-to-image systems have been accused of appropriating human-created imagery without permission, and have been shown to be routinely biased along gender lines. ChatGPT sometimes produces false responses and can be tricked into facilitating unlawful activity, such as explaining to users how to build bombs.

This duality captures the current state of AI: it is no longer an experimental scientific tool, confined to the laboratory. Increasingly, it is being used in the real world — sometimes bringing benefit, sometimes doing harm. It will become the responsibility of the companies developing these tools, and the governments regulating them, to think critically about ways in which positives can be maximized and negatives minimized.

Second, industry is pulling ahead of academia. An analysis included in this year’s AI Index Report shows that until 2014, most significant machine-learning systems were released by academic institutions; since then, industry has dominated. In 2022, industry produced 32 significant machine-learning models, versus just three from academia. The majority of AI intellectual talent is now headed to industry as well: in 2021, approximately 65 percent of AI Ph.D. graduates took jobs in industry, more than double the share that went to academia.

The growing dominance of industry is unsurprising given that, as our report suggests, these systems are becoming ever larger, more expensive to train and more dependent on powerful computers. Indeed, this dominance threatens to concentrate AI-related technological developments in the hands of a small group of actors with narrow incentives. The British government is already beginning to push back against this trend with proposals for new public investment in AI designed to keep universities in the race. Other governments may have to follow suit.

Finally, the public at large has become fascinated by AI, which has translated into a surge in interest from policy makers. The number of bills containing the words “artificial intelligence” that passed into law across 127 surveyed countries grew from just one in 2016 to 37 in 2022. Analyses of the earnings calls of Fortune 500 companies, the conference calls that public corporations hold with analysts to discuss their regular earnings reports, suggest that business interest in AI is likewise growing.

Within this data, some interesting trends are emerging. Public opinion about AI, for example, differs geographically and demographically. Data from Ipsos highlighted in the report shows that Chinese citizens are more positive about AI than Americans: 78 percent of Chinese respondents agreed that products and services using AI have more benefits than drawbacks, compared to only 35 percent of American respondents. Worldwide, women are also less likely than men to believe that AI will have positive societal impacts. And 65 percent of respondents to another globally representative survey say they would not feel safe in a self-driving car.

As the public discussion and debate about AI swells, one thing is clear: this technology will unquestionably become a more integral part of daily life. That reality makes it essential that everyone, from computer scientists, policy makers and industry leaders to ordinary citizens, thinks more critically about how AI should be developed and deployed. The stakes, for virtually every aspect of human society, could scarcely be higher.

The views expressed in this article are the author’s alone, and not representative of those of either the Stanford Institute for Human-Centered Artificial Intelligence or the AI Index.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Nestor Maslej is a CIGI fellow and research manager at the Institute for Human-Centered Artificial Intelligence at Stanford University, where he manages the AI Index and Global AI Vibrancy Tool.