Should AI Start-Ups Make Us Re-evaluate Our Emphasis on Big Tech?

An all-eyes-on-big-tech approach to governance is not without risks, especially when it comes to emerging and rapidly advancing technologies.

January 26, 2023
A Microsoft logo is pictured outside the company’s offices in Issy-les-Moulineaux near Paris, France, January 25, 2023. (Gonzalo Fuentes/REUTERS)

In a recent op-ed, US President Joe Biden called for “bipartisan action from Congress to hold Big Tech accountable.” In Europe, a complex suite of new regulations, designed to rein in dominant digital platforms, takes effect this year, just as existing laws find their teeth, resulting in record fines against Meta, Apple and others for their data collection and targeted advertising practices. Lawmakers around the world, from the United Kingdom to Canada and India, are zeroing in on dominant technology firms, while a recent regulatory crackdown in China wiped more than US$1.5 trillion off the market value of its leading tech companies. It seems big tech is in for a global reckoning this year. But what if this focus is distracting us from new but rapidly accelerating risks?

Big tech typically refers to technology companies such as Google (Alphabet), Apple, Amazon, Facebook (Meta) and Microsoft, which dominate in market share, user numbers or global revenue. These giants tend to attract the most scrutiny from legislators and motivate new regulations. For example, the European Union’s new Digital Markets Act focuses on “gatekeepers,” companies with an annual turnover of at least 7.5 billion euros within the European Union or an average market valuation of at least 75 billion euros, and at least 45 million EU monthly active users. And the Digital Services Act introduces heightened obligations for “very large online platforms” and “very large online search engines” that have at least 45 million EU monthly active users.

At the same time, there is less regulatory momentum behind addressing the risks posed by small and medium-sized enterprises. For example, the treatment of such firms remains a contentious issue in the bloc’s draft Artificial Intelligence Act, as does the treatment of general purpose AI systems, which can be leveraged for myriad purposes by firms of any size. Similarly, the draft American Data Privacy and Protection Act would impose stringent requirements on “large data holders” and “high-impact social media companies” but exempt firms with gross revenue under US$41 million or those that process the data of fewer than 200,000 individuals. Such thresholds are designed to promote, or at least avoid stifling, innovation.

While laudable, an all-eyes-on-big-tech approach to technology governance is not without risks, especially when it comes to emerging and rapidly advancing technologies such as artificial intelligence (AI) and machine learning (ML). Take, for example, New York-based Clearview AI. The facial recognition start-up was largely unknown before a 2020 exposé in The New York Times revealed its extensive ties to law enforcement, leading to an uproar from civil society. According to the Times, “[Clearview AI’s] system — whose backbone is a database of more than three billion images that Clearview claims to have scraped from Facebook, YouTube, Venmo and millions of other websites — goes far beyond anything ever constructed by the United States government or Silicon Valley giants.” Despite its vast reach, the company has fewer than 50 employees and moderate venture funding.

Other examples include the many companies offering “generative AI” tools, which use ML techniques to generate text, images, video and other content from natural language and other human inputs. For example, London-based Stability AI, an approximately 100-person start-up founded in 2019, made waves last August when it introduced a free, open-source, deep-learning-based text-to-image model called Stable Diffusion. Despite the company’s calls for the tool to be used in an “ethical, moral and legal manner,” it has been widely criticized for enabling the promotion of graphic violence, the creation and distribution of non-consensual deepfakes and pornography, and the infringement of intellectual property. Like Clearview AI’s system, Stable Diffusion was trained on huge volumes of web-scraped data.

Finally, I would be remiss not to mention OpenAI, the 120-person, San Francisco-based start-up co-founded in 2015 by Sam Altman and Elon Musk (who has since left its board). In 2020, OpenAI released GPT-3, a large language model (LLM) trained on data gathered with web crawling and scraping tools. Last November, it unveiled a free version of ChatGPT, a GPT-3-based chatbot, which garnered more than a million users within the first week of its release and sent shockwaves across all manner of industries with its ability to produce essays, emails, poems and even code. Within weeks, the therapy app Koko faced sharp rebuke from mental health experts, customers and the wider public for integrating ChatGPT into its services without user consent. And, in January, New York City became the first municipality to ban the tool’s use in public schools. The start-up also offers DALL-E, a text-to-image model that competes with Stable Diffusion.


Computational advances combined with big data and the cloud mean that small firms and even individuals can increasingly access AI and ML technologies once available only to large enterprises with deep technical expertise, vast computing power and large proprietary data sets. “Low code” and “no code” AI tools make it cheap and easy to integrate, build and deploy AI applications with little to no expertise, and pre-trained models and LLMs like GPT-3 eliminate the need for large volumes of training data.
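To give a sense of just how low that barrier has become, consider a minimal sketch, assuming only the freely available, open-source Hugging Face Transformers library and a small pre-trained model such as GPT-2: a handful of lines of Python is enough to generate text from a plain-language prompt, with no proprietary data set, in-house training or specialist expertise required.

    # A minimal sketch: text generation with a freely downloadable pre-trained model.
    # Assumes the open-source Hugging Face Transformers library and a backend
    # such as PyTorch (pip install transformers torch).
    from transformers import pipeline

    # Download and load a pre-trained language model; no training data of our own is needed.
    generator = pipeline("text-generation", model="gpt2")

    # Generate text from a natural language prompt.
    result = generator("Artificial intelligence will", max_length=40, num_return_sequences=1)
    print(result[0]["generated_text"])

Hosted services from OpenAI and others lower the barrier further still, replacing even the locally run model with a single web request.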

But small players may also be less encumbered by corporate governance, shareholder obligations or the scrutiny of lawmakers and the public. As Erin Griffith and Cade Metz write for The New York Times, “Google, Meta and other tech giants have been reluctant to release generative technologies to the wider public because these systems often produce toxic content, including misinformation, hate speech and images that are biased against women and people of color. But newer, smaller companies like OpenAI — less concerned with protecting an established corporate brand — have been more willing to get the technology out publicly.”

This very reluctance can obscure the significant role that big tech has already played in making small players increasingly powerful. As technology writer Brian Merchant explains, “Tools such as OpenAI’s DALL-E and ChatGPT use huge neural networks to try to assemble new-looking products from massive expanses of old data that has been vacuumed up from the internet as Big Tech has made and managed it — images, articles, and posts created in service of feeding the platform incentives of the Web 2.0 monopolies.” For example, Clearview AI could not be where it is today without the billions of images that Facebook and other platforms had already amassed.

Beyond indirectly providing the underlying data and foundational models for these technologies, big tech often provides direct capital too. For example, Microsoft has invested US$1 billion in OpenAI and just announced a much larger investment in the company, which is currently valued at US$29 billion (Microsoft plans to integrate OpenAI’s technology into its Office and Azure offerings). For Merchant, “it’s seemingly only a matter of time before these platforms get bought or cloned by the giants, or turned into some onerous subscription-fee service that will steamroll the human creators of the source material.”

This prophecy rings true, particularly as AI technologies’ popularity is soaring while the reputation of tech giants is waning. In fact, big tech’s legal reckoning coincides with an industry-wide commercial downturn featuring widespread layoffs, plummeting share prices and valuations, and shifting social and cultural attitudes. As Merchant puts it, “Some of the most powerful, profitable, and expensive companies in human history … are stuck.” He adds that the model that has engendered big tech, “in which a visionary is entrusted with millions to invent the future, with scant oversight … has hit a wall” (or entrusted with billions, in the case of start-ups like OpenAI). As we see AI ventures repeating these patterns, we have an opportunity to change our own.

With AI/ML growing cheaper and easier to access and scale, small players can present outsized risks. As governments prepare to address these risks, they should remember that big tech was once small. Microsoft and Apple have had nearly half a century to cement their dominance, while Facebook and Google have had decades to grow into companies that sometimes appear too big to fail, or at least too big to govern. So, while big tech is due for a correction, governments should also keep an eye on the risks posed by newer and smaller ventures.

Specifically, they should reconsider the balance between promoting (or preserving) innovation and protecting people. While tech regulations have traditionally privileged innovation by trying to remedy market failures and harms after the fact, technologies like AI may require us to err on the side of preventing them before they occur. In other words, how we craft our laws, regulations and policies, especially the relative balance of ex post and ex ante mechanisms, matters now more than ever.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Elizabeth M. Renieris is a CIGI senior fellow, lawyer, researcher and author focused on the ethical and rights implications of new technologies. Her latest book is Beyond Data: Reclaiming Human Rights at the Dawn of the Metaverse.