SEC Enforcement on Crypto Proves Regulation Can Work

If existing mechanisms can be effective in this technology sector, why not others?

July 5, 2023
Photo illustration by Riccardo Milani/Hans Lucas via REUTERS.

The usual line, when it comes to regulation of the internet and all things digital, is that governments are too slow and stupid, and the tech too complex, novel and global, for regulation to be effective. Lawmakers can’t possibly understand the nuances of these new companies and machines, can they? It’s a view seemingly confirmed by the United States Congress’s inept interrogation of Facebook CEO Mark Zuckerberg in 2018.

But the continuing enforcement actions of the US Securities and Exchange Commission (SEC) against the cryptocurrency market have put the lie to all that. The only policy areas more shrouded in mystery than “the cyber” (see also: artificial intelligence [AI], big data) are money and finance. Yet somehow a government agency is proving itself more than capable of humbling the scions of the digital age, charging FTX founder Sam Bankman-Fried with fraud in December and the dominant exchanges Coinbase and Binance with securities-law violations in early June. That the crypto meltdown (unlike the 2008 subprime loan crisis) did not spill over into the wider economy is further evidence that governments can regulate the digital world effectively.

Cryptocurrencies hit policy makers’ radar in 2019, when Facebook (now Meta) announced that it planned to create its own digital currency. Monetary policy makers have taken cryptocurrency seriously from that moment, and they’ve acted accordingly: they eventually scuttled Meta’s project, and they have continued to insist, for the most part, on regulating crypto within existing legal and regulatory frameworks.

Crypto is hardly the only tech sector steeped in questionable legality. Often, the “things” in Mark Zuckerberg’s infamous saying, “Move fast and break things,” are “laws” (see: Uber and taxi laws; Google and Street View; Facebook and privacy; and now OpenAI and data protection laws). In almost every non-crypto area, it’s been a fight merely to get regulation on the table (see: social media, cultural policy and online harms).

If the SEC shows us that effective digital regulation can happen, why has it been so difficult in other tech-affected sectors? What makes crypto and finance different from, say, telecommunications (social media/streaming), transportation (Uber), and now education (generative AI)?

The critical difference, I’d argue, is that officials and experts in finance insisted on setting the terms of engagement. Regulators stood fast against the bitcoin bros’ self-serving hype, which held that crypto and blockchain were revolutionary, and that old ways of thinking therefore did not apply. Instead, finance officials insisted they would determine how these technologies fit within their existing frameworks.

The formal institutions of finance were buttressed by a strong consensus among professional economists about what money and finance are. This consensus left them less likely to buy the line that cryptocurrencies (and, for that matter, non-fungible tokens) were something completely new. What’s more, money and finance are widely recognized as foundational to states’ structural power. Whether or not people understand the minutiae of finance, everyone agrees it’s important.

The financial sector, in short, is characterized by strong existing institutions; a shared understanding of the intellectual terrain, which allowed its officials and experts to define the issues on their own terms; and a keen sense of the high stakes involved in allowing new technologies into their system.

Economics and finance, moreover, are unusual both for their shared sense of their own subject and for the general recognition that the subject is of existential importance (the same holds for national security). Other sectors enjoy neither advantage, nor do they benefit from the perceived centrality of finance to the exercise of state power.

The contrast with other tech sectors is stark. From retail to accommodation to telecommunications to transportation, policy makers and academics have all too often accepted the tech industry’s pitch that its products were so new and innovative that they either shouldn’t or couldn’t be regulated like existing companies that were delivering equivalent services.

And so we got caught up talking about “app companies” and “platforms” — the latter itself a nebulous concept — instead of about retailers (Amazon), broadcasters/telecoms (YouTube, Facebook, Twitter, Netflix), taxi companies (Uber), commercial accommodation companies (Airbnb), libraries (Google Search) and academic paper mills (ChatGPT). We bought the line that these companies were so new, unique and globally powerful that the people working under their rules weren’t really employees, and that their global reach was natural rather than a policy choice.

All these industries were already subject to regulations and norms designed to ensure they operate in the public interest. As we’ve since learned, the decision not to hold the new entrants to those same standards has created countless social problems.

For example: By not holding Google Search to the standards of the library sector (the tool, like a library, is supposed to catalogue and organize existing records), we’re now in a situation where our monopoly search provider is incorporating falsehood-dispensing generative AI into its search functions, and people are being told that they should now fact-check the results the company provides to make sure they’re not straight-up lies. How? Through another Google-enabled search?

On Amazon, fraudulent listings remain a scourge in a way that would sink any other normally regulated retailer.

In education, paying a paper mill to “Write me an essay on former Australian Football League superstar Adam Goodes” would get a student suspended or expelled. Yet academia is currently split over whether ChatGPT, which works on exactly the same principle of outsourced mental labour, is a potentially legitimate knowledge-creation tool.

At heart, academia is the guardian of the integrity of our knowledge ecosystem, legitimized by our standards, years-long training and commitment to open methodologies in the pursuit of knowledge. But unlike finance, academia as a whole has no consensus view of the world. While this diversity is usually a strength, it also leaves us unable to insist forcefully on uniform standards that would, say, ban outright the use of ChatGPT — a tool designed to make it impossible to discern whether the writer did the intellectual work of producing a text — in our classrooms, books and journals.

The cost of buying industry-driven hype, of failing to recognize that, for example, YouTube (whose long-time slogan was “Broadcast Yourself”) is more like a broadcaster than not, has been significant: witness the aforementioned fraudulent retail listings, the rise of precarious work, and the spread of online hate and misinformation.

Giving these alternative models time to institutionalize has forced governments into socially necessary rearguard actions. They must now justify regulation that merely restores the status quo ante: the assurance, for instance, that we need not worry about purchasing dangerously or fraudulently defective goods, whether we’re shopping in person or online.

The cycle continues. The AI mania has become the latest example of the tech company confidence game: insist you’ve invented something completely new and essential that requires a totally new way of thinking about it.

That’s not the case. The SEC’s success reminds us that we already have laws, regulations and norms telling us what is lawful and appropriate in pretty much every area of human experience. It further suggests that these new innovations should be made to conform to these existing standards, that they be made fit for (our) purpose. Just as in finance, existing sectors need to insist on their expertise and legitimate right to determine whether and on what conditions these new technologies can come play on their (our) turf.

These innovations must conform to existing laws and norms in their development, as Elizabeth M. Renieris notes, as well as in their deployment. Companies like OpenAI are continuing the ignominious tradition of moving fast and breaking laws, releasing products with little regard for existing legal frameworks. Brazilian tech lawyer Luca Belli, for example, argues that ChatGPT violates Brazil’s General Data Protection Law “in multiple ways.” Such claims come on top of accusations of copyright violations, and of chatbots producing false information about people, fabrications that would never be considered acceptable from any other source.

The SEC’s recent actions against the cryptocurrency market have been heartening because the impartial enforcement of laws is the bedrock of our democratic societies. But more than that, they demonstrate what’s possible when we insist on holding tech companies to our already-established standards and norms.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Blayne Haggart is a CIGI senior fellow and associate professor of political science at Brock University in St. Catharines, Canada. His latest book, with Natasha Tusikov, is The New Knowledge: Information, Data and the Remaking of Global Power.