Doctorow versus Zuboff

December 2, 2020

The public perception of platform companies such as Facebook, Google and Amazon has shifted radically in the past five years. Companies and technologies once broadly seen as aligned with democratic goods are now rightly viewed as undermining core attributes of our democracy. And both policy makers and the public are gaining a far more nuanced view of the social, economic and political implications of our technology infrastructure. 

But this recent shift in public attitude is predated and informed by nearly 20 years of research and policy activism that has been warning of precisely the challenges we now see so clearly. What we know about racial bias in technology design, the pernicious side effects of the ad tech industry and the economic inequalities exacerbated by the digital economy, as well as our recognition of the epistemic crisis we now so clearly face, is thanks to this body of work.

This awakening is perhaps best encapsulated by the recent Netflix documentary The Social Dilemma. While the film was widely critiqued for ignoring this very intellectual history — instead featuring a largely ahistorical set of observations made by tech insiders — there is no doubt that its core message about the failings in our tech infrastructure has now reached a huge new audience. Other reasons for this public shift clearly include the election of Donald Trump in 2016 and his subsequent vandalism of the public sphere, the ease with which malicious actors have used platforms to undermine elections around the world, and the growing concerns about the market power exerted by the tech giants’ global monopolies.

But another reason for this shift is that we now have new ways of understanding this problem. Groundbreaking new theories about our society and our economy are justifiably rare. Usually, knowledge is accumulated incrementally rather than in leaps. But the vast changes brought by the internet have demanded new ideas. And perhaps no idea has broken through with greater impact than Shoshana Zuboff’s concept of surveillance capitalism.

First outlined in her 2019 landmark book The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, Zuboff’s theory is that big tech is mining our data — usually without us realizing it — and then using that data, along with machine learning, to control and manipulate our behaviour. The result might be a nudge to buy a product. But it could also be, according to this theory, incitement to join an extremist group on Facebook, or to vote for a particular candidate, or to just keep scrolling. For Zuboff, mass data is the extractive commodity that fuels the attention economy, which has a capacity to literally change the way we think and act.

And I agree with much of her argument. I have testified to governments alongside her, and have learned a tremendous amount from her work. Like all ambitious and important academic works, Zuboff’s book has challenged and enabled us to look at the world differently. And in the last couple of years, Zuboff’s theory has become close to gospel, or at least become a catalyzing framework for many of the disparate communities taking on big tech.

Not everyone is convinced. Cory Doctorow is a science fiction author, activist and journalist. He has just written a book called How to Destroy Surveillance Capitalism, which you can find online for free. Doctorow agrees that big tech poses an existential threat to our democracy, an epistemological threat to our society and an economic threat to our capitalist system, but he doesn’t buy into this idea that the algorithms can actually influence our behaviour. In fact, he says that this sort of tech exceptionalism, which grants mystical power to our tech companies, is a flawed belief held not only by critics of big tech but by big tech itself. Both are incentivized to perpetuate this myth: the critics, because it sets up the dangers of big tech in stark, undeniably pernicious terms, and big tech, because it helps them sell a product (targeted ads) that has made them the richest companies in history. But Doctorow argues that this myth ultimately has real consequences for those seeking reform, because it’s hard to respond to or regulate mystical black boxes that defy our understanding.

Doctorow argues that the power of big tech isn’t about behavioural control built on the back of data exploitation — the tech part; instead, it’s about the big part.

At the core of Zuboff’s argument is this idea that surveillance capitalism represents a kind of “rogue capitalism”: a perversion of capitalism that emerged in a regulatory vacuum in which the intangible economy was allowed to run amok in a governance world designed for tangible goods and services. But Doctorow flips this on its head. He argues that capitalism itself has become rogue — and that now these huge monopolies are able to conduct surveillance precisely because they’re so big. For him, it is the material circumstances of this economic model that are the problem, not the snake oil that is sold on top of them.

It’s an argument with serious implications. Doctorow argues that the solutions offered in response to the surveillance capitalism framing assume the dominance of our current tech companies.

“Proposals to replace Big Tech with a more diffused, pluralistic internet,” he argues, “are nowhere to be found. Worse: The ‘solutions’ on the table today require Big Tech to stay big because only the very largest companies can afford to implement the systems these laws demand.”

In other words, many of the policy fixes in this space actually give tech companies more power, not less. When it comes to cleaning up online discourse, for example, many look to Facebook to start policing speech on its platform. But if Doctorow is right, that’s the last thing we want to be doing. It’ll only give big tech more power.

While this tension may seem like a crack appearing in a once-aligned movement at the very moment when it is reaching critical mass, there are three reasons why this is precisely the type of sophisticated debate needed to take on these powerful companies and interests.

First, when looking at the broad governance agenda emerging around big tech — what is often called the platform governance agenda — most of Zuboff’s and Doctorow’s views are compatible. While this agenda can seem convoluted, there is actually an emerging consensus that to democratically govern the internet, we need new data, competition and content policies. Zuboff and Doctorow agree that we need radically reformed data privacy policy, including new clarity on consent, rules around data collection and retention, and even outright bans on certain uses of data. They also broadly agree on competition policy, in particular on interoperability as a mechanism for creating more dynamic markets. Where they disagree is content policy — in particular, on whether governments should be deciding what people can or cannot say on the internet.

Second, this debate about whether and how to adjudicate speech online is as old as the internet itself, and it extends from distinct schools of thought about the digital economy.

Activists such as Doctorow, and many aligned with long-time digital rights organizations such as the Electronic Frontier Foundation, argue that regulations around speech online will adversely affect the rights of those in our society who are already historically marginalized. Such regulations, they say, could, in democracies, grant existing authorities, such as police forces and intelligence services, the opportunity to abuse their power, and lead, in illiberal regimes, to even graver human rights abuses.

The other school of thought, represented by Zuboff and embodied in takedown policies such as Germany’s Network Enforcement Act and much of the content moderation debate, prioritizes the rights of individuals to be protected from the harms of speech over the absolute rights of speech itself. In democratic societies, this school argues, it should be democratic governments, with their (albeit imperfect) mechanisms for accountability, who make the decisions about permissible speech, not private companies.

There is no clear right and wrong in this debate. In fact, it represents a deep trade-off between the right to free speech and the right to be protected from speech. But it is the trade-off that democratic societies entrust democratic governments to make on their behalf, so I am glad we are all hearing the two sides from such thoughtful minds.

Finally, this kind of debate, between deeply knowledgeable and passionate scholars and activists thinking creatively about our digital infrastructure, represents a healthy and maturing discourse, not a weakening of it. A community stuck in orthodoxy will be incapable of building a coherent theory of change. And both Zuboff and Doctorow argue that it is change that we so urgently need.

About the Author

Taylor Owen is a CIGI senior fellow and the host of the Big Tech podcast. He is an expert on the governance of emerging technologies, journalism and media studies, and on the international relations of digital technology.