Surveillance Capitalism Wasn’t Built by Powerful Companies Alone

How societal norms and prevailing economic models still contribute to the development of harmful technologies.

November 18, 2020
(Photo by IPA/Sipa USA)

The rose-coloured glasses are now off; our early, idealistic dreams of a decentralized web that would free us from corporate, authoritarian futures have been replaced by a waking nightmare of surveillance capitalism, brought to us by the sprawling titans of big tech who turned their control over our online experience into an advertising bonanza. There are, of course, suggestions on the table for improved governance of the companies, technologies and industries that facilitate surveillance capitalism. Increasingly, experts and policy makers alike are discussing platform interoperability, more effective antitrust laws, personal data stores that complement data protection laws, intermediaries to help us enforce our data rights, and standards that remove unwanted biases from the artificial intelligence (AI) that we build.

Yet, while important, these recommendations are insufficient to curb the reproduction of many of the current harms. That’s because this surveillance economy is made up not only of the powerful tech companies but also of the underlying assumptions, beliefs and economic models that reinforce them. Unless we scrutinize and question these beliefs, we risk merely rearranging the deck chairs on the Titanic.

Fetishizing Data and Machine Predictions

Last spring, when COVID-19 prevented British high-school students from sitting their final exams, an algorithm was called on to predict their grades. Unfortunately, the algorithm, which was supposedly designed to overcome existing human biases, instead delivered grades widely perceived to be consistently lower than what students and their teachers expected, deepening existing racial and socio-economic injustices.
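
The mechanics need not be exotic to do this kind of damage. The sketch below is not the exam regulator's actual model; it is a deliberately simplified, hypothetical illustration of the general approach that was reported: rank students within a school and hand out the school's historical grade distribution in that order, regardless of what teachers predicted for any individual. The function name and grade numbers are invented for illustration.

```python
# Hypothetical, simplified sketch (not the actual grading model): assign grades
# by ranking students within a school and fitting them to the school's
# historical grade distribution, overriding individual teacher predictions.

def standardize_grades(teacher_predictions, historical_distribution):
    """teacher_predictions: {student: teacher-assessed grade (higher is better)}
    historical_distribution: grades the school awarded in past years,
    sorted best to worst, one per student in the current cohort."""
    # Rank this year's students from strongest to weakest prediction...
    ranked = sorted(teacher_predictions, key=teacher_predictions.get, reverse=True)
    # ...then hand out the school's historical grades in that order.
    return {student: historical_distribution[i] for i, student in enumerate(ranked)}

# A strong student at a school with a historically weak track record:
teacher_predictions = {"A": 9, "B": 6, "C": 5, "D": 4}  # teacher assessments
historical_distribution = [7, 5, 4, 3]                   # what this school "usually" gets

print(standardize_grades(teacher_predictions, historical_distribution))
# {'A': 7, 'B': 5, 'C': 4, 'D': 3} -- student A is marked down purely because
# of where they went to school.
```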

Newspapers are littered with similar stories of algorithmic failures and biases, ranging from dragnet surveillance systems failing to predict terrorist attacks to facial recognition systems failing to recognize faces. Yet our belief in magical machines continues to hold strong. It’s what Alexander Campolo and Kate Crawford refer to as enchanted determinism: “a discourse that presents deep learning techniques as magical, outside the scope of present scientific knowledge, yet also deterministic, in that deep learning systems can nonetheless detect patterns that give unprecedented access to people’s identities, emotions and social character.” The inner workings of prediction machines are poorly understood — even by their designers — but we are eager to believe that they can accurately learn our emotions, desires and futures.

The trouble, as Campolo and Crawford also point out, is that enchanted determinism is more than just a marketing ploy: it’s a foundational view shared by big tech and some of its loudest critics alike. In The Social Dilemma, a documentary on the dangers of social media, Googler-turned-activist Tristan Harris warns of the power of social media algorithms to accurately predict our interests, dreams and desires, and of the near-perfect control they hand to their masters. But do they actually do all these things? Although the tech optimists are turning pessimistic, exchanging utopian hopes for fear, they are hard-pressed to surrender the belief that the machine is all-powerful. It’s a belief, of course, that keeps a billion-dollar industry spinning.

Similarly, the increasingly popular notion of data colonialism describes how, as Nick Couldry and Ulises A. Mejias put it in The Costs of Connection: How Data Is Colonizing Human Life and Appropriating It for Capitalism, “vast amounts of personal data [are] transferred through shadowy backchannels to corporations using it to generate profit.” Their book rightly criticizes the social injustices produced by platform capitalism, but fails to question its own assumption that mass data collection is valuable in the first place.

Of course, just because data may not be “the new oil” and just because machine predictions may not be all-powerful does not mean they are harmless. Indeed, the consequences of our reliance on machine predictions are real and well-documented. The high-school graduates eventually had their algorithm-assigned grades overturned, but countless job seekers, prisoners and gig workers continue to live with the adverse consequences of prediction-based decision making. As pointed out by numerous scholars, the power of machine predictions lies not so much in their accuracy as in their tendency to make their predictions manifest. Machine predictions can become self-fulfilling prophecies that create the very futures they are meant to inform us about: for example, treating people as if they may commit a crime increases the chances of them doing exactly that. And when we base predictions on historical data, we risk recreating the past, over and over and over.
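
To see how such a loop closes, consider the toy simulation below. It is not drawn from any particular policing system; it simply assumes, hypothetically, that patrols are sent to the district with the most recorded arrests and that extra patrols mechanically surface extra recorded incidents. Under those assumptions, the “prediction” quickly confirms itself.

```python
# Hypothetical toy model of a prediction feedback loop (not any real system):
# patrols go to the district with the most recorded arrests, and patrolling
# a district generates more recorded arrests there.

arrests = {"district_a": 12, "district_b": 10}  # historical records, nearly equal

for year in range(5):
    # "Prediction": the district with more recorded arrests gets the patrols.
    target = max(arrests, key=arrests.get)
    # Observation bias: patrolling a district surfaces more incidents there.
    arrests[target] += 5
    # A modest number of incidents occur everywhere, patrolled or not.
    arrests = {district: count + 1 for district, count in arrests.items()}
    print(year, arrests)

# After a few iterations district_a dominates the record -- not because more
# crime happens there, but because the prediction kept sending observers there.
```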

As participation in society increasingly relies on our being predictable, we find ourselves in the paradoxical situation of both rejecting the machines that predict us and trying to improve their predictive powers by advocating for more representative data sets, in the hope that the facial recognition system that shepherds us through border control will recognize our faces, or that a soap dispenser will recognize our hands. Or we are forced to become more predictable by constraining our set of motions — a reality faced by Amazon’s warehouse workers, Walmart workers and many others. In the words of historian Stephanie Dick, “attempts to produce intelligent behavior in machines often run parallel to attempts to make human behavior more machine-like.”

But perhaps that’s the point. Magical machines promise to streamline production processes, reduce complexity and provide a distraction from the messiness of engaging with social problems and human decision making. As David F. Noble observes in Forces of Production: A Social History of Industrial Automation, “If this ideology simplifies life, it also diminishes life, fostering compulsion and fatalism, on the one hand, and an extravagant, futuristic, faith in false promises on the other.” The way out, according to Noble, is to recognize that technology is socially determined and intrinsically political — in other words, to acknowledge that there are neither magical shortcuts nor crystal balls.

Runaway Individualism

Faced with the mass data collection that fuels these magical machines, we tend to focus on the individual as the locus of control. A tremendous amount of energy is channelled toward giving individuals rights over their personal data, creating mechanisms for individual consent or removing sensitive identifiers from data sets. But how useful are these measures to people when they find themselves confronted with a machine that bases its predictions on them not as individuals but as units of a community, network, social dynamic or stereotype?

Our current pandemic has made one thing abundantly clear: we are all connected. We are social creatures who depend on one another for our continued existence. Our identities are social and continually being shaped and reshaped in relation to others. And yet, when it comes to our digital data, we get stuck between the dichotomous categories of data that is personal and data that is non-personal — the personal, to be protected; the non-personal, to be shared freely and openly.

That’s problematic. For one, personal data is hardly ever just personal. My phone records, messaging, or DNA data all reveal information about others. What my friends or neighbours reveal about themselves reflects on me as well. And my decision to share, say, my COVID-19 status with the world holds important implications for those around me. On the flip side, non-personal data is far more personal than its name would suggest. Data on soil conditions, for instance, can be used by a farmer to improve agricultural output, or it can be used by a commodity trader to manipulate commodity prices. The result may well impact the food prices we pay and thus affect us personally.

Data Governance Revisited

Only when we drop the belief that predictive machines are all-powerful, and instead consider them in their social and political context, do we allow for governance models that centre human and planetary concerns. Similarly, only once we replace notions of individual control over data with a focus on individual and collective agency do we create the possibility for data governance models that truly challenge the status quo. Below I will explore some of these alternative approaches to data governance.

As a starting point, we can listen. None of the beliefs I have discussed here have gone unchallenged. Many before me have identified the harms done by the combination of tech-utopianism and weaponized individualism. Instead of getting lost in the intricacies of the technologies themselves, we’d do well to listen to those voices, many of them the very people subjected to predictive machines, and include them in our governance designs. In addition, we may look to other cultures and disciplines for different perspectives on self and collectivity. Rather than search for a new ideology to replace the existing one, we should look to embrace a plurality of perspectives.

Next, we can focus on the problem. We need to start with the nail, and then work our way back to understand what a possible hammer could look like. Instead of first asking an engineer, ask those who are affected by the problem. Then, we may come to understand how data could play a role in solving that problem, while always staying aware of the opportunity costs as well as the potential harms we may incur down the road.

Finally, we must build models for collective governance. There are good reasons we need data about ourselves, our communities and our environments. We may need data to hold those in power accountable, for self-reflection, to coordinate actions or to evaluate progress toward common goals. But unlike the masses of decontextualized data that we are currently focused on, that data will often be contextual or qualitative. It will come from many sources and be updated by everyone who is affected by its collection. All who are subjected to its use — and that’s every one of us — need to have a say in what is collected and how it is employed. Being involved means not fearing the messiness of collective decision making but instead embracing it, as a crucial element of any robust governance system.

New data governance models — such as data commons, in which groups of people collectively decide on data collection and use — can help us. In other cases, we may rely on intermediaries and data trusts to help us exercise our rights and execute our decisions. Such intermediaries should have a fiduciary duty to make decisions in our best interest, and the scope of their power should be restricted to a specific purpose. Importantly, collective data governance models are needed for non-personal and personal data alike.

However, even the most sophisticated data governance models cannot replace a holistic approach to problem solving that includes a variety of solutions having nothing to do with data. We should, above all, resist the temptation to shuffle the deck chairs and use these models to legitimize the very systems we aim to challenge. Some things, like predictive policing or facial recognition, need to be rejected outright. 

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Anouk Ruhaak is a Mozilla Fellow embedded with AlgorithmWatch. Her work focuses on data governance design.