Ungoverned Space: How Surveillance Capitalism and AI Undermine Democracy

Published: March 20, 2018

Author: Taylor Owen

Key Points

  • The threat to democracy from misinformation is enabled by two structural problems in our digital infrastructure: the way data is collected and monetized (surveillance capitalism), and how our reality is algorithmically determined through artificial intelligence (AI).
  • Governments face a particular challenge in governing platforms as any efforts must engage with issues of competing jurisdiction, differing notions of free speech and large-scale technological trends toward automation.
  • Policy mechanisms that enable the rights of individuals (data protection and mobility) are likely to be more effective than those that seek to limit or regulate speech.
Since the 2016 US presidential election, the term “fake news” has become everything and nothing. It is used both as a description of the corrosive effects of social media on our civic discourse and as a political tool by US President Donald Trump to discredit the free press. This has led many to call for a moratorium on its use. But this essay suggests that the term is important not just because of the 2016 election, but because the debate over it in the past 18 months reveals two structural problems in our digital infrastructure.

First, fake news is a product of the way our attention is surveilled and monetized. It is a result of an economy of surveillance capitalism. Broadcast media once had a near monopoly on access to large audiences. If an advertiser wanted to reach a particular demographic, they would purchase ad space with a publisher that claimed to reach that group. Advertising technology, or adtech, has upended this model, which tied content production and financial return together.

Data brokers and platforms use vast sources of corporate surveillance and behavioural data to build highly specific and detailed profiles of each of their users. This data is then sold as a commodity. Ads are individually customized by inferring users’ moods, desires and fears from their call records, app data and even the rhythm of their keyboard typing. This allows Facebook to serve far better and far more relevant ads (for example, you actually see something you might want or be shopping for), but it can also be far more intrusive. Facebook has told advertisers that it can identify when a teenager feels “insecure,” “worthless” and when they “need a confidence boost” (Levin 2017).

These ads are distributed directly to users wherever they may be on the internet or, increasingly, the Internet of Things. Simply put, instead of buying an expensive generic ad on NYTimes.com to reach a broad demographic, programmatic ads allow an advertiser to track a person around the internet and, increasingly, the physical world, and precisely target them using highly personalized data and models about their lives.

This has, of course, killed the revenue model for news: almost all new digital ad spending now goes to Facebook or Google. And it is immensely profitable: Facebook’s annual revenue, nearly all of which comes from online ads, grew to over US$40 billion in 2017.1 But it has also incentivized the spread of low-quality over high-quality content, enabled a race to the bottom in consumer surveillance, and created a free market for attention, where anyone, anywhere can buy an audience for almost any reason.

One result is that while the ecosystem may be optimized for selling products, it is equally powerful for selling a political message. In one internal Facebook experiment conducted on 61 million users of the social network, about 340,000 extra people turned out to vote in the 2010 US congressional elections because of a single election-day Facebook message highlighting their friends who had voted. This is not necessarily a bad thing: Facebook got a large number of people to vote. The problem is that these tools can be used for nefarious purposes as well and, troublingly, increasingly they are.

Second, our digital infrastructure is determined by AI. For example, while there are more than one billion posts to Facebook every day, what each user sees is highly individualized. This personalization is done by a series of algorithms which, while tremendously efficient and scalable, have real limitations. They are largely unknowable, even to those who created them; they are at their core commercially driven; and they are laden with the biases and subjectivities of their data and their creators. They determine what we see and whether we are seen, literally shaping our reality online. And they do so with almost no transparency.
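
The commercial logic at work here can be illustrated with a deliberately simplified sketch. The Python below is purely hypothetical: it is not any platform’s actual ranking system, and every signal and number in it is invented. The point is only that a feed ordered by predicted engagement will, by construction, show each user a different version of reality.

```python
# Toy illustration only: a bare-bones, engagement-driven feed ranker.
# This is NOT any real platform's algorithm; real systems use large ML
# models, thousands of signals and constant experimentation.

from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    topic: str

# Hypothetical per-user affinities a platform might infer from past behaviour.
user_affinity = {"politics": 0.9, "sports": 0.2, "family": 0.6}

def predicted_engagement(post: Post, affinity: dict) -> float:
    """Score a post by how likely this user is to click, like or share it."""
    return affinity.get(post.topic, 0.1)

def rank_feed(posts: list[Post], affinity: dict, k: int = 3) -> list[Post]:
    """Return the k posts this user is predicted to engage with most."""
    return sorted(posts, key=lambda p: predicted_engagement(p, affinity), reverse=True)[:k]

candidates = [Post("1", "sports"), Post("2", "politics"), Post("3", "family"), Post("4", "politics")]
print([p.post_id for p in rank_feed(candidates, user_affinity)])  # ['2', '4', '3']
```

Nothing in this sketch optimizes for accuracy, civility or the public interest; the objective function is engagement, and that is precisely the governance problem.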

And this problem is going to get much worse. AI-driven tools that allow for live editing of video will soon be used to create individually customized versions of events and to deliver them directly into our personal social feeds. Millions of individually customized versions of reality will be distributed simultaneously and instantly. If fake text caused confusion in 2016, fake video, or so-called deepfakes, will upend our grounding in what is real. Fact and fabrication will be almost impossible to tell apart. This ungrounding will only become more pronounced as platform companies roll out their planned virtual and augmented realities and increasingly sophisticated bots — worlds literally created and determined by AI.

It is these twin structural problems of surveillance capitalism and AI, which together sit at the core of our digital infrastructure, that present the governance challenge to our democracies. A set of legitimately empowering tools has been scaled, monetized and automated to the point where a conversation about how they fit into our democratic norms, regulations, laws and ethics is needed. We are heading into new public policy terrain, and what is certain is that the days of quiet disruption and alignment between politics and platforms are over. There are four looming governance challenges.

First, our public space is increasingly governed by private corporations. Facebook has done a tremendous amount of good. But it is also a public company that took in US$40 billion in revenue last year, with investors who expect to make more each year. That is a very strong incentive, one that may or may not be aligned with the public interest. At the same time, we are increasingly delegating governance decisions to private corporations. The unilateral nature of this shift toward corporate self-governance is something we need to think carefully about. As more social and political spaces move onto platforms, we need to consider the layered ways in which governance decisions in the public interest are being made by ultimately unaccountable private organizations.

Second, governments are ill-suited to regulating the scale, complexity and rapid evolution of platforms. To take one example, regulating ads during elections falls squarely within governments’ mandate. In fact, US election transparency laws were implemented to ensure that travelling candidates would not say different things to different audiences. But how do we monitor a candidate running 50,000 simultaneous micro-targeted ads? Or hundreds of interest groups, each running millions? Our current platform ecosystem allows anyone to target any group from anywhere in the world with almost any message. This capability stands in striking conflict with election laws. Facebook’s proposed solution is a degree of transparency: users will soon be able to see which ads a page is running. But from a governance perspective, the question is not transparency versus opacity, but what meaningful accountability looks like given the public policy challenge. Framed that way, the answer is clear: far greater transparency from Facebook will be required. Surely, for example, governments should have access to detailed data about all paid content seen by their citizens during an election period.
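
It is worth being concrete about what such accountability might require. The sketch below is a hypothetical illustration in Python; the record fields and helper function are assumptions for the sake of the example and do not describe Facebook’s actual Ad Library or any existing disclosure regime. It shows the kind of machine-readable record, kept for every paid political message, that would let a regulator ask basic questions across tens of thousands of micro-targeted variants.

```python
# Hypothetical disclosure schema for paid political content.
# Field names are invented for illustration, not drawn from any real platform API.

from dataclasses import dataclass, field

@dataclass
class PoliticalAdRecord:
    ad_id: str
    sponsor: str                 # the page or entity that ran the ad
    paid_by: str                 # the declared funding source
    spend_usd: float
    impressions: int
    targeting: dict = field(default_factory=dict)  # e.g. {"age": "18-34", "region": "MI"}
    creative_text: str = ""

def total_spend_by_sponsor(records: list[PoliticalAdRecord]) -> dict[str, float]:
    """Aggregate spending per sponsor -- the kind of question an election
    regulator should be able to answer across all micro-targeted variants."""
    totals: dict[str, float] = {}
    for r in records:
        totals[r.sponsor] = totals.get(r.sponsor, 0.0) + r.spend_usd
    return totals

ads = [
    PoliticalAdRecord("a1", "Group X", "PAC Y", 1200.0, 90_000, {"region": "MI"}),
    PoliticalAdRecord("a2", "Group X", "PAC Y", 800.0, 60_000, {"region": "WI"}),
]
print(total_spend_by_sponsor(ads))  # {'Group X': 2000.0}
```

The design point is that disclosure must be structured and complete, not a scrolling list of ad images: aggregation across every variant is what makes transparency meaningful.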

Governance decisions are increasingly being delegated to private corporations such as Facebook. As more social and political spaces move onto platforms, careful thought should be given to the layered ways in which governance decisions in the public interest are being determined by ultimately unaccountable private organizations. (Photo: Michal Ludwiczak / Shutterstock.com)

Third, we are at risk of losing our grasp of what is real and what is fabricated. As more of our lives become virtual and augmented by technologies we do not understand, we need a serious debate about the role of facts and truth in our democracy. In this sense, the proliferation and monetization of misinformation, and the dominance of algorithmic systems, are not just political or public policy challenges; they are epistemological and ontological ones. When common perceptions of reality become ungrounded, when we no longer know what we know and how we came to know it, and when there is no common version of events (however imperfect), how does a society negotiate collective goods? Shared experience is at the core of democracy, and it is slipping away. This is a really hard problem, but it is on our doorstep. Governments, Canada’s in particular, are putting tremendous resources into building the AI industry without investing equally in understanding its social consequences for the economy, the justice system, human rights, health care, how we fight and kill in war, and even how we perceive reality.

Fourth, we are clearly on the cusp of a new wave of government interventions pushing back against the largely ungoverned power of platform companies. Initiatives will range across election financing, net neutrality, data privacy and hate speech. The European Union, and Germany in particular, is already leading this charge. We could see the banning of programmatic political ads. And we are on the cusp of a new debate about monopoly power and antitrust. But these are crude tools, and the systems that need regulating are getting more complex. AI will increasingly be the engine of our digital infrastructure, and yet these systems are opaque, hidden from view and, ultimately, unknowable even to those who created them. We do not yet have the governance language to hold AI and platforms accountable.

There are three broad categories of regulatory response. First, governments can impose legal and regulatory constraints on speech itself. Initiatives vary by jurisdiction, but new German anti-hate-speech laws, and the potential repeal of Section 230 of the Communications Decency Act in the United States, seek to limit what can be said on platforms and to settle who is ultimately responsible for that speech: the individual who speaks, or the company that distributes and monetizes what is said.

Second, governments can also force greater transparency and accountability from platforms. The principle of “knowability” embedded in the EU General Data Protection Regulation and the proposed Honest Ads Act in the United States would both force platforms to reveal more about how they function. They address the opacity of the algorithms that determine what users see on platforms and whether they are seen. Policies in this area should strive for meaningful transparency: what do we need to know in order to hold platforms accountable? Antitrust efforts are an extension of this principle, in that they regulate what can and cannot be done within the platform economy.

Third, and perhaps most promising, there is a set of policy tools that enables the rights of citizens. These may hold the most promise because they strike at the core structural problem in our digital infrastructure: the collection, sale and automation of our data. The idea that citizens have a right to the data collected about them, and can even decide whether that data is collected at all without being penalized in the services provided to them, radically changes the power dynamic at the core of the platform economy. Data rights and data mobility not only empower citizens to think critically about their data as a valuable asset in the post-industrial economy; they could also lead to a new generation of data innovation, as a new ecosystem emerges in competition with surveillance capitalism — an economy that values our data differently. Rights-enabling policies will ultimately prove more politically feasible (and therefore more consequential) than those that limit speech.
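
As a rough illustration of what data mobility could mean in practice, the sketch below shows a user-initiated export of personal data in an open, machine-readable format that a competing service could import. The schema is invented for this example; the GDPR’s right to data portability does not prescribe any particular format, and no real service is implied.

```python
# Illustrative only: a user-initiated, portable export of personal data,
# in the spirit of the GDPR's right to data portability. The schema is
# invented for this example.

import json

def export_user_data(user_id: str, profile: dict, activity: list) -> str:
    """Bundle a user's profile and activity into a portable JSON document."""
    return json.dumps(
        {"user_id": user_id, "profile": profile, "activity": activity},
        indent=2,
    )

portable = export_user_data(
    "u-123",
    {"name": "Example User", "interests": ["cycling", "local news"]},
    [{"type": "post", "text": "Hello", "date": "2018-03-01"}],
)
print(portable)  # a document the user could carry to a competing service
```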

Platform companies began as tools to help us navigate the digital world and to connect us with our friends and family. These companies are now auto manufacturers, global advertising companies, telecoms and the central distribution channel of the free press, and, critically, they are absorbing many of the functions once delegated to democratic governments. We simply must bring them into the spirit and norms of our systems of collective governance. Doing so will require moving beyond strategies that treat the symptoms of how these platforms harm society and instead focusing clearly and urgently on the structural causes of those harms.

Facebook didn’t fail when it matched foreign agitators with micro-targeted US voter audiences, or when neo-Nazis used the platform to plan and organize the Charlottesville rally. It worked exactly as it was designed to. These design decisions are reshaping society as a whole and, increasingly, what it means to be human. This, at the very least, requires a new and reinvigorated debate about power, technology and democracy.

 

1 See www.statista.com/statistics/277229/facebooks-annual-revenue-and-net-income/.

Works Cited

Levin, Sam. 2017. “Facebook told advertisers it can identify teens feeling ‘insecure’ and ‘worthless.’” The Guardian, May 1. www.theguardian.com/technology/2017/may/01/facebook-advertising-data-insecure-teens.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Taylor Owen is a CIGI senior fellow and the host of the Big Tech podcast. He is an expert on the governance of emerging technologies, journalism and media studies, and on the international relations of digital technology. 
