Why Transparency Won’t Save Us

The public is burdened with duties it cannot possibly fulfill: to read every terms of service, understand every complex case of algorithmic harm, fact-check every piece of news

February 18, 2021
An attendee takes a photograph during Facebook Inc's F8 developers conference in San Jose, California on April 30, 2019. (Reuters/Stephen Lam)

In a society beset with black-boxed algorithms and vast surveillance systems, transparency is often hailed as liberal democracy’s superhero. It’s a familiar story: inject the public with information to digest, then await their rational deliberation and improved decision making. Whether in discussions of facial recognition software or platform moderation, we run into the argument that transparency will correct the harmful effects of algorithmic systems. The trouble is that in our movies and comic books, superheroes are themselves deus ex machina: black boxes designed to make complex problems disappear so that the good guys can win. Too often, transparency is asked to save the day on its own, under the assumption that disinformation or abuse of power can be shamed away with information.

Transparency without adequate support, however, can quickly become fuel for speculation and misunderstanding. Even the Snowden leaks — possibly the most spectacular exposure of data-collection systems in the last 20 years — did not simply illuminate the truth for all. As I write in my book, Technologies of Speculation, the information that Edward Snowden brought forth was often practically impossible for the public to fully grasp. For one thing, the classified National Security Agency files that he copied and disclosed were so sprawling that Snowden himself tacitly admitted that he had not read everything. Even as the initial leaks dominated the news cycle, a survey from the Pew Research Center reported that among those Americans polled, more than half were following the affair either “not too closely” or “not at all closely.”

But to blame an inattentive or ignorant public is to miss the larger point: that too often, information is hung out to dry, thrown to the wolves. The anthropologist Mary Douglas once wrote that it is our great fantasy to think that certainty is produced by “hard facts impinging on neutral minds.” Further, in practice, throwing new facts on the table will regularly generate new uncertainties, because “new and half-tried theories are milling around looking for facts to establish them.”


Indeed, Snowden’s revelations were often used as fodder for misinformation and conspiracy theories, many of which tapped into longstanding Orwellian and Cold War tropes that also fertilize “deep state” theories today. Snowden himself was often accused of not only treason, but also of being a Russian agent, a Chinese agent, a double agent and so on. In one well-noted instance in 2014, an Iranian paper cited a translation of a Snowden interview to claim that the Islamic State was the product of an American plot — except that said interview likely never took place.

Similar patterns have played out since with Cambridge Analytica, and with other exposés on the harms of data collection and algorithmic decision-making systems. A case in point is the recent depiction of Facebook and YouTube in the Netflix docu-drama The Social Dilemma as knowing everything about us and controlling exactly what we think. While not exactly a conspiracy theory, it is a credulous fantasy that strengthens the air of inevitability around big tech, and one that distracts us from strategizing genuinely effective forms of platform governance.

Disinformation scholar Whitney Phillips argues that contrary to the cliché, light doesn’t always disinfect: too often, rendering hate speech or conspiracy theories “visible” results in amplifying and legitimizing those views. As we are now seeing with QAnon theories and COVID-19 misinformation, throwing facts on the table doesn’t always have a corrective effect, and may even provoke people into doubling down. In her research on conservative evangelical groups, Francesca Tripodi shows that people fall into disinformation rabbit holes not through a lack of research, but rather an abundance of research — routed through alternative sources or mediators. Just this month, Marjorie Taylor Greene, the QAnon believer who once suggested that Jewish space lasers were behind the 2018 California wildfires, attempted to excuse herself by saying she was just “looking at things on the internet, asking questions like most people do every day, us[ing] Google.” The problem lies not with the lack of information but with how we process that information.

Too often, transparency ends up as a form of free labour, where we are saddled with dis- or misinformation but deprived of the capacity for meaningful corrective action. What results is a form of neo-liberal “responsibilization,” in which the public becomes burdened with duties it cannot possibly fulfill: to read every terms of service, understand every complex case of algorithmic harm, fact-check every piece of news. This shift in responsibility makes it, implicitly, our fault for lacking technological literacy or caring enough about privacy — never mind that vast amounts of money and resources are poured into obfuscating how our data is collected and used. This is the crux of the problem. Transparency is often valued as the great equalizer, a way to turn the tables on those in power and to correct the harms of technological systems. But sometimes what you need to correct abuses of power isn’t more information — it’s a redistribution of power.

This point becomes all the sharper when information technologies are directly involved in life-or-death situations. The police killing of George Floyd in May 2020 has, again, raised questions around the role of “bodycam” technology. Here is a quintessential example of the faith placed in new technologies to produce new forms of transparency, and thus to correct a long history of discrimination, abuse and violence. Yet Derek Chauvin knelt on Floyd’s neck in full view of not only his colleagues’ body cameras but also citizen witnesses filming with smartphones. Yes, the footage drew the world’s attention — but it did not deter the officers, and it did not save Floyd’s life. As Ethan Zuckerman has put it, the problem wasn’t information — it was power.

Here we find the pernicious consequence of the myth that information is “sunlight” and that information alone can expunge wrongdoing. Rules are routinely flouted, with officers often citing technical malfunction or lost equipment as an excuse for missing video. Three days after Floyd’s death, the Minneapolis Park Police Department released bodycam footage, but it was so heavily redacted that for much of it you were left staring not at human beings but at black squares. This was no isolated incident; it is telling that even as protests over the killing of George Floyd erupted around the country, the Chicago police union sought to destroy police misconduct records before it was stopped by the Illinois Supreme Court. As researchers (such as Kelly Gates, in her essay “Counting the Uncounted”) have shown, what becomes transparent about those killed by police — their misdeeds, their personal lives — tends to be far more exhaustive than the data we can get about the police themselves.

All this is part of a broader pattern in which the very groups who should be held accountable by the data tend to be its gatekeepers. Facebook is notorious for transparency-washing strategies, in which it dangles data access like a carrot but rarely follows through on actually delivering it. When researchers worked to create more independent means of holding Facebook accountable — as New York University’s Ad Observatory did last year, using volunteer researchers to build a public database of ads on the platform — Facebook threatened to sue them. Despite the lofty rhetoric around Facebook’s Oversight Board (often described as a “Supreme Court” for the platform), it falls into the same trap of transparency without power: its scope is limited to individual cases of content moderation, with no binding authority over the company’s business strategy, algorithmic design, or even similar moderation cases in the future.

Here, too, the real bottleneck is not information or technology, but power: the legal, political and economic pressure necessary to compel companies like Facebook to produce information and to act on it. We see this all too clearly when ordinary people do take up this labour of transparency, and attempt to hold technological systems accountable. In August 2020, Facebook users reported the Kenosha Guard group more than 400 times for incitement of violence. But Facebook declined to take any action until an armed shooter travelled to Kenosha, Wisconsin, and killed two protesters. When transparency is compromised by the concentration of power, it is often the vulnerable who are asked to make up the difference — and then to pay the price.

Transparency cannot solve our problems on its own. In his book The Rise of the Right to Know, journalism scholar Michael Schudson argues that transparency is better understood as a “secondary or procedural morality”: a tool that becomes effective only in concert with other means. We must move beyond the pernicious myth of transparency as a universal solution, and address the distribution of economic and political power that is the root cause of technologically amplified irrationality and injustice.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Sun-ha Hong is an assistant professor of communication at Simon Fraser University.