Consensus on Transparency Can Hide a Deeper Debate

Transparency may be the first step toward sound regulation, but what do you do with it once you have it?

June 22, 2022
An illustration photo of people looking at their mobiles, projected against a crowd scene, from May 30, 2018. (REUTERS/Kacper Pempel)

Elon Musk’s attempts to first buy and then, seemingly, back out of buying Twitter may be a slow-motion train wreck, but they have had a few beneficial effects. For one, Musk’s decision to forgo due diligence, potentially leaving him on the hook for a $1 billion breakup fee, provides yet another example of how being rich isn’t the same as being smart.

More wonkily, Musk’s play for Twitter has proven so unsettling to so many that it’s led to a critical reassessment of transparency as a policy objective when it comes to digital regulation.

This discussion is interesting because transparency is one of the few recommendations on which pretty much everyone who works on tech policy, no matter their political or ideological stripe, can agree: namely, that we need more of it.

For example, companies such as Apple, Meta/Facebook and Google have endorsed the Santa Clara Principles on Transparency and Accountability in Content Moderation, which call on companies to “provide sufficient levels of transparency around the decisions they make, in order to enable accountability.” Musk, for his part, has focused on algorithmic transparency; that is, making the source code visible for anyone to analyze or copy.

In the United States, the pursuit of transparency has been of interest to, among others, libertarian and left-leaning groups. Last year, for example, the Charles Koch Institute co-sponsored an event titled “The Future of Speech Online: Making Transparency Meaningful,” the other sponsor being the Center for Democracy and Technology, a non-governmental organization that aims to “promote democratic values by shaping technology policy and architecture, with a focus on the rights of the individual.”

(Although perhaps not household names, the billionaire Charles Koch and his late brother, David, have been the subject of more than one biography and much in-depth investigative reporting, where they’ve been described as “hard-core free-market libertarians” and “the primary sponsors of climate-change doubt in the United States.”)

The Santa Clara Principles themselves were announced “in conjunction with” a May 2018 conference, “Content Moderation and Removal at Scale,” put on by 10 organizations, including the Charles Koch Institute and the libertarian Cato Institute.

Here in Canada, University of Ottawa law professor and digital-rights advocate Michael Geist, whose policy interventions tend to reflect concerns about the potential for government overreach in platform regulation, has advocated for greater transparency regarding social media companies’ algorithms, political advertising and enforcement policies around problematic online speech. New Democratic Party member of Parliament Charlie Angus, for his part, has called for an independent agency to promote algorithmic transparency with respect to social media companies. Greater transparency, this time with respect to data collection, is also on the mind of Queen’s University professor emeritus and surveillance scholar David Lyon.

In short, support for greater transparency in tech runs deep.

Certainly, the tech sector is particularly opaque. Black-box algorithms are used in everything from health care to finance to the provision of social services. Our personal data is being collected, sold, used and abused. The processes used to prioritize content on the search engines that have become the equivalent of library card catalogues can often lead to perverse results. And the automated processes used to prioritize, remove or leave problematic online content standing remain objects of mystery.

Businesses and governments are increasingly using automated processes — that is, algorithms — to make consequential decisions affecting our lives, without being clear about how they are making these decisions. Relatedly, they are also engaging in the collection and use of personal data using complicated methods that are hard to understand, even for those who study these issues.

Experts have already identified many problems with Musk’s simplistic call to make Twitter’s recommendation algorithm “open source”: the actual processes that go into ranking tweets have become too complex; the raw information would not mean much to outsiders without “the underlying data used to ‘train’ [these] algorithms,” or access to the company policies that provide the context within which they operate. Openness could also allow outsiders to game the system.
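To make that point concrete, consider a minimal sketch, written in Python and entirely hypothetical (it bears no relation to Twitter’s actual code; the function and feature names are invented for illustration). The ranking logic itself is a few readable lines; everything that determines the outcome lives in learned weights produced from training data the public never sees.

```python
# A toy illustration, not Twitter's actual code: a fully "transparent"
# ranking function whose behaviour is still opaque without its weights.

def rank_posts(posts, weights):
    """Order posts by a weighted sum of their features.

    The code is fully visible, but the weights, learned from private
    training data, are what actually decide the ranking.
    """
    def score(post):
        return sum(weights.get(feature, 0.0) * value
                   for feature, value in post["features"].items())
    return sorted(posts, key=score, reverse=True)

# Hypothetical learned weights; in a real system these would come from
# training on engagement data that outsiders never see.
weights = {"recency": 0.2, "author_followed": 1.3, "outrage_score": 2.7}

posts = [
    {"id": 1, "features": {"recency": 0.9, "author_followed": 0.1}},
    {"id": 2, "features": {"recency": 0.2, "outrage_score": 0.8}},
]

# Post 2 ranks first, but nothing in rank_posts explains why: the answer
# is in the weights and the data behind them, not in the source code.
print([p["id"] for p in rank_posts(posts, weights)])
```

Publishing a function like this would satisfy a literal demand for “open source” while answering none of the questions researchers and regulators actually care about.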

All true and worth heeding: “transparency” is not a cure-all. That doesn’t make it less essential. It does remind us, though, that we should operate with a more nuanced understanding of what transparency is, and what it can, and can’t, accomplish.

To that end, I’ve been particularly fascinated by how a consensus on transparency can hide a deeper debate: What do you do with it once you have it? Which inevitably leads to the age-old and often ideological conflict at the heart of any attempt to regulate any industry: Who should make the rules, and what should they look like?

Transparency and Power

Transparency and power are intimately related. Transparency is essential to hold companies and governments to account. As criminologist John Braithwaite notes in his recent book Macrocriminology and Freedom, “Tempering power cannot work without a level of transparency that renders abuses in one area visible to another sphere of power.”

Transparency is also a tool to hold actors accountable for their actions, and to compel change. We need this — the collection of information about the inner workings and outcomes of digital companies and automated processes — to understand whether things are working the way we want them to. That applies whether the question is, say, the Canadian government’s use of algorithms to sort visa applications, or whether its digital cultural policy provides for sufficient representation of marginalized groups on online platforms. Transparency is a means to an end, a “throughput” rather than an outcome.

The key phrase in Braithwaite’s quote is “another sphere of power.” Putting government use of algorithms aside (although we can easily extend the following discussion to them), companies can be thought of as a sphere of power unto themselves. But they exist in relation to other spheres of power. In our society, the two main spheres of power, to use his language, are the state and the market. Both can use transparency to effect change, but they do so in different ways and with different consequences. Here we have a long-standing, more fundamental conflict between those who believe that the free market and industry self-regulation can discipline corporate bad behaviour, and those who argue that only state regulation can bring these companies to heel.

This debate is largely, but not exclusively, ideological. Many people hold fervent views on the proper role of the state and the market in society. However, thinking through how transparency “works” in each sphere can help us break through preconceived notions to evaluate which approach — the market or the state — is most likely to deliver the positive outcomes we want.

Transparency and the Market

In the market transparency model, the final enforcer is the consumer. Transparency is designed to give the individual as consumer the information needed to choose among many competing options, and to understand how the market works. That includes information about any adjudication processes the company might have in place for consumers who want to appeal alleged violations of its terms of service. Consumers can avail themselves of a company’s (clearly presented) adjudication processes or, assuming a sufficient degree of competition, take their business elsewhere.

Typically, a market-based model places its emphasis on things like consumer choice and the promotion of digital literacy, on the assumption that educated consumers can protect their own rights and interests in the market. The state’s role is largely limited to ensuring this literacy, guaranteeing competition — such as in the recent US push to apply its antitrust laws to large online companies — and setting very minimal baselines (think: prohibiting child sexual exploitation), while ensuring that companies are sufficiently transparent and live up to their terms of service. On the whole, this approach leaves a platform’s rules up to the market, propelled by informed consumers and market competition as the main way to discipline bad actors.

The market transparency model has two primary fault lines: the degree of competition possible in the market, and the ability of consumers to process and use all this information. The less competitive the market — and currently, these markets are not very competitive — the more freedom companies have to act with impunity. This relationship matters a lot, given that the whole point of the platform as a business model is to use network effects to achieve a kind of natural monopoly. Proposals to increase competition by making platforms interoperable are one possible way around this problem. However, the further away you are from a fully competitive market, the less responsive companies will be to market pressure.

And even in the presence of strong competition, this model also puts a lot of pressure on busy individuals to make sense of issues such as data governance and content moderation that, frankly, stymie many experts. This individualist approach to regulation is yet another example of what academics call “responsibilization”: giving individuals the responsibility, previously assumed by government departments, to regulate every aspect of their lives. For example, it assumes we all have the time to work our way through companies’ adjudication processes when we feel we’ve been wronged.

The European Union’s General Data Protection Regulation (GDPR), the law that has led to all those pop-up boxes requiring you to “accept” or “reject” the collection of your personal data, is an example of responsibilization in action. How effective is it? Ask yourself: Do people really understand what they’re rejecting or accepting when they click these buttons? Do they understand the consequences? Or do most people — do you — just click “accept” to get to the website?

As every first-year economics student learns, competitive markets can only deliver their benefits if people have “perfect information”: this is where transparency supposedly comes in. However, people also have to be capable of interpreting this information, whether it’s how these data markets work, the different terms of service they encounter or how their actions may affect others. They then have to make use of it — whether that’s figuring out where and how to switch providers (if any are on offer) or navigating a “transparent” private complaints process.

Even the most successful of these market-based proposals assume that individual consumers’ desires aggregate to create socially optimal outcomes. There’s no reason to think this is so, especially since individual decisions about, say, data collection, can affect others. I may be okay with a credit-card company collecting my personal data. But if that data gets sold and used to set credit-rating thresholds that end up denying credit to my neighbour, they may not be too happy with me. If our informed decisions don’t reflect the effects of our actions on others — what economists call an externality — then those actions, on their own, will not lead to socially optimal outcomes.

Transparency and State Regulation

If transparency in a market-based model is designed to empower the consumer, transparency in a state-led model is designed to empower the regulator with the information needed to regulate these companies and algorithmic processes effectively. This transparency renders abuses visible to the state, and the government acts by creating and enforcing laws and regulations. Similar to consumers, regulatory agencies must have a level of digital literacy to understand how to regulate in the public interest, the definition of which is subject to democratic contestation.

The (democratic) state and its regulatory agencies have two advantages over consumers and the market when it comes to addressing abusive behaviour.

First, they are purpose-built for such activities: governments are supposed to pass and enforce laws. In the quest to discover alternative governance mechanisms, we often lose sight of this basic point. Governments’ main challenge in this area involves the need to build capacity to regulate effectively.

Second, democratic governments, unlike individual consumers, are capable of identifying and promoting a public interest, which they can then compel companies to adopt. This identification of a public interest through debate and elections is the essence of democratic politics, and provides the legitimacy underlying government regulations.

What’s more, governments are capable of acting against monopolies, not only by encouraging competition but also by regulating. Unlike a market transparency-based approach, these functions don’t require a fully competitive market to be effective.

The Role of Academics and Civil Society

Academia and civil society, for their part, represent a secondary sphere of power. Calls to increase access for academics to the inner workings of these companies are part of the general call for transparency. However, unlike the state or the market, scholars exert an indirect — although still important — form of influence. The research and advocacy of academics and civil society are inputs into democratic processes, broadly defined: they educate and advocate by working through the state and by pressing companies directly. These actions are in no way equivalent to actual regulation — even the most influential civil society organization, at its best, can only cajole — and these actors are far too partial to be capable of reflecting, on their own, the balance of a society’s myriad interests. They’re nonetheless essential, especially as a check on state and corporate power.

The Tough Calls

In a sense, calls for transparency are the easy part of digital regulation. This isn’t to say mandating transparency is a simple task. These companies highly prize their confidential information and intellectual property. Trade agreements, for example, are increasingly “prohibiting access to or the transfer of source code as a condition of the import, distribution, sale, or use of software” in ways that leave “only a small window for states to require access to source code.” In a recent article, legal scholar Magdalena Słok-Wódkowska and management scholar Joanna Mazur argue that by using such provisions in agreements such as the Canada-United States-Mexico Agreement, “states hamper their ability to develop regulatory measures that could ensure transparency of algorithmic governance tools.”

And, as mentioned earlier, although companies such as Google have indeed endorsed the aforementioned Santa Clara Principles, “indicating increasing industry buy-in to these important standards,” according to the Electronic Frontier Foundation, an American digital rights group, they retain the power to determine how to interpret them.

There are a million ways to use and abuse transparency. The real fight, as always, is over who will be allowed to set the rules, and what the rules will be.

For example, both Michael Geist and the Canadian Commission on Democratic Expression (CCDE), an initiative of the Ottawa-based Public Policy Forum, emphasize the importance of transparency. Geist leads with it in an episode of his podcast that offers a proposal on how to balance addressing misinformation with protecting freedom of expression, while the CCDE makes transparency the number-one theme in a recent report examining how to ensure a better balance of power over platforms’ control systems (the other two themes being accountability and empowerment).

Geist’s proposal takes the more market-friendly approach, driven by a desire to prevent the overregulation of speech. Government’s role would be largely limited to ensuring companies are being transparent in their operations and are living up to their codes of conduct and community guidelines, backed up by transparent enforcement policies. Rule-setting power in such a model would remain primarily with the company. In contrast, the CCDE seems to envision a relatively greater role for state regulation. Thinking through how transparency works can help us evaluate the consequences of market- and state-led approaches such as these.

In both cases, the devil is in the details, and the big fight is over what these rules will look like, what they would allow and what they would prohibit. Even Geist’s more market-friendly approach would require some baseline standards to be set by government. But what will these be? What would count as “adequate” transparency, or a fair private adjudication process? That’s where the fight is. It’s also where the stakes are highest. Because, at the end of the day, someone — industry or government — has to set the rules, to set limits.

To be clear, none of this involves, or should involve, a binary choice between the state and the free market. The CCDE proposal, for example, may be more favourably disposed to regulation, but it also supports greater “public education and digital literacy initiatives.”

In practice, we need both a high-capacity democratic state and a competitive market to move our society toward outcomes in which neither the state nor companies exert arbitrary power over individuals. We need both strong government regulation and markets that are as competitive as possible to bring digital abuses — whether of data or of regulation-by-algorithm, by companies and governments — to heel.

At the same time, though, we must be realistic about the limits of relying on the market to deliver effective regulatory outcomes. Transparency is helpful for consumers and governments, but only governments, and only high-capacity, democratic governments, can make full use of transparency to regulate in the public interest.

Meanwhile, as a matter of practical policy, the problem of building state capacity is, at present, a greater one than any risk that companies will end up with too little power. Currently, markets, not governments, are the dominant spheres. Efforts at regulation are an attempt to redress this decades-long imbalance.

What’s more, we are moving into an era of global crises, most notably the climate emergency, and geopolitical challenges, including the outsized influence of American and Chinese platforms on our politics, which the market and companies are unsuited to address. We need a high-capacity state — one able to regulate at a high level while acting democratically. If transparency is the first step toward sound regulation, enabling governments to take advantage of this transparency is the next.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Blayne Haggart is a CIGI senior fellow and associate professor of political science at Brock University in St. Catharines, Canada. His latest book, with Natasha Tusikov, is The New Knowledge: Information, Data and the Remaking of Global Power.