Not Just Governors: Platform Rules and Public Law

June 13, 2022

This essay is part of The Four Domains of Global Platform Governance, an essay series that examines platform governance from four distinct policy angles: content, data, competition and infrastructure.

In a resonant phrase, Kate Klonick conferred upon the major American social media companies the title of “the New Governors.” She deployed that moniker to illustrate what “private platforms are actually doing to moderate user-generated content and why they are doing so” (Klonick 2018). Given, in particular, their scale, their massive user bases and their potential for causing, facilitating, mitigating or preventing all manner of harm, governing aptly characterizes the work platforms do to determine the speech that is tolerated and the speech — and speakers — that are disciplined, even banished. The term has a decidedly internal character: “governing” describes how the companies develop and enforce their own private terms of service, articulated through variously named guidelines, rules or standards, with respect to content that users upload to their privately owned platforms. It suggests a relationship between the companies and their users, the governors and the governed. To be sure, their rules may take into account the off-platform consequences of user-generated content. But whether drawn from company business models, a refraction of the US Constitution’s First Amendment or a human rights framework, these rules essentially look inward in an effort to create an internally cabined kind of platform law.1 And as long as they are inward-looking, contained within a supposed bubble of company governance, one might think that their influence beyond the platform may extend to a conception of developing industry standards — a cross-corporate converging of rules and values — but not much beyond that.

And yet something has changed in recent years, accelerated over the course of 2021. With pressure from the public, governments, politicians, and human rights defenders and monitors, the biggest of the platforms have begun to explain their decisions around high-profile cases. This is not to say that they are adequately explaining the full range of content decisions they make; so far, the trend is haphazard, a tiptoe into the world of “governing” transparency. The research program Ranking Digital Rights, in evaluating 26 of the leading actors in technology and communications, noted in its 2020 index that “the global internet is facing a systemic crisis of transparency and accountability.”2 Nonetheless, in the context of some platform decisions and rulemaking, the companies are more regularly deploying a public-facing rhetoric drawn from public law. Twitter and YouTube explained their decisions to remove or suspend the account of then-US President Donald Trump in more detail than they typically do for such actions. Facebook created a tribunal-like mechanism, the Facebook Oversight Board, to reach decisions on some of the hardest content questions that appear before the company, articulating the centrality of human rights standards in its earliest decisions.

Meanwhile, democratic governments are considering regulation that may increase such public reasoning. The European Union has preliminarily adopted a Digital Services Act (DSA)3 (European Commission 2022) that could incentivize company articulation of the reasons for account actions, which (given trends) would likely involve human rights and other public law standards. Consider, for instance, article 15 of the draft DSA, which requires internet companies that take action against accounts or content to notify account holders “of the decision and provide a clear and specific statement of reasons for that decision.” A German federal court4 went so far as to find that Facebook’s failure to provide adequate notice about its rules and actions interfered with the rights of users in Germany. Similar kinds of transparency mandates are being considered in the United States and the United Kingdom. The mechanisms of international human rights law — at the United Nations and in regional fora — are moving in similar normative directions in a push for clarity and disclosure.

What are we to make of company articulation of public law standards? At one level, this move to public law may be no more than private company implementation of such instruments as the UN Guiding Principles on Business and Human Rights. Put another way, these new steps may help develop a broader understanding of how companies in the technology space conceive of their human rights responsibilities and implement them in the context of specific cases. When fed back into the UN business and human rights mechanisms, company decision making may provide substantial learning about the intersection of state human rights obligations and company responsibilities. Likewise, it may be that the language of content-moderation decision and rulemaking necessarily borrows from public law terminology and frameworks, given the kinds of issues at stake. Approaching content-moderation explanations from this perspective may reflect no more than an effort to create or borrow from a shared language so that company decisions, standards and processes are understandable to a public that increasingly sees the companies as powerful state-like forces within society.

But there is another element here that deserves consideration: a possibility of private rulemaking’s spillover into public norms. That is, will company articulation of principles of public law — whether framed as constitutional law, human rights law or regional law governing fundamental rights — have an impact not only on the platforms’ rulemaking and enforcement but also on public law itself? Might private rulemaking lead to legal development in public law? Or put another way: Will private rulemaking and enforcement influence the shape and content of global norms for freedom of expression, privacy and other human rights? If so, what are the pathways to such impact? Can we expect that public institutions will refer back to company decision making that articulates human rights or other public law standards, almost as a kind of development of a public-private common law of user-generated content? Would this be a good thing? What are the risks, if any, to principles of democratic governance? Should public law incentivize or constrain these developments as an aspect of regulation of the super-dominant companies of the tech industry? Should public regulation aim to channel decisions regarding public norms into democratically legitimate fora?

The global debate over the power of social media — and the power of super-dominant companies in the internet sector, at all levels of “the stack” — has reached an inflection point. Regulation is coming — indeed, in authoritarian environments, it is already here — and the shape and resilience of global norms are at stake. Policy makers, legislators, jurists and the public need to think through the questions just posed in order to determine how, as Benjamin Barber asked nearly a quarter-century ago, the internet can “belong” to us in a democratic sense.

  1. See Report of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, UNGA, 38th Sess, UN Doc A/HRC/38/35 (2018), online: <https://documents-dds-ny.un.org/doc/UNDOC/GEN/G18/096/72/PDF/G1809672.pdf?OpenElement>.
  2. See https://rankingdigitalrights.org/index2020/executive-summary.
  3. European Commission, Proposal for a Regulation of the European Parliament and of the Council on a Single Market For Digital Services (Digital Services Act) and amending Directive 2000/31/EC, COM(2020) 825 final, online: <https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52020PC0825&from=en>.
  4. Bundesgerichtshof [German Federal Court of Justice], Press Release No 149/2021, on claims against the provider of a social network that deleted posts and blocked accounts over allegations of “hate speech,” online: <www.bundesgerichtshof.de/SharedDocs/Pressemitteilungen/DE/2021/2021149.html>.

Works Cited

European Commission. 2022. “Digital Services Act: Commission welcomes political agreement on rules ensuring a safe and accountable online environment.” Press release, April 23. https://ec.europa.eu/commission/presscorner/detail/en/IP_22_2545.

Klonick, Kate. 2018. “The New Governors: The People, Rules, and Processes Governing Online Speech.” Harvard Law Review 131: 1598–670.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

David Kaye is a professor of law at the University of California, Irvine, and director of its International Justice Clinic.

The Four Domains of Global Platform Governance

In the span of 15 years, the online public sphere has been largely privatized and is now dominated by a small number of platform companies. This has allowed the interests of publicly traded companies to determine the quality of our civic discourse, the character of our digital economy and, ultimately, the integrity of our democracies. This essay series brings together a global group of scholars working in four distinct domains of the platform governance policy discourse: content, data, competition and infrastructure.