Regulating Them Softly

October 28, 2019
If you consume a decent amount of news coverage and popular commentary on technology policy issues, you doubtless frequently come across the statement that the companies running popular platforms for user-generated content, such as Facebook, Instagram, Twitter and YouTube, are “unregulated.”

Although this argument reflects a sentiment that has been amassing an odd form of bipartisan support in the United States and a number of Western European countries — that platform companies currently have insufficient responsibilities to their users and to the public, and that their operation is creating a host of negative social and political externalities — it is off the mark in two respects. The first, frequently noted by frustrated academics in opinion pieces and Twitter threads, is that companies serving as intermediaries for user-generated content are regulated; it is just that these “intermediary liability” provisions, as enacted in legislation such as the European Union’s E-Commerce Directive and the United States’ Communications Decency Act, are intentionally laissez-faire, carefully crafted to protect free expression and allow for innovation.

The second, less discussed problem is that regulation comes in many flavours. The regulator’s toolbox includes various forms of “soft law” alongside more traditional “hard” legislation. In the past decade, a network of “informal governance” initiatives — backroom deals, informal codes of conduct and other voluntary mechanisms — has been a major channel through which certain stakeholders, such as security-focused actors within the European Union, shaped the policies of platforms long before the current “techlash.”

To ignore these other forms of governance is to miss a hugely important dimension of today’s global politics of online content: the contentious battles between firms, governments and civil society actors that have shaped the global terms of service affecting the billions of people using the American superplatforms each day.

Questions of Corporate Governance

The global governance of corporations has always been a fraught, difficult affair. In the search for profit, environmental standards and labour rights have been ignored, books have been cooked, tax authorities evaded, and worse.

In the past few decades, in the absence of a world government (or meaningful international coordination) for policing and punishing bad actors, a growing number of private organizations and initiatives have been created in an effort to shape corporate behaviour through voluntary standards and transnational rules. Some of the earliest instances of this trend involved codes of conduct, often initiated under the umbrella of large international organizations such as the World Health Organization, which notably struck a deal with Nestlé in 1984 after a multi-year consumer boycott and international activist campaign that followed the company’s infant formula scandal (Sikkink 1986).

Since then, non-governmental and industry organizations have increasingly developed their own initiatives, with dozens of efforts seeking to create standards and outline best practices around sustainability (for example, ISO 14001 and the Forest Stewardship Council), labour rights (such as the Fair Labor Association and the Worker Rights Consortium) and many other areas (Fransen and Kolk 2007).

This scaffolding of various voluntary arrangements, public-private partnerships, industry-specific measures and other informal regulatory instruments, often called “transnational governance,” is today an essential feature of the global regulatory landscape for firms across a host of industries.

This landscape is imperfect, but it has become an important part of the battle for corporate accountability. The stakes are high: from finance and natural resource extraction to manufacturing, big corporations can have a significant social and political impact. As UN Special Rapporteur Philip Alston once asked, “Does Shell’s sphere of influence in the Niger Delta not cover everything ranging from the right to health, through the right to free speech, to the rights to physical integrity and due process?” (quoted in Ruggie 2007, 826).

Today, companies such as Facebook, Google, Amazon and Apple have fashioned a global sphere of influence that raises many of the same questions, and policy makers and some civil society actors have attempted to answer them in a manner similar to that of other industries: through informal transnational governance.

The EU Approach

In a recent article published in Internet Policy Review, I discussed the role of informal regulation for governing online content published on platforms in Europe.

As the technology lawyer Christopher Marsden (2011) has outlined, European internet regulation has employed “co-regulation” and other soft-law measures since at least 2005. Child safety was one early frontier. In 2008, the European Union’s Social Networking Task Force convened multi-stakeholder meetings with regulators, academic experts, child safety organizations and a group of 17 social networks, including Facebook, MySpace, YouTube and Bebo. This process led to the creation of the “Safer Social Networking Principles for the EU,” described as a “major policy effort by multiple actors across industry, child welfare, educators, and governments to minimize the risks associated with social networking for children” through more intuitive privacy settings, safety information and other design interventions (Livingstone, Ólafsson and Staksrud 2013, 317).

The European Union rolled out similar techniques to try to minimize the availability of terrorist content. In 2010, the Netherlands, the United Kingdom, Germany, Belgium and Spain sponsored a European Commission project called “Clean IT,” which would develop “general principles and best practices” for combatting online terrorist content and “other illegal uses of the internet [...] through a bottom up process where the private sector will be in the lead” (quoted in Gorwa 2019). The Clean IT coalition, which featured significant representation from European law enforcement agencies, initially appeared to be considering some very hawkish proposals (such as requiring all platforms to enact a real-name policy and to “allow only real pictures of users”), leading to pushback from civil society and the eventual end of the project (European Digital Rights 2013). However, the project helped lay the ideological foundations for the European Union’s approach to online terrorist content by advocating for more aggressive terms of service and industry takedowns in lieu of formalized legislation.

In 2014, the European Commission laid out its plans for the “EU Internet Forum,” which brought together EU governments with Facebook, Google, Microsoft, Twitter and the anonymous question-and-answer website Ask.FM to discuss how platforms should best combat illegal hate speech and terrorist content (Fiedler 2016). These meetings led to the 2016 EU Code of Conduct on online hate speech, signed by the aforementioned big four, and effectively resulted in platforms tweaking their terms of service globally to better reflect EU interests (Citron 2017).

Civil society groups engage in transnational governance as well, issuing valuable guiding principles and declarations and founding multi-stakeholder transparency and accountability organizations. In 2008, the Global Network Initiative (GNI) was formed after human rights groups and policy makers pressured major technology companies over their conduct in China (Maclay 2010). The organization, which counts Facebook, Google and Microsoft as members alongside a number of major civil society organizations, developed a set of high-level principles based on international human rights law that each member company commits to internalize; guidelines on how those principles should be implemented in practice, including commitments to conduct human rights assessments and publish transparency reports; and an “accountability framework” that outlines a system of oversight, including company self-reporting, independent auditing and various “compliance” mechanisms. The public seems to know little about the organization, perhaps because the GNI’s public output is limited, and because it was formed in another era, when governments were perceived as the bad actors of most pressing concern. Relatively little scholarly work has examined its effects and impact, but recent conversations around creating a “Social Media Council” for content moderation follow in the GNI’s tradition of attempted civil society oversight of platforms.

Once civil society-led governance initiatives emerge, firms often create their own competing initiatives. Facebook looks set to be the first to create a voluntary self-regulatory body for content policy, having recently published an eight-page “Charter” following a consultation period (Darmé and Miller 2019). The body, which will provide some oversight of and input into Facebook’s content policy process, has recently been described by legal theorists as a form of “structural constitutionalism” through which the company is becoming more governmental and developing a “judicial branch” of sorts (Kadri and Klonick 2019, 38). A more parsimonious explanation would conceptualize Facebook’s effort as another example of a private, informal governance arrangement, more akin to the dozens of certification, advisory and oversight bodies that have long been established in the natural resource extraction and manufacturing industries (Gorwa 2019).

Although the “resource” being governed in the case of Facebook’s moderation oversight board is novel (user-generated content, speech) relative to other industries, the process (a company’s policies being scrutinized by a body) is not. Regulatory scholars argue that companies pursue these kinds of arrangements for a host of reasons, including to “improve their bargaining position with other actors, to win public relations points, and to evade more costly regulation” (Abbott and Snidal 2009, 71), all of which seem plausible in the Facebook case. Only time will tell how exactly the oversight board pans out, and whether it ends up creating meaningful accountability, but one must be clear-eyed about the motives and interests behind such initiatives.

Making Platform Governance More Just

This hodgepodge of transnational governance initiatives, along with the past efforts of the European Union and other players, has demonstrated what can happen when firms are brought to the bargaining table in a concerted political effort (often by the threat of regulation and sanctions). For example, EU pressure led to the creation of the Global Internet Forum to Counter Terrorism (GIFCT), which helps coordinate industry takedowns of terrorist content. The GNI helped incentivize companies to publish transparency reports, now an industry-standard practice at least to a certain extent, and to set out best practices around government content removal requests.

However, these forms of soft regulation have also raised a host of serious due process, accountability and freedom of expression concerns. The GIFCT is ultra-secretive, publishing very little information about its “Shared Industry Hash Database” of “terrorist propaganda” (Llanso 2019). As European Digital Rights has documented, the EU Internet Forum and the subsequent hate speech code of conduct marginalized civil society voices from the get-go and were problematic in a number of ways (Fiedler 2016). Even the GNI is an opaque organization, with members bound by non-disclosure agreements; it reveals frustratingly little public information about the results of its company audits and other activities.
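It is worth pausing on what a shared hash database actually does, because the mechanics explain why its secrecy matters. The sketch below is a deliberately simplified illustration, not GIFCT’s actual (unpublished) implementation: it assumes exact-match cryptographic hashing, whereas real systems reportedly use perceptual hashes that survive re-encoding, and every name in it is hypothetical.

    import hashlib

    # Hypothetical sketch of cross-platform hash-database coordination.
    # GIFCT does not publish its implementation; this is illustrative only.
    shared_hash_database = set()  # digests contributed by all member platforms

    def fingerprint(file_bytes: bytes) -> str:
        """Return a SHA-256 digest identifying this exact file."""
        return hashlib.sha256(file_bytes).hexdigest()

    def flag_content(file_bytes: bytes) -> None:
        """One platform adds a removed file's digest to the shared set."""
        shared_hash_database.add(fingerprint(file_bytes))

    def should_block(file_bytes: bytes) -> bool:
        """Check a new upload against the shared database before it goes live."""
        return fingerprint(file_bytes) in shared_hash_database

    # One platform flags a file; every member platform then catches re-uploads.
    flag_content(b"example flagged video bytes")
    print(should_block(b"example flagged video bytes"))  # True
    print(should_block(b"some unrelated upload"))        # False

Even this toy version makes the accountability problem visible: whatever one member adds to the shared set is matched everywhere, while outsiders see only opaque digests, with no public record of what was listed or why.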

We need to do better. Civil society needs to be included, not marginalized; these organizations should not operate in secret, and should be as transparent as possible; and governance efforts need to become far more representative of the hugely diverse users they purport to serve. Why not try to make platform governance more participatory and democratic, rather than purely technocratic?

Of course, that is easier said than done: such an effort will require difficult cooperation and compromise, and must acknowledge from the outset that no group can simply go it alone. As the political scientists Ken Abbott and Duncan Snidal have argued, in “transnational settings no actor group, even the advanced democratic state, possesses all the competencies needed for effective regulation” (Abbott and Snidal 2009, 68). Meaningful collaboration will be necessary.

Amid today’s ambitious proposals for social media councils, platform regulators and oversight bodies — all of which may have significant effects on freedom of expression and other human rights — codes of conduct, self-regulatory bodies and other forms of “soft” regulation may appear an increasingly attractive proposition. But we need to be careful, studying the lessons of the past decade of platform regulation and learning from the comparable accountability mechanisms introduced in other industries. As the politics of content moderation and intermediary liability become not only more visible but also more consequential, the status quo won’t cut it for much longer.

Works Cited

Abbott, Kenneth W. and Duncan Snidal. 2009. “The Governance Triangle: Regulatory Standards Institutions and the Shadow of the State.” In The Politics of Global Regulation, edited by Walter Mattli and Ngaire Woods, 44–88. Princeton, NJ: Princeton University Press.

Citron, Danielle Keats. 2017. “Extremist Speech, Compelled Conformity, and Censorship Creep.” Notre Dame Law Review 93: 1035–71. http://ndlawreview.org/wp-content/uploads/2018/03/Citron-03.pdf.

Darmé, Zoe Mentel and Matt Miller. 2019. Global Feedback & Input on the Facebook Oversight Board for Content Decisions. Menlo Park, CA: Facebook. https://fbnewsroomus.files.wordpress.com/2019/06/oversight-board-consultation-report-1.pdf.

European Digital Rights. 2013. “RIP CleanIT.” European Digital Rights, January 29. https://edri.org/rip-cleanit/.

Fiedler, Kirsten. 2016. “EU Internet Forum against terrorist content and hate speech online: Document pool.” European Digital Rights, March 10. https://edri.org/eu-internet-forum-document-pool/.

Fransen, Luc W. and Ans Kolk. 2007. “Global Rule-Setting for Business: A Critical Analysis of Multi-Stakeholder Standards.” Organization 14 (5): 667–84. https://doi.org/10.1177%2F1350508407080305.

Gorwa, Robert. 2019. “The Platform Governance Triangle: Conceptualising the Informal Regulation of Online Content.” Internet Policy Review 8 (2). https://policyreview.info/articles/analysis/platform-governance-triangle-conceptualising-informal-regulation-online-content.

Kadri, Thomas and Kate Klonick. 2019. “Facebook v. Sullivan: Building Constitutional Law for Online Speech.” St. John’s Legal Studies Research Paper No. 19-0020. https://papers.ssrn.com/abstract=3332530.

Livingstone, Sonia, Kjartan Ólafsson and Elisabeth Staksrud. 2013. “Risky Social Networking Practices Among ‘Underage’ Users: Lessons for Evidence-Based Policy.” Journal of Computer-Mediated Communication 18 (3): 303–20. https://doi.org/10.1111/jcc4.12012.

Llanso, Emma. 2019. “Platforms Want Centralized Censorship. That Should Scare You.” Wired, September 26. www.wired.com/story/platforms-centralized-censorship/.

Maclay, Colin Miles. 2010. “Protecting Privacy and Expression Online: Can the Global Network Initiative Embrace the Character of the Net.” In Access Controlled: The Shaping of Power, Rights, and Rule in Cyberspace, edited by Ronald J. Deibert, John Palfrey, Rafal Rohozinski and Jonathan Zittrain, 87–108. Cambridge, MA: MIT Press.

Marsden, Christopher T. 2011. Internet Co-Regulation: European Law, Regulatory Governance and Legitimacy in Cyberspace. Cambridge, UK: Cambridge University Press.

Ruggie, John Gerard. 2007. “Business and Human Rights: The Evolving International Agenda.” The American Journal of International Law 101 (4): 819–40. www.jstor.org/stable/40006320.

Sikkink, Kathryn. 1986. “Codes of Conduct for Transnational Corporations: The Case of the WHO/UNICEF Code.” International Organization 40 (4): 815–40. www.jstor.org/stable/2706830.

The opinions expressed in this article are those of the author and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Robert Gorwa is a CIGI fellow researching platform governance, content moderation and other transnational digital policy challenges.