It’s Time to Rethink the Standard Frame of Content Moderation

The volume of social media content makes it impractical to carry out ex post review for every case of takedown.

September 14, 2023
Elon Musk, owner of X, formerly Twitter, attends the Viva Technology conference at the Porte de Versailles exhibition centre in Paris, France, June 16, 2023. (Gonzalo Fuentes/REUTERS)

In January, several UN experts, including the Special Rapporteur on contemporary forms of racism and the Special Rapporteur on violence against women, publicly criticized the leaders of major social media corporations for their platforms’ treatment of hate speech. The chief concern was inadequate enforcement of the platforms’ own policies on hate speech.

Until now, social media companies have largely addressed concerns around hate speech within the ambit of content moderation. And the prevailing form of that moderation is post-incident review, through notice and takedown of content that violates a given platform’s terms of service. This frame is lacking, however, in that it fails to account for other considerations, including the design choices that influence online content. By framing content moderation within the boundaries of ex post review, attention shifts away from the platform practices that might serve as preventive measures.

The Standard Picture of Content Moderation

Over the years, conversations around content moderation have fixated on taking the procedural mechanisms around fairness and due process adopted by courts and attempting to apply them to individual cases of content takedown by social media companies. In a recent article titled “Content Moderation as Systems Thinking,” legal scholar Evelyn Douek argues that because of this preoccupation, regulatory and academic discourse on injecting accountability and reducing errors in content moderation has overwhelmingly focused on individual procedural rights such as notice, review and appeal. Unfortunately, the emphasis on individual cases can divert attention from the systemic problems within a platform’s ecosystem that contribute to issues such as the prevalence of gender- and race-based hate speech.

Digital platforms have leaned into demands around individual redress and the replication of judicial processes. The quintessential example is the Oversight Board, first announced by Facebook in 2018 and often referred to as its Supreme Court. The board serves as the highest authority for appeals and revisions (within the platform’s ecosystem) against the enforcement of Facebook and Instagram’s “Community Standards,” which outline the platforms’ terms of service. These platforms have, as Thomas Kadri, professor of law and technology at the University of Georgia School of Law, observes, embraced “court-themed branding,” as it allows them to frame their decisions as fair and neutral while downplaying bottom-line considerations of “profit, efficiency, speed, and scale.”

In reality, the sheer volume of content that social media companies handle makes it impractical to carry out ex post review for every takedown. Moreover, content moderation teams do not work in silos: they often operate collaboratively, engaging with and taking into account the objectives of other teams within the organization, such as those working on cybersecurity and disinformation.


The standard picture of content moderation is not only inaccurate; it is also insufficient.

A critical inadequacy of relying only on individual review is that it makes systemic problems in a platform’s ecosystem difficult to detect. Hate speech that contravenes a platform’s terms of service is not solely attributable to individual errors made by human content moderators or by artificial intelligence (AI) systems. Such content may also persist because of systemic problems or breakdowns in the platform’s internal workings.

But platforms, regrettably, have not always been forthcoming in recognizing or acknowledging systemic failures in their ecosystem.

In June 2023, Meta’s shareholders voted against a proposal to assess the company’s role in the dissemination of hate speech in India, its largest market. The Internet Freedom Foundation, an advocacy organization that champions digital rights in India, emphasized the significance of the proposal, observing that it confronted Facebook’s “failure to address risks and political bias” and voiced “concerns around inadequate content moderation and lack of transparency in platform practices.”

Beyond the Notice and Takedown Version of Content Moderation

Social media companies do periodically publish transparency reports that provide a broad overview of content moderation on their platforms. Douek, professor of law at Stanford Law School, believes these reports could be used to uncover the diverse and complex mechanisms of content moderation that are currently obscure to the public, and to document how effective those mechanisms have been.

Social media companies have, for instance, responded to pressures to address the spread of hate speech and fake news by making changes to their platform design to prevent or slow their spread. These modifications must be treated as part of a company’s larger content moderation policy. For example, in 2018 WhatsApp limited the number of times an individual user in India could forward a message, in order to check the viral spread of rumours that had resulted in multiple mob-lynching incidents across the country. In a bid to curb misinformation, YouTube has started to place vetted and verified third-party information next to content on topics that are frequently subject to fake news and conspiracy theories. Twitter (now X) tested a prompt encouraging users to read articles they had not actually clicked through before retweeting them, to promote informed conversations. Transparency reports could detail such design measures and evaluate how successful, or unsuccessful, they have been.

Other scholars, such as Julie E. Cohen, professor of law and technology at Georgetown University Law Center, suggest looking beyond the conventional paradigm of content governance to tackle the spread of hate speech and fake news. Such an undertaking could shed light on how the features, capabilities and affordances of digital platforms influence the kind of content we come across online. One critical feature, Cohen notes, is the ability to micro-target advertisements on digital platforms on the basis of users’ social and political affinity.

Moreover, certain platforms allow advertisers to use social, demographic and geographic indicators as well as behavioural profiling data to define and reach their target audience. For example, non-conventional advertisers such as political campaigners can use browsing data to micro-target political advertisements. Platforms, in turn, reward advertisers who generate greater engagement with better placement of their ads. This incentive structure encourages the amplified circulation of content within a contained target audience, creating a fecund milieu for the spread of disinformation and hate speech. Regulating how targeted advertising is carried out on social media could be valuable in containing the amplification of polarizing content.

Improving mechanisms for post-publication notice and takedown of harmful content without also addressing the features and affordances of digital platforms that enable the creation and spread of such content in the first place is an exercise in futility. Content moderation can only be impactful in the fight to promote the rights of marginalized communities and uphold human rights if we seek to improve the platform ecosystem as a whole.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Amrita Vasudevan is a CIGI fellow and an independent researcher focusing on the political economy of regulating digital technologies and investigating the impact of these technologies through a feminist lens.