Five Things Platforms Can Do Today to Fight Disinformation

February 16, 2021
Facebook Chief Executive Mark Zuckerberg. (Reuters/Stephen Lam)

In the middle of a pandemic and in the aftermath of the storming of the US Capitol, major social media platforms are, once again, under the microscope for their role in enabling the spread of conspiracy theories, hate and disinformation. Rightly so. Platforms take actions every day that shape our public information ecosystem, deciding how content is moderated, ranked and targeted, and how social groups are built, with very little democratic oversight.

Reforms to the legislative framework that defines the democratic oversight of digital platforms are sorely needed, but the platforms do not need to wait for the US Congress to reform Section 230 or for the European Union to adopt the Digital Services Act. In fact, there are many measures that platforms could start implementing today to fight disinformation more effectively.

In November 2020, the Forum on Information and Democracy published a report on how to counter “infodemics.” The 128-page document synthesizes the work of dozens of lawyers and researchers, and was led by a steering committee co-chaired by Maria Ressa and Marietje Schaake. Among that report’s hundreds of recommendations, here is a sampling to illustrate key measures platforms can take to combat the infodemic.

1. Disclose Up-to-Date Information Regularly

In terms of transparency, major platforms should:

- Disclose up-to-date reference documents on each core function of their algorithms: ranking (how they rank, organize and present user-generated content), targeting (how they target users with unsolicited content, usually as a paid service, at their own initiative or on behalf of third parties), moderating, making social recommendations and detecting content. These documents should be released to vetted researchers and regulators, and platforms should explain the objectives their algorithms are optimized for.
- Disclose, every day, the top-performing content reaching the highest number of users. Today, only employees of the platforms can know this.
- Create an advertisement library and disclose in real time the advertisements viewed on their services, with advertiser information, the target audience and the number of views each advertisement receives. Key disclosures should appear alongside the advertisement itself on the platform, including a visually prominent banner stating who funded the ad.
- Allow users to obtain all user information pertaining to them (both collected and inferred) that the company holds, in a structured data format that enables practical data portability and interoperability.
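As a concrete illustration, an ad-library entry could be published as a structured record. This is a minimal sketch under assumed field names (the schema below is invented for the example, not any platform's actual format):

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class AdDisclosure:
    """One real-time entry in a hypothetical public ad library."""
    ad_id: str
    funder: str            # shown in a visually prominent banner on the ad itself
    advertiser: str
    target_audience: dict  # the targeting criteria the advertiser selected
    views: int             # updated in real time as the ad is served

entry = AdDisclosure(
    ad_id="ad-001",
    funder="Example PAC",
    advertiser="Example Media Ltd.",
    target_audience={"age": "18-34", "region": "US", "interests": ["politics"]},
    views=12_450,
)

# Structured data lets researchers and regulators query disclosures at scale.
print(json.dumps(asdict(entry), indent=2))
```

Because each record is plain structured data, the same format can double as the user-facing export for data portability.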

2. Develop a Secure Platform That Enables Independent Audit

Platforms should develop, at their own cost, a secure platform that enables accredited outside researchers to access the data they need to conduct research of general interest and carry out independent audits. "Differential privacy" techniques could make this transparency safe for users' privacy.
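To show why differential privacy can reconcile research access with user privacy, here is a minimal sketch: Laplace noise, calibrated to a sensitivity and privacy budget epsilon, is added to an aggregate count so that no single user's presence is revealed while the statistic stays useful. The parameter values are illustrative assumptions, not recommendations:

```python
import math
import random

def dp_count(true_count: int, epsilon: float = 0.5, sensitivity: int = 1) -> float:
    """Return a differentially private version of an aggregate count.

    Adding or removing one user changes the count by at most `sensitivity`,
    so Laplace noise with scale sensitivity/epsilon masks any individual.
    """
    scale = sensitivity / epsilon
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    # Inverse-CDF sampling of Laplace(0, scale) noise.
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise
```

A researcher querying how many users saw a given post would receive `dp_count(n)` rather than the exact `n`; repeated queries would draw down a privacy budget tracked by the platform.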

3. Hire More Moderators and Improve Content Review

In terms of content moderation, platforms should reaffirm their commitment to the United Nations Guiding Principles on Business and Human Rights, which already require businesses to respect human rights wherever they operate. Platforms' content responses must promote individual autonomy, security and free expression, using de-amplification, demonetization, education, counter-speech, reporting and training as alternatives, when appropriate, to banning accounts and removing content. Platforms should hire more moderators and spend a minimum percentage of their income to improve the quality of content review, especially in at-risk countries or situations. When independent fact-checkers determine that a piece of content is disinformation, platforms should show a correction to every user exposed to it, meaning anyone who viewed, interacted with or shared it.
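The "correct the record" step above can be sketched in a few lines. The exposure logs here are hypothetical stand-ins for the data a platform would already hold; the point is simply that the exposed audience is the union of viewers, interactors and sharers:

```python
# Hypothetical exposure logs: post_id -> set of user ids.
views = {"post42": {"alice", "bob"}}
interactions = {"post42": {"bob", "carol"}}
shares = {"post42": {"dana"}}

def users_exposed(post_id: str) -> set:
    """Everyone who viewed, interacted with or shared the post."""
    return (views.get(post_id, set())
            | interactions.get(post_id, set())
            | shares.get(post_id, set()))

def issue_correction(post_id: str, correction: str) -> dict:
    """Deliver the fact-checkers' correction to each exposed user."""
    return {user: correction for user in users_exposed(post_id)}
```

For example, `issue_correction("post42", "This claim was rated false.")` would reach all four exposed users, not only those who shared the post onward.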

4. Prevent the Creation of “Filter Bubbles”

In terms of design, platforms should implement cooling-off periods for targeted advertising and organic content, so that beyond a certain threshold of impressions, a platform's recommendation engine would be required to deliberately switch to displaying different content. Platforms should limit the number of times that similar types of organic content or advertisements from the same advertiser are seen by one user. When content reaches a certain threshold, online service providers should trigger an internal viral circuit breaker that would temporarily prevent the content from algorithmic amplification in newsfeeds or from appearing in trending topics or other algorithmically aggregated and promoted avenues. Individual posting or message sharing could still occur. The algorithmic pause would allow the necessary time for a platform to review the content. Viral content should automatically be placed at the top of a queue for third-party fact-checking. These steps would help to prevent the creation of "filter bubbles," and the amplification of content from groups associated with hate, misinformation or conspiracies.
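The viral circuit breaker described above can be sketched as follows. This is a minimal illustration, assuming an invented impression threshold and post fields; the key behaviours are that the breaker pauses only algorithmic amplification (direct sharing still works) and pushes the post to the top of a fact-checking queue:

```python
from collections import deque

VIRAL_THRESHOLD = 10_000   # impressions before the breaker trips (illustrative)
fact_check_queue = deque() # viral posts go to the front of this queue

def on_impression(post: dict) -> None:
    """Count an impression and trip the circuit breaker at the threshold."""
    post["impressions"] += 1
    if post["impressions"] >= VIRAL_THRESHOLD and not post["paused"]:
        post["paused"] = True                    # pulled from feeds and trending
        fact_check_queue.appendleft(post["id"])  # top priority for fact-checkers

def eligible_for_amplification(post: dict) -> bool:
    """Only algorithmic amplification pauses; users can still share directly."""
    return not post["paused"]
```

In this sketch the pause is a simple flag; a real system would add a review timer and restore amplification once fact-checkers clear the content.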

5. Limit Forwarding Features

For private messaging systems, platforms should limit their forwarding features to reduce the risk of abuse of this functionality. Service providers could restrict users to forwarding to one chat group at a time, preserving a platform's usefulness as a closed messaging service while making that service more difficult to exploit. Platforms should also label messages created by bots or sent by business accounts.
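A forwarding limit of this kind is simple to express in code. The sketch below assumes invented names and an in-memory inbox; it enforces one chat group per forwarding action and attaches a label to bot and business messages:

```python
inboxes: dict = {}  # group_id -> list of delivered messages (illustrative)

def label(message: dict) -> dict:
    """Flag messages from bot or business accounts before delivery."""
    if message.get("sender_type") in ("bot", "business"):
        message["label"] = "automated or business account"
    return message

def deliver(message: dict, group_id: str) -> None:
    inboxes.setdefault(group_id, []).append(message)

def forward(message: dict, group_ids: list) -> list:
    """Forward to at most one chat group per action; return groups reached."""
    allowed = group_ids[:1]  # one chat group at a time
    for gid in allowed:
        deliver(label(message), gid)
    return allowed
```

Forwarding to many groups then requires many deliberate actions, which adds friction to mass forwarding without breaking ordinary one-to-one or small-group use.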

These are just a few measures platforms could implement today to slow or stop the spread of disinformation. At the same time, it is urgent that our legislators impose new regulations and provide the necessary democratic oversight of digital platforms. It is past time to supplement the self-regulation framework that private digital platform companies currently benefit from, and to intelligently regulate, and consider outlawing, some existing practices, in order to protect platform users and our democracies.

To succeed in these efforts, democratic governments need to come together. We need a new global governance structure for digital technology to ensure the effective and coordinated democratic oversight of platforms. The steps toward concrete change should be led by a group of like-minded governments working closely with civil society; this path is the democratic path.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Delphine Halgand-Mishra is a CIGI senior fellow and an expert on press freedom and regulatory frameworks for platforms.