Major social media platforms have empowered individuals who were previously unable to make their ideas visible on a large scale. They have dramatically altered advertising markets by serving news directly to users, to the detriment of media brands, thereby diverting an important source of income from news media. They have built user profiles by accumulating traces of online navigation and exploited those profiles for targeted advertising. And they have accelerated the dissemination of various kinds of harmful content, such as hate speech and disinformation. As a result, policy makers everywhere are rushing to regulate the digital leviathans, but in the haste, the protection of fundamental rights risks being overlooked.
This classic theme of media policy — the protection of pluralism and diversity — warrants exploration in the context of the current media landscape revolution, and it requires a whole toolbox of solutions.
Not so long ago, when the world of media and communication was (apparently) simpler than it is today, international law on the right to freedom of expression created a duty for states to enact a legal and regulatory framework that facilitates the development of free, independent and pluralistic media landscapes. The expectation was that public authorities in a democratic society, while refraining from restricting the independence of media actors, should adopt measures and regulate markets in order to ensure the sustainability of a plurality of media actors (media pluralism) and the availability of the broadest possible diversity of information and ideas for the general public (media diversity). Such a legal framework would enable the media to provide the range of views that audiences need to develop opinions and act as informed citizens.
Data shows that nowadays a growing part of the population turns to social media for news. Thus, we have to ask ourselves: what do pluralism and diversity become in the context of media landscapes structured around the distribution of news on social media platforms? To find an answer, we need to look at different layers. First, the market concentration: in the social media sector, there seems to be little or no plurality of players. Second, the business model for online distribution of news: we may ask how it impacts the existence of a plurality of media actors on a given (national) market, including (but not limited to) analysis of vertical integration between distribution and production of news. Here, a good example is Facebook’s new “News Tab” — will it support or undermine pluralism in the media market? The answer greatly depends on the criteria used to select the news and the publishers that Facebook will include in the section.
While these first two layers are relevant, there is a third layer that we want to investigate in more detail, precisely because it might at first appear to be a matter of the past: the diversity of content. It’s tempting to simply be amazed by the fabulous profusion of online content; indeed, the internet is the true realm of diversity if ever there was one. However, in an “economy of attention” and in a world of personalized distribution of content, we must consider diversity differently: as a matter of individual users being exposed to diversity, rather than the mere existence of diversity. Are users of dominant social media platforms exposed to a satisfying degree of diverse content allowing them to act as informed citizens?
While legacy media (themselves evolving and intertwined with social media) still provide broadly the same content for every reader, social media platforms highly personalize the content distributed to each user. Social media platforms do not produce or edit content themselves, but they nonetheless make choices, through a combination of algorithmic and human decisions, that shape what content is distributed at the individual level. This is a different role and possibly a different responsibility than that of a content producer or editor. To compose a user’s feed, the algorithms rely on signals such as trending topics, items liked and shared by friends and contacts, and the user’s profile. Social media platforms harvest a huge amount of data from each user; the more precision with which the platforms profile their users, the more personalized the content they can offer individual users. This content includes advertising, but also more general content and news. It is easy to see that this selection cannot be representative of the diversity of content available online, and thus is not enough to ensure that users can make informed decisions or act as informed citizens.
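The kind of signal-driven feed composition described above can be sketched in a few lines of code. This is a deliberately naive illustration, not any platform's actual algorithm: the signal names, weights and scoring formula are all hypothetical assumptions chosen only to show how personalization narrows what each user sees.

```python
# Illustrative sketch only: a toy scoring function combining the kinds of
# signals mentioned in the text (trending topics, friend engagement, and a
# match against the user's profile). All field names and weights are
# hypothetical; real ranking systems are vastly more complex.

def score_item(item, user_profile, weights=(0.4, 0.4, 0.2)):
    """Score a feed item by trendiness, friend engagement and profile match."""
    w_trend, w_friends, w_profile = weights
    # Fraction of the item's topics that overlap with the user's interests.
    profile_match = len(item["topics"] & user_profile["interests"]) / max(len(item["topics"]), 1)
    return (w_trend * item["trending_score"]
            + w_friends * item["friend_engagement"]
            + w_profile * profile_match)

def compose_feed(items, user_profile, limit=10):
    """Return the highest-scoring items for this particular user."""
    return sorted(items, key=lambda i: score_item(i, user_profile), reverse=True)[:limit]
```

Because the score depends on the individual user's profile, two users given the same pool of items receive different feeds, which is precisely why the resulting selection cannot be representative of the diversity of content available online.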
However, research seems to show that users, collectively, are exposed to a higher diversity of news sources on social media through “incidental exposure” than in the offline context. Such research should contribute to keeping regulatory instincts in check. But the question remains, especially around major issues of public interest, such as electoral campaigns or health issues: how can we ensure that social media platforms provide each user with a representative overview of the broad diversity of the world? Such is indeed the ideal outcome that freedom of expression seeks to ensure: that citizens get at least a general sense of the various ideas and opinions that are out there. With access to such material, they could explore options and navigate their way through the maze of civic participation. But how much diversity is sufficient diversity, and how can a platform achieve it?
Mandating that Platforms Improve Content Distribution
First, social media platforms could work to guarantee some form of “due prominence” to certain content, in order to ensure users’ exposure to a broad(er) diversity of views. Projects such as the Reporters Without Borders’ Journalism Trust Initiative (JTI) can serve to delineate how a duty of due prominence would translate in the context of content distribution on social media platforms. JTI is a certification scheme that identifies media companies that work according to high professional standards: as such, JTI provides reliable “signals of trust” that can be used in the programming of algorithms to improve the visibility and accessibility of reliable content on social media platforms. A duty to give “due prominence” to a diversity of media content could take on a variety of technical forms — from flagging content to the attention of users or displaying a snippet of related information to the design of a specific segment of the screen.
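One simple technical form such a duty could take is a re-ranking layer on top of the existing feed: items from sources carrying a JTI-style trust certification are guaranteed a minimum presence at the top. The sketch below is an illustrative assumption about how this could work, not a description of JTI or of any platform's implementation; the reserved-slot mechanism and all names are hypothetical.

```python
# Hypothetical sketch of a "due prominence" duty layered on top of an
# engagement-based ranking: sources carrying a trust certification (e.g. a
# JTI-style signal) are guaranteed a minimum number of slots at the head of
# the feed. The mechanism shown here is an illustrative assumption.

def apply_due_prominence(ranked_items, certified_sources, reserved_slots=3):
    """Promote certified items so the top of the feed always contains some."""
    certified = [i for i in ranked_items if i["source"] in certified_sources]
    head = certified[:reserved_slots]           # best-ranked certified items
    tail = [i for i in ranked_items if i not in head]
    return head + tail                          # feed length is unchanged
```

Note that the items are promoted, not inserted: the feed stays the same length and no content is removed, which keeps the intervention closer to "prominence" than to editorial substitution.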
Further, social media platforms could contribute to financing the production of news. Facebook, for instance, remunerates certain media companies for adding their content to the new News Tab, an arrangement that itself raises concerns in terms of media pluralism. Taxing social media platforms to finance news would come with challenges: Google, for example, has sidestepped new copyright-based payment obligations in the European Union by shutting down its news service in countries such as Spain, Belgium and France. Google also funds innovative journalistic initiatives; while some of the resulting products are impressive, such funding might also restrict media independence. In such a model, standards should be instituted to ensure the protection of media freedom, and they could be adapted from the safeguards that apply to public subsidies for the media.
Another traditional instrument of media policy could serve as inspiration: the "must-carry" duty that regulators have imposed upon cable companies since the 1960s. The duty aims to ensure the distribution and visibility of a variety of television channels, especially local ones, which compete for a limited number of slots on cable networks. Adapted to the context of social media platforms, a must-carry duty would guarantee a certain degree of visibility to a plurality of news media in a given market. Considering that local news is disappearing, to the detriment of local democracy, this obligation could help local media achieve sustainability.
We also note that these different options would not necessarily have to be created through legislation: they could be elaborated, negotiated and fine-tuned through a multi-stakeholder, transparent and accountable dialogue. ARTICLE 19, the free speech organization for which both authors work, envisions this through the creation of social media councils.
Reworking the Market to Support Diversity of Content
Beyond the major platforms themselves, the social media market is also ripe for reform. One way to guarantee more diversity in content exposure could be to open up the market for content moderation and allow more providers to offer that service to users. Currently, major social media platforms face no competitive pressure over the way they provide content moderation to their users. If, however, different companies were enabled to provide content moderation to users on dominant social media platforms, they would likely compete with one another to offer better services. Different companies would likely rely on different criteria for the personalization and recommendation of content, rather than relying solely on whatever content engages users most. A more privacy-friendly service might also result, as companies would need to be more transparent about their business models. This plurality of providers would likely lead to a plurality of models for content moderation, enhancing diversity in content exposure, especially in the long term. Users, for their part, could choose how they want their content moderation to be performed, and by whom.
Moving from Theory to Practice
With some work, these scenarios could turn into reality. Regulators could require dominant platforms to separate their hosting and content moderation functions, and to allow third parties to access their platforms in order to provide content moderation to users. Imposing functional separation on vertically integrated companies is not a new idea: unbundling network and service activities has been widely required by regulators in the telecom sector. Moreover, functional separation would not prevent dominant social media platforms from offering content moderation to their own users; it is the users who would make the decision to opt in. In other words, when creating a profile on Facebook, for example, the user would be asked to select a content moderation provider, and Facebook could remain one of the options. Ideally, and to avoid further lock-in, users should remain free to change their choice at any time through the platform's settings.
Technically, this solution raises a few challenges. For one, in order to provide content moderation on a platform, alternative providers must have access to the platform’s APIs (application programming interfaces), or be able to integrate their own API with the platform. Moreover, adequate safeguards should be put in place to guarantee that consumers’ data is collected, processed, stored and used according to the principles of the EU General Data Protection Regulation. Standards and best practices have a major role to play in both cases.
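The functional separation just described can be sketched as a narrow software interface: the host platform delegates moderation to whichever provider the user has selected, and providers compete on how they implement it. The sketch below is a minimal illustration under stated assumptions; every class, method and field name is hypothetical, and no real platform exposes this API.

```python
# Illustrative sketch of functional separation: the host platform exposes a
# narrow interface, and interchangeable third-party providers implement
# content moderation behind it. All names here are hypothetical assumptions.

from abc import ABC, abstractmethod

class ModerationProvider(ABC):
    @abstractmethod
    def filter_feed(self, items: list[dict]) -> list[dict]:
        """Return the items this provider selects, in its preferred order."""

class EngagementProvider(ModerationProvider):
    """Mimics the status quo: rank purely by engagement."""
    def filter_feed(self, items):
        return sorted(items, key=lambda i: i["engagement"], reverse=True)

class DiversityProvider(ModerationProvider):
    """A competing model: keep at most one item per source."""
    def filter_feed(self, items):
        seen, out = set(), []
        for item in items:
            if item["source"] not in seen:
                seen.add(item["source"])
                out.append(item)
        return out

class Platform:
    """Host that delegates moderation to the user's chosen provider."""
    def __init__(self, provider: ModerationProvider):
        self.provider = provider  # user-selected, switchable at any time

    def build_feed(self, items):
        return self.provider.filter_feed(items)
```

Because the provider is a constructor argument rather than baked into the platform, switching models is a one-line change, which is exactly the kind of user choice and competitive pressure the proposal envisions.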
Finally, policy makers should find sustainable ways to support a diversity of business models for content moderation. After all, if alternative providers moderated content following the same model currently deployed by dominant platforms, their systems would most likely produce very similar results in terms of limited content exposure.
Notwithstanding these challenges, this form of functional separation appears to be an efficient solution, largely because it constitutes a remedy to a real harm — the reduction of diversity in content exposure. The evolution of content distribution on dominant social media platforms challenges the traditional protection of pluralism and diversity, and while a number of regulatory solutions are on the table (including some of those mentioned above), the complex problem urgently calls for a collective discussion involving all stakeholders.