How Wikipedia Became an Indispensable Part of the Internet

December 19, 2019

The initial problem that social networks solved was one of collective action. While the internet gave voice to individuals previously excluded by the gatekeepers of the public sphere, and allowed like-minded people to find each other in chat rooms and on listservs, it was still difficult to quickly coordinate behaviour. Social networks filled the gap — they allowed speech to go viral and travel with velocity (which, for example, allowed voices from Tahrir Square to reach audiences around the world during the Arab Spring), and they allowed people to connect in real time on shared, and even ephemeral, areas of concern.

This combination of viral speech and connection — what are often called “loose ties” — led to a new layer of collective action. You no longer needed command-and-control hierarchical institutions or to have access to a publication to get large numbers of people to act in a coordinated way.

While the rise of social networks solved the problem of connection, it created a problem of reliability. In part due to their success (billions of users around the world rely on Facebook, YouTube and Twitter), platforms needed systems for prioritizing the content that users saw and engaged with. Largely, this was tackled with algorithmic, content-serving systems — but they weren’t based on democratic process or even the principles of free speech. The algorithms were written with a business model in mind — namely, the attention economy. This model depends on growing the user base and having users spend more and more time on the site. As a result, content algorithms prioritized engagement. Unfortunately, content that is engaging isn’t necessarily truthful. When platforms optimized for sharing, clicks and comments, they lost sight of the value of reliable information. These precedent-setting algorithmic decisions contributed to a system that is, in some ways, responsible for the current crisis — the flood of misinformation online.

Katherine Maher (a guest on the Big Tech podcast) sits at the heart of this new challenge. For over a decade she’s worked on issues of democratic empowerment and the internet at UNICEF, the National Democratic Institute, the World Bank and Access Now. Maher watched the rise of social platforms and saw their potential for empowerment; she also saw the transition to the current moment, in which business decisions drive the flow and amplification of information — true or otherwise. And with that arc, she transitioned her own role; today, Maher is helping to rebuild the very architecture of the internet as the executive director of the Wikimedia Foundation.

While Wikipedia was once derided for its perceived lack of traditional authority (teachers and professors around the world have famously, and in my mind erroneously, counselled students not to use it as a source), the website is now the single largest home of reliable information ever created. And this puts it, and Maher, at the centre of the debate about how our digital infrastructure must evolve.

As Maher points out, there is a difference between trust and reliability. “Wikipedia is not meant to be truth. It’s simply what we can sort of agree upon as general consensus about an issue at any point in time.” In this sense, Wikipedia used the tools and norms of the digital infrastructure (for example, the ability to get loosely connected individuals to act in a coordinated fashion) to work toward a process that results in reliable information — information that directly addresses the internet’s crisis of reliability. Instead of using algorithms designed for engagement, Wikipedia uses people organized in a way that optimizes for reliable information.

But creating a site that produces reliable information is only part of the solution. In this moment of misinformation and political polarization, reliability must be embedded into our digital public sphere. Arguably, this is where Wikipedia has incredible potential.

First, as platforms ramp up their efforts to adjudicate the quality, and even the veracity, of content on their sites (steps they take either by choice or because of market pressure or government regulation), they will require context for their editorial decisions. Wikipedia provides this database of reliable information on which to ground this ultimately subjective process.

Second, much of the information we consume online is either given to us or created by artificial intelligence. Whether they’re delivering search results, the content on social feeds, responses from our smart speakers or recommendations from chatbots, technological systems are only as reliable as the information they are fed. Increasingly, Wikipedia is their source of information.

In this sense, Wikipedia is quickly becoming an indispensable layer of reliable information in our digital infrastructure. It represents the return of human judgment and knowledge into the architecture of our public sphere. And it could not happen a moment too soon.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Taylor Owen is a CIGI senior fellow and the host of the Big Tech podcast. He is an expert on the governance of emerging technologies, journalism and media studies, and on the international relations of digital technology.