In recent years, the term “techlash” has emerged in the media to epitomize the steadily rising public concern about leading Silicon Valley firms, as bombshell after bombshell has fallen in quick succession. But it goes without saying that the headlines have mostly implicated a single company: Facebook. The 2016 Gizmodo story that first alleged corporate anticonservative bias; the U.S. presidential election later that year that featured heavy campaign use of questionable digital advertising and social media engagement; the steady trickle of all-too-late admissions by the company that its networks had been infiltrated by Russian disinformation operators throughout the course of the 2016 cycle; the implication that this and other activity may indeed have swung that election; a slew of tremendously harmful content disseminated over the platforms, resulting in the spread of hate, conspiracy and genocide; and perhaps most shocking of all, the Cambridge Analytica revelations. These events have thrown open the case against the entire consumer internet industry, but, most substantially, against Facebook.
Only now are we beginning to see the regulatory picture emerge, as national governments the world over have taken on the mantle of tackling Facebook’s monopoly power in social media. Jurisdictions such as Europe and the United States have already launched a slate of cases against the company, leading to suggestions that a breakup might follow. This raises an obvious question: Is the general thrust of this regulatory action moving in the right direction — that is, the direction that favours consumer protection, competitive markets and human rights? The public indeed appears to be increasingly aware that change to these effects is necessary, but what contours should that change exhibit in the long run?
This article, building on a new paper co-authored with Nick Couldry on the need for a “digital realignment,” explores the specific case of Facebook: What role does the company play in our media ecosystem, how has the company achieved this position of economic importance, and how can we link its current business with the bevy of social harms that apparently emerge out of its platforms’ very existence? Finally, given the present (apparently adverse) circumstances concerning Facebook’s role in democratic societies, what must we do to contain the bad and uplift the good?
Facebook’s Ride to the Top
The media environment is changing before our eyes as a result of evolutions in technology — with ongoing advances in the efficiencies of computing, data storage and connectivity. These three principal technological trends have, together, enabled the rise of new business models such as Facebook’s.
There is nothing new about these trends; they have been growing apace over the past 60 years or more. But sometime during the past 20 years, we reached and surpassed a threshold that enabled a number of other social and economic changes, including:
- the sudden rise of a new business model, premised on the mass gathering of personal and proprietary information to the end of behavioural profiling;
- the use of highly sophisticated but tremendously opaque algorithms, which curate social content and target ads at people; and
- the engagement in aggressive platform growth and corporate development practices that serve to install and maintain hegemony in the consumer internet marketplace, by keeping would-be rivals at bay through anti-competitive measures and maintenance of monopoly.
Facebook took advantage of these circumstances, as did Google, Apple and Amazon — the four firms whose chief executives were at the centre of the House of Representatives’ recent antitrust hearing.
Facebook’s Current Impact
Since its Harvard days, Facebook has become a fundamental part of the digital experience. Today it is not only a critical fulcrum in the media ecosystem but, by extension, a crucial component of society and the social experience. With advances in technology setting an ever-faster pace, social media has become the forum in which citizens and consumers access the news, view entertainment and engage with others. The steady expansion of the internet has created new terrain — new areas in which human interactions and economic activity can grow and evolve.
It is in this context that Facebook has come to dominate the social media sector. Yet, critically, this hegemony has come not principally from Facebook’s technological ingenuity but rather from the combination of the company’s first-mover advantage in offering a product and user interface that resonated with consumers, and the consumer internet industry’s inherent preference for natural monopolies.
And that is, indeed, what Facebook has become: not just a monopoly, but a natural monopoly. The company is, without doubt, a monopoly; it possesses dominant share in several subsectors of the consumer internet industry, be they social media, web-based text messaging or photo-sharing. That dominant share qualifies as monopoly in most major markets; in the United States, the Federal Trade Commission has, in the past, suggested that firms with more than a 50 percent market share could constitute monopoly. In Europe, the lowest market share the European Commission has challenged for anticompetitive behaviours on the basis of monopoly power is 39.7 percent. Further, Facebook’s sub-markets (such as social media or web-based text messaging) are becoming increasingly economically important in society. The proof of this is that tremendous amounts of economic, social and political activity occur over platforms such as WhatsApp, Messenger, Facebook and Instagram. Indeed, it can be argued that these sub-markets are so critically important to democratic societies that jurisdictions such as India, the United States, Europe and Canada (where such platforms are routinely used) should consider applying publicly developed standards on them to protect the public interest.
What makes Facebook’s monopoly natural is the organic tendency for any first mover in the social media market to become the dominant player when certain minimum conditions are met. In Facebook’s case, those conditions include a culturally appropriate user interface, a business model that maximizes profit extraction by way of exploiting users’ personal data and the resulting unilateral manipulation of users’ personal media experience.
The signs of the natural features of the platform’s monopoly are easy to spot:
- Facebook has a tendency to develop and maintain organic barriers to entry, through the build-up of digital and physical infrastructures that shut out upstarts;
- there are powerful network effects that draw an increasing number of users to Facebook’s universe of platforms; and
- tremendous economies of scale are created and enjoyed by the firm as it expands its operations around the world.
Of course, in free-and-open-market circumstances, there is no issue with a firm pursuing the business model it believes will yield greatest long-running profits. That is only the capitalistic system at work. But whenever that business model treads on public interests — civil rights, human rights, consumer rights, economic equity or democratic process — we should hope that democratic societies can recognize that model’s overreaches and react to it with meaningful, transparent policy development.
That is the situation we find ourselves in. By way of capitalism, Facebook and other platforms are infringing upon public rights, and to respond, we need a fundamental digital realignment.
The Case for a Fundamental Digital Realignment
While firms may not intend to cause social and economic unrest or harm, they do. Facebook never intended for Russian agents to infiltrate its platforms and push disinformation at targeted audiences in the voting population during the 2016 US elections. The company never intended for the coordinators of Facebook groups focused on sports and entertainment to draw in unassuming users and begin showering them with conspiracies and political misinformation. And surely, Facebook never intended for its platforms to facilitate what the United Nations has asserted constitutes genocide in Myanmar. Nor did it ever conspire to serve as host to countless conspiracy theories linked to hateful and extremist factions.
Unfortunately, the business model behind Facebook facilitates this slew of digital misuse and harm. Its decisions to maximize user engagement, monetize the related data and conduct behavioural profiling are made with money in mind. While unintended, the engagement generated by hate speech and disinformation is profitable, and the harm that engagement inflicts is, in economic parlance, a negative externality. Simply put, hate speech and disinformation are like a by-product of an industrial process; like chemicals in the water or smoke in the air, they are automatic, ancillary outputs that harm all of society but must be exhausted from the company’s factories in order for its ordinary course of business to persist.
In Facebook’s case, a bandage solution — such as more proficient content moderation — is not enough to correct the course of a business model that perpetuates harmful content (and its widespread impacts). And, as recent history shows us, harmful content will persist regardless of new regulatory red lines. Consider the status quo: Facebook has publicly stated standards for hate speech, and yet seemingly every day, hateful content spreads over the company’s platforms and has substantial media impact — all in violation of Facebook’s corporate policies concerning hate. Again, this is because of the company’s inherent drive to maximize engagement at any expense. Hate, violence, conspiracies and disinformation all sell — users are immediately drawn to such content; scholars have found that fake news travels far faster and farther on social media networks than the boring old truth.
What we do need is a realignment — a direct response to the terrible incentives that encourage platforms to profit from harmful content. Such a realignment would include a recognition among policymakers that the platform’s damaging business model is the result of an unregulated green field of economic opportunity that has been privately exploited at the expense of the citizen, and a global movement toward improved privacy, competition and transparency policies.
As the world shifts toward a digital economy, there is an opportunity for change. The digital economy has the potential to work on the public’s terms, through a system of democratically determined regulatory policy, not on the terms of a private industry that has consistently put profits over people. Where we face uninhibited data collection by Facebook, we require fundamental privacy rights through federal law: rights that afford us the power to control our data, consent to its collection and opt out of algorithmic processing. Where we face highly sophisticated but tremendously opaque algorithms that curate our social content and target ads at us, we require transparency into the ways that algorithms work so that experts, journalists and the public alike can expose possible harms that arise. And where we face aggressive anti-competitive tactics that reduce the pace of market innovation and artificially hold back the rest of the industry, we require robust competition policy that holds Facebook accountable.
While these three components — privacy, competition and transparency — are much needed, they are only the beginning of a digital realignment and they are politically controversial. This is in part because of the economic system that has been adopted throughout the developed world, a system that favours free markets, open innovation and capitalist growth. But there is one thing that democratic societies have always placed ahead of the free market: democracy itself.