In January, the Irish Data Protection Commission fined Meta 390 million euros for breaching the General Data Protection Regulation (GDPR) in delivering services to Facebook and Instagram users. The commission found that the company cannot rely on users’ acceptance of its terms of service as consent to process personal data for behavioural advertising purposes, as users effectively have no choice but to accept the terms if they want to use the platforms.
The decision comes at a time when Meta and other large tech platforms are already facing significant ad revenue shortfalls, leading some to predict the death of surveillance capitalism. But what if these celebrations are premature? What if what comes next is even worse?
At this stage, law and policy makers, civil society and academic researchers largely agree that the existing business model of the Web — algorithmically targeted behavioural advertising based on personal data, sometimes also referred to as surveillance advertising — is toxic. They blame it for everything from the erosion of individual privacy to the breakdown of democracy. Efforts to address this toxicity have largely focused on a flurry of new laws (and legislative proposals) requiring enhanced notice to, and consent from, users and limiting the sharing or sale of personal data by third parties and data brokers, as well as the application of existing laws to challenge ad-targeting practices.
In response to the changing regulatory landscape and zeitgeist, industry is also adjusting its practices. For example, Google has introduced its Privacy Sandbox, a project that includes a planned phaseout of third-party cookies from its Chrome browser — a move that, although lagging behind other browsers, is nonetheless significant given Google’s market share. And Apple has arguably dealt one of the biggest blows to the existing paradigm with its App Tracking Transparency (ATT) feature, which requires apps to obtain specific, opt-in consent from iPhone users before collecting and sharing their data for tracking purposes. ATT effectively prevents apps from collecting a user’s Identifier for Advertisers (IDFA), a unique Apple identifier that allows companies to recognize a user’s device and track its activity across apps and websites.
But the shift away from third-party cookies on the Web and third-party tracking of mobile device identifiers does not equate to the end of tracking or even targeted ads; it just changes who is doing the tracking or targeting and how they go about it. Specifically, it doesn’t provide any privacy protections from first parties, who are more likely to be hegemonic platforms with the most user data. The large walled gardens of Apple, Google and Meta will be less impacted than smaller players with limited first-party data at their disposal.
For example, McKinsey estimates that “the publishing industry will have to replace up to $10 billion in ad revenue with a combination of first-party data gathered through a combination of paywalls and required registrations, and updated contextual targeting and probabilistic audience modeling.” Moreover, first-party data is typically more accurate, relevant and fresh, and in turn more valuable for personalization and targeting purposes, while also being subject to fewer legal restrictions.
As digital advertisers, marketers and publishers scramble to future-proof their revenue streams amid the crackdown on third-party cookies and other Web-based and mobile tracking tools (pixels, fingerprinting, and device and other personal identifiers), we might not like what comes next. While some cookie-based targeting may be replaced with contextual or interest-based ads, other replacements are more concerning. For example, some industry players are calling for techniques that effectively perform the same function as third-party cookies by other means, including numerous proposals for a more invasive universal ID — a single unique identifier that would allow advertisers and ad tech companies to identify users across the entire digital ecosystem, including across different websites and devices.
With the growing adoption of augmented reality (AR), virtual reality (VR), and other mixed and extended reality (XR) technologies, companies and advertisers will increasingly have access to more invasive personal data to leverage for ad-targeting purposes, including enhanced biometrics such as gaze and eye tracking and heart rate, as well as GPS coordinates and other location markers. And, as they augment or supplement reality in this way, we also risk having private commercial interests crowd out formerly public, physical spaces, further commodifying our experiences and interactions.
Even though advertising, including targeted advertising, is here to stay (and likely to grow more invasive), companies will still have to supplement it with other revenue streams as laws become more sophisticated and consumers sustain their pressure. Since the GDPR took effect, we’ve seen a massive uptick in news outlets and publishers introducing paywalls and subscription services, and, more recently, companies like Twitter and Meta have rolled out paid-for-verification schemes (which will, incidentally, make their first-party data even more accurate and powerful than it already is). These combined changes mean we are likely to be more personally identifiable, and thus easier to track, target and surveil, than we were before the crackdown on existing modes of behavioural advertising.
We are also likely to see new business models proliferate, especially in the context of AR, VR and XR, including “freemium” and premium subscription models; more in-app or in-platform purchases for digital goods, features and functionality; and fees on payments and microtransactions. While similar business models and revenue streams are already popular in the context of games such as Fortnite, extending them broadly to digital services, including our core communications and news platforms, could popularize and normalize pay-to-play schemes that effectively impose a tax on those who can afford it the least.
This is not to defend targeted behavioural advertising and its tangible, negative consequences. Nevertheless, as we celebrate the death of one toxic business model, we must keep our guard up lest we usher in something potentially just as, or even more, problematic in its place. This shift is also an example of how focusing on data distracts us from what’s really at stake in the context of commercialized digital tools and technologies: manipulation, discrimination, harassment, exclusion, and more.
In fact, rather than a “business” model of any kind, we need an alternative to the private sector’s control over the large-scale spaces in which we communicate, interact and organize.
In other words, as we rightfully dismantle harmful behavioural advertising practices, we should focus our efforts on building sustainable, digital public infrastructure — infrastructure that is underpinned by incentives aligned with core democratic values and human rights. Without it, we risk replacing a bad business model with one that’s even worse.