The Time to Prevent a Toxic Metaverse Is Now

Many harms, from location-based stalking to identity theft, already occur over virtual reality and augmented reality channels.

February 15, 2023
Meta CEO Mark Zuckerberg unveiled his firm’s new virtual reality headset, the Meta Quest Pro, on October 11, 2022, at the company’s Meta Connect 2022 event. (Meta/Cover Images via Reuters Connect)

Famed science fiction author Neal Stephenson sat down for a chat recently with Meta’s Chief Product Officer Chris Cox during the World Economic Forum in Davos. And although the event often hosts unexpected speakers, you’d be forgiven for wondering what brought these strange bedfellows together. The answer, however, is quite simple: the metaverse. Stephenson coined the term in his 1992 novel Snow Crash. Cox, meanwhile, is leading efforts to see Stephenson’s fictional concept become a reality via Meta’s virtual reality (VR) world of the same name.

The pair’s conversation demonstrates the odd power of hyperstition: “the experimental (techno‑)science of self-fulfilling prophecies.” Science fiction has long shaped technological innovation. The writing of H. G. Wells inspired inventions including the laser and email, and the 1911 young-adult novel Tom Swift and His Electric Rifle helped spur the invention of the taser (the name of which is a near acronym of the book’s title).

It’s likely that Meta would rather some aspects of Stephenson’s early vision of the metaverse be left out of its much-hyped new product, however. His metaverse exists in a fictionalized anarcho-capitalist world in which Washington, DC, has lost most of its political power, inflation has skyrocketed, and people have turned to the virtual world not only for escape but also as a core mode of communication. Crime happens there; so does violence.

Mark Zuckerberg’s growing (if somewhat derided) digital world cannot escape certain realities, however. In the 2020 book The Reality Game, one of us wrote about how the various nascent information technologies of extended reality (XR) — including VR and AR (augmented reality) — were already being used for the purposes of manipulation and harassment. Brittan Heller, a senior fellow at the Atlantic Council’s Digital Forensic Research Lab (DFRLab), has written and spoken extensively about the communicative and societal troubles facing XR, writ large, and the metaverse, specifically. In a panel last year, she gave the following comprehensive definition of the metaverse:

The metaverse is a pervasive social-computing-based platform designed to replace the functionality of your cell phone. It will be constantly on. You will not be able to turn it off. And so if you think about the type of interfaces that are coming out, they’re similar to Apple watches or smart glasses, with the capability to have you take pictures without hands, to use your voice to control calls, to post things to social media with just a touch. This is what the metaverse is going to be. It’s going to be the next generation of hardware that we use to access online spaces.

Heller and fellow panellists pondered what would happen when the toxicity of social media entered this immersive digital world — noting that many harms, from location-based stalking to identity theft, already occur over VR and AR channels. A similar list of problems already playing out over XR spaces in 2020, from disinformation to coordinated trolling, is discussed in The Reality Game.

It’s easy to ridicule Meta’s XR world, to deride it as a flop. A recent Nasdaq article went so far as to call it a joke. But Meta, along with the rest of big tech’s most elite firms (Apple, Microsoft, Amazon and Google), has sunk countless billions into seeing XR become a reality. There has been marked discussion over the past few years about Apple’s desire to replace the iPhone with some form of AR/VR headset in the next decade. To what extent will technology companies’ gargantuan investments in these media technologies force them — and global society — into a future where XR is the communication tool of choice?

Our team, the Propaganda Research Lab at the University of Texas at Austin, focuses on the study of information manipulation over emerging technologies. For the past four years we’ve been looking into the ways in which seemingly fringe spaces online are actually powerful mechanisms for simultaneously moulding public opinion and sowing division. Once-peripheral chat apps such as Telegram and Viber have burst into the zeitgeist due to their use during the US Capitol insurrection and the ongoing war in Ukraine. Alternative social media sites such as Gab and Rumble, near clones of mainstream platforms like Twitter and YouTube, respectively, have become crucial organizing and incubation spaces for extremists and known hate groups who eventually work to covertly spread their ideas across the internet.

So, as Meta’s metaverse and its competitors expand, how will their platforms be leveraged for propaganda, coordinated harassment, digital disenfranchisement and other informational harms? Will the makers of these technologies — and the policy makers set to govern them — learn from the far-reaching failures made in not adequately regulating social media?

Sadly, at present, things look grim. Lawmakers in the United States and other leading democracies are so gridlocked and polarized that they cannot even agree on procedural issues or seemingly common causes — let alone begin to tackle the breadth of problems online. The idea that they might effectively and preemptively tackle the very real, and quite complex, issues posed by the metaverse seems laughable. Meanwhile, Meta Chief Technology Officer Andrew Bosworth has said that content moderation in the metaverse “at any meaningful scale is practically impossible.”

Where does this leave society? Democracy? We know that online tools have become critical for political communication and social life. Given big tech’s movement toward XR, systems like the metaverse are likely to become similarly important. Groups such as Common Sense Media have already laid out the dangers that the metaverse poses for our most vulnerable — for children. The time to act is now, not once propaganda and harassment become an inextricable part of life on XR.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Authors

Samuel Woolley is an assistant professor in the School of Journalism and program director for computational propaganda research at the Center for Media Engagement, both at the University of Texas at Austin.

Inga Trauthig is the head of research of the Propaganda Research Lab at the Center for Media Engagement at The University of Texas at Austin.