Apple Vision Pro Is Here, But Are We Ready for It?

The technology, though innovative, has brought with it a bundle of new risks and challenges.

May 8, 2024
Alfonso Rivero (right) tries on an Apple Vision Pro headset at the company’s flagship store in New York, February 21, 2024. (Anthony Behar/Sipa USA via Reuters Connect)

Apple’s “next big thing,” the Apple Vision Pro, officially hit the market in February, following its initial reveal in June 2023. The device has captured attention for its advanced technology as well as for features that the company says distinguish it from existing products in the virtual reality (VR) headset market.

But the technology, though innovative, has brought with it a bundle of new risks and challenges.

The headset is Apple’s first so-called spatial computer. The company has been encouraging developers to refer to their applications for the device as spatial computing apps rather than as augmented, virtual or extended reality apps. The company’s intent, it seems, is to establish spatial computing as the default terminology within the Apple ecosystem.

Spatial computing is not a novel concept; it originated in the 1980s, with tech pioneers using the term during the 1990s. In 2003, an MIT researcher named Simon Greenwold formally defined the term as “human interaction with a machine in which the machine retains and manipulates referents to real objects and spaces.” Spatial computing is intended to merge the physical and virtual worlds, allowing new ways for people to interact with each other and machines.

The Vision Pro is equipped with high-resolution cameras, eye-tracking cameras, scanners, microphones and various sensors. It employs augmented reality (AR) to display digital objects and integrate them into our physical space, and VR to create immersive environments that transport us into entirely virtual spaces for interactive experiences. Additionally, it uses artificial intelligence (AI) to enhance these technologies.

Now, the challenges:

Devices such as the Vision Pro collect vast amounts of personal data, including biometric and biomechanical data, and can infer users’ emotional and physiological states. The depth and breadth of this data collection present unprecedented privacy concerns.

For example, a recent study of a popular VR game revealed that individual players could be uniquely identified by the movement patterns captured during play, with 94 percent identification accuracy from only 100 seconds of motion data and 73 percent accuracy from merely 10 seconds. The study showed that biomechanics, observed through brief head and hand motions, can be as reliable for identification as traditional biometrics such as facial and fingerprint recognition.

A key feature of the Vision Pro’s interface is eye tracking, which enables a new level of control and interactivity. Users can navigate the device’s interface simply by moving their eyes, closing a pop-up window or selecting a photo, for example. To select or “click,” a user taps two fingers together. Eye-tracking data on where and how long a user looks at specific screen areas can then provide developers and marketers with valuable insights into user preferences and behaviours. That data opens the possibility of advertisements customized according to the user’s gaze and focus areas within the ads, a process so invisible and arguably intrusive as to be almost akin to reading the user’s mind. Unlike other headsets on the market, the Vision Pro processes eye-tracking data locally on the device, ensuring it is not shared with apps, websites or even Apple’s servers. This significant privacy feature deserves credit where credit is due, and it can be a selling point.

While Apple typically promotes its technology as privacy-preserving, the Vision Pro introduces new privacy concerns. Another crucial feature of the Vision Pro, little discussed until now, is spatial mapping, which allows the device to map and understand real-world environments by recognizing objects and surfaces around the user. With the Vision Pro, developers can harness spatial mapping and scene understanding to create AR experiences that adapt to the user’s surroundings. Apple’s privacy overview clarifies that apps are not granted access to spatial data by default; they need the user’s permission. With that permission, apps can use the data to “understand” the user’s environment, processing it into a 3D model, identifying nearby objects and determining the precise location of specific items within the user’s surroundings.
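To make that permission gate concrete, here is a minimal sketch, based on Apple’s publicly documented visionOS ARKit interfaces, of how an app might request world-sensing access before it can receive any mesh of the user’s surroundings. The function name and log messages are illustrative assumptions, not code taken from Apple.

```swift
import ARKit

// Minimal sketch (visionOS). An app receives no spatial mesh of the user's
// surroundings until the user approves a world-sensing permission prompt, and
// the app's Info.plist is expected to declare NSWorldSensingUsageDescription
// explaining why the data is needed. Function name and log text are illustrative.
func startSpatialMapping() async {
    let session = ARKitSession()
    let sceneReconstruction = SceneReconstructionProvider()

    // Triggers the system permission prompt for world sensing (spatial data).
    let authorization = await session.requestAuthorization(for: [.worldSensing])
    guard authorization[.worldSensing] == .allowed else {
        print("World sensing denied: no scene mesh is delivered to the app.")
        return
    }

    do {
        // Scene reconstruction runs while the app presents an immersive space.
        try await session.run([sceneReconstruction])

        // With permission granted, the app streams mesh anchors describing nearby
        // surfaces (walls, furniture, objects) as the headset scans the room.
        for await update in sceneReconstruction.anchorUpdates {
            print("Scene mesh \(update.event): anchor \(update.anchor.id)")
        }
    } catch {
        print("Spatial mapping failed to start: \(error)")
    }
}
```

The design point is that the prompt is a one-time gate: once the user taps “Allow,” the stream of mesh anchors, and everything it reveals about the room, flows continuously for as long as the session runs.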

Here is where things start to get tricky.

Apps collecting spatial data can gather sensitive personal information, such as details about bystanders and insights into private areas such as the user’s living room or bedroom. Picture this scenario: You are in your living room, using the Vision Pro. Your partner is playing Xbox, your children are on their iPads and your baby is dozing peacefully in the crib. Your diabetes medicine is clearly visible on the shelf, a high-end coffee machine sits on the counter and a stack of fitness magazines lies on the coffee table. All these details — kids, baby, diabetes medicine, Xbox, iPad, high-end coffee machine, fitness magazines — create a detailed picture of your lifestyle. This data is personal, and therefore highly valuable to advertisers, who can use it to customize ads to your household’s specific interests and needs. And the headset must gather it all, with your permission, to create the “immersive” experience the device is designed to deliver.

This is not a far-fetched scenario but rather the reality of the technology. That is why clarifying what consent truly means for real-time processing and dynamic interactions is essential. And shifting the responsibility for data protection from users to providers also becomes critical. The key question centres on data collection and processing: What guidelines must developers follow? Who oversees this? It is unreasonable to place the burden of privacy solely on users.

Spatial computing has the potential to revolutionize our interactions with technology and shape the future of human-computer interaction. However, we consistently fall behind the rapid pace of technology in our understanding and regulatory efforts. The development of these headsets has relied thus far on the self-regulation of profit-driven companies such as Apple, Meta and others. There is a lack of oversight in establishing guardrails, whether it be for protecting privacy, safeguarding children or addressing misuse.

The upshot? If we’re to learn from our experiences with social media and, more recently, AI, there’s no time to waste. Society must establish effective laws and regulations that dictate what companies can and cannot do, regardless of consent. It is crucial that we deepen our understanding of and research into how spatial computing can impinge on privacy, trust and safety, and health, and, in particular, how it affects children. This work must happen now, before the technology becomes embedded in our culture, as others have before it.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Burcu Kilic is a CIGI senior fellow and a scholar, tech policy expert and digital rights advocate.