Putting Our Bodies Online: The Privacy Risks of Tech Wearables

Smart watches and fitness trackers may offer some health benefits to their wearers, but who really benefits from this monitoring?

August 11, 2021
A person measuring blood oxygen levels on an Apple Watch Series 6, Calgary, Alberta, September 2020. (oasisamuel/Shutterstock.com)

When Apple launched its latest watch two years ago, CEO Tim Cook was adamant about the life-changing benefits of the new device and expressed Apple’s intense interest in entering the health and fitness industry. In a series of sleek videos promoting the company’s latest products, one man described how the watch called 911 when he fell during a run, while a pregnant woman said the device’s capacity to measure her heart rate saved not only her life but her baby’s as well. The watch’s heart-rate sensor, Apple promises, can help users and doctors identify early signs of cardiovascular problems. “Apple Watch has powerful apps that make it the ultimate device for a healthy life,” the company’s website claims. In an interview with the Outside podcast, Cook said he wants to “democratize health science” by enabling millions of Apple’s customers to anonymously share their data with researchers.

This belief that technology will save us is a repeated refrain whenever a new tool appears. After all, the internet and social media were supposed to bring more democracy to the world. But in 2021, it’s not at all obvious that this belief is justified.

Wearables, also known as digital human-technology interfaces, are information technology devices worn by users that enable continuous collection, recording, transfer, analysis and, possibly, sharing of data. These devices come in many forms, everything from smart watches and other jewellery, such as rings and necklaces, to body sensors, eyeglasses, backpacks, and even clothing that can track productivity, gestures, health, sounds, speech and more.

Wearables can be used in different environments and for a growing array of purposes. According to the Surveillance Studies Centre at Queen’s University, 425 wearables are available today, with fitness trackers and smart watches using body sensors leading the market. Levi Strauss has used Google’s Jacquard technology to produce a jean jacket made of conductive fabric, which allows wearers to answer calls and play music merely by touching the sleeve. The Ōura Ring tracks activities, heart rate and sleep patterns.

There are, of course, advantages to wearable technologies for those who wear them. Health trackers and fitness apps may indeed help users lead healthier lives. The Apple Watch can track sleep patterns and menstrual cycles and detect abnormally high or low heart rates. The Apple Watch and sensory wristbands also provide therapeutic help through, for example, meditation features and breathing recommendations.

But these devices, worn on our own persons, also enable the collection of data that can be scrutinized by others beyond ourselves. In the health care space, wearables have been used for medical monitoring in post-operative settings, although with limited results. In Australia, mining companies, among others, are testing a brainwave-monitoring technology known as SmartCap to monitor their employees’ fatigue, so as to improve their safety and productivity. These applications can, undoubtedly, be useful. All the same, it’s worth asking: in the age of Amazon, a company that already, controversially, tracks the activity of its employees, who really benefits from this monitoring? This question is particularly germane when one considers that, as with the SmartCap adopters, “each company creates their own rules around how to use [these technologies].”

Health trackers and fitness apps may indeed help users lead healthier lives…. But these devices, worn on our own persons, also enable the collection of data that can be scrutinized by others beyond ourselves.

The COVID-19 pandemic has only accelerated the use of digital human-tech interfaces — and the necessity to weigh their benefits against their risks. Over the past year and a half, governments and authorities, including in the United States, have considered, and in some cases have already used, wearables to track and prevent the spread of the virus. South Korea used a smartphone app to monitor its citizens’ self-quarantining and social distancing but imposed wristbands on those who violated quarantine. Bracelets were also used to monitor lockdowns, social distancing or quarantining in Hong Kong, Bulgaria, Liechtenstein and Belgium. According to Top10VPN.com’s COVID-19 Digital Rights Tracker, 120 contact-tracing apps are currently available in 71 countries; its regional analysis shows that Asia has the most, with 36 contact-tracing apps as well as 21 apps for digital tracking. Meanwhile, the United States has 23 contact-tracing apps, more than any other country in the world. Some of the wearables using these applications monitor complex biological data, including body temperature, to detect infections before symptoms are felt.

Tech companies and scientists have seized this moment to test and market new applications for wearables. Researchers are currently studying how physiological data collected by the Ōura Ring could be used to predict symptoms of COVID-19. Stanford Medicine researchers are collaborating with Fitbit and Scripps Research to collect data from five different brands of wearable interfaces and to train algorithms to predict and indicate when an immune system has started fighting an infection. “This tool may end up being a plus for both diagnosis and for prognosis,” Dr. Michael Snyder, whose team is conducting the study, predicts. Finally, Apple and Google collaborated early on in the pandemic to “enable the use of Bluetooth technology to help governments and health agencies reduce the spread of the virus.”

In times of crises, the use of tech interfaces can benefit the general public. And, during these crises, citizens may be more willing to forfeit essential rights. In her seminal book The Age of Surveillance Capitalism, Shoshana Zuboff argued that surveillance capitalism emerged in the wake of September 11, 2001, as authorities gathered as much information as possible in order to protect their citizens. In the context of the COVID-19 pandemic, Apple and Google, for example, claim that “there has never been a more important moment to work together to solve one of the world’s most pressing problems.” In Germany, the Academy of Sciences Leopoldina suggested that data protection legislation should be “reevaluated and, if necessary, adjusted in the short term.” But this normalization of body monitoring and tracking should not be taken lightly. How do we balance public safety and privacy in times of such national security risks?

The core difficulty with tech wearables, when it comes to privacy and human rights, is the amount and nature of the data that users surrender to their device — and therefore to the company that makes and operates it. In The Age of Surveillance Capitalism, Zuboff writes that “every time we encounter a digital interface we make our experience available to ‘datafication,’ thus ‘rendering unto surveillance capitalism’ its continuous tithe of raw-material supplies.”

Similarly, American legal scholar Julie E. Cohen uses the term “surveillance-innovation complex” to describe how our bodies have become “a source of presumptively raw materials that are there for the taking.” Cohen believes the perpetual sharing of information normalizes “a distinctly Western, democratic type of surveillance society, in which surveillance is conceptualised first and foremost as a matter of efficiency and convenience.” With the advent of mass market tech wearables for consumers, we see surveillance capitalism approaching a new scale. First, because the consumer-tech interface can be worn constantly and is highly invasive. Second, because the data gathered is extremely intimate, especially when it comes to personal health.

In times of crises, the use of tech interfaces can benefit the general public…. But this normalization of body monitoring and tracking should not be taken lightly. How do we balance public safety and privacy in times of such national security risks?

As researchers at the Edmond J. Safra Center for Ethics have found, there are long-term implications to the “surveillance of the human body by governments, private companies, employers and other entities who have a stake in our data.” This is particularly true for devices developed by private tech companies interested in selling devices and collecting data for commercial purposes. But to repeat: Who benefits? How will the data be stored and used, and by whom?

The list of unanswered questions is long and growing — and the opacity benefits private tech companies. Yet the track record of Apple, Google, Facebook and others shows that they are not competent at protecting privacy and hide behind opaque terms of service. A study by the Citizen Lab and Open Effect also showed that “basic technical safeguards have sometimes been improperly established” and that health apps collect consumers’ private data without express consent and send it to advertising companies and other third parties. The 2015 data-sharing arrangement between Google DeepMind and the Royal Free London NHS Trust in the United Kingdom is one such case, in which patients’ medical data was handed over without proper consent. This example is particularly frightening since the project was a collaboration between a big tech company and the UK National Health Service.

It is increasingly obvious that more and better guardrails are urgently needed. Tech companies that sell fitness and health devices or apps are not subject to the same level of oversight and privacy laws as companies that sell medical devices — and they should be. Similarly, in a clinical context, our data is heavily protected, which is not the case for the health and behavioural data shared through a phone app or wearable device.

Certainly, these companies are subject to the rules and regulations established by the Federal Communications Commission and the Federal Trade Commission in the United States (and, additionally, to the California Consumer Privacy Act in California), and to the General Data Protection Regulation in the European Union. But the current legal framework and ethical oversight are not sufficient, as current debates about antitrust and tech regulation show. In examining Fitbit’s Terms of Service, one can barely understand how and in what context the company protects or shares private information. An owner of the device could be forgiven for being unable to make an informed decision. As Zuboff noted in The Age of Surveillance Capitalism, “under the regime of surveillance capitalism, individuals do not render their experience out of choice or obligation but rather out of ignorance and the dictatorship of no alternatives.”

The recent NSO Group scandal, in which spyware was sold to authoritarian governments for the purposes of hacking journalists’ and activists’ phones, has shown that technologies and software originally intended to prevent terrorism can be transformed into tools of intrusion, cyberstalking and repression. Crises such as the COVID-19 pandemic show that governments are willing to work with tech companies perhaps without thinking about some of the long-term impacts for erosion of individual rights, including privacy and human agency.

Finally, what does it say about our society that humans are increasingly relying on technologies to change their behaviours and get healthier? When Zuboff writes that the rise of wearables and health apps is a testament to the failure of health care systems to serve the needs of individuals, she is not mistaken.

Indeed, when a study in the United Kingdom claimed that the Apple Watch could add up to two years to a wearer’s life, the British government claimed it would help the country’s National Health Service. Large stakeholders such as health plans and employers are looking for ways to cut costs by keeping populations healthy. Apple even markets its watch to children, claiming that the device can encourage them to get fitter, including by rewarding them for moving more.

Do we truly want big tech to be our doctors? Do we want to cede to these companies the responsibility of making our children healthier? Should we accept that, to make this possible, these companies collect our children’s data?

Technology can dramatically improve human life. But the notion that it will save us is a myth. Human agency and responsibility are absent from these systems. They should, instead, be at their very heart.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Marie Lamensch is the project coordinator at the Montreal Institute for Genocide and Human Rights Studies at Concordia University.