Control Creep: When the Data Always Travels, So Do the Harms

April 12, 2021
Illustration by Jeannie Phan

In 2014, a Canadian firm made history. Calgary-based McLeod Law brought the first known case in which Fitbit data would be used to support a legal claim. The device’s loyalty was clear: it belonged to the firm’s client, a young woman whose personal injury claim would be supported by her own Fitbit data, helping to prove that her activity levels had dipped post-injury. Yet the case opened up a wider horizon for data use, both for and against the owners of such devices. Leading artificial intelligence (AI) researcher Kate Crawford noted at the time that the machines we use for “self-tracking” may be opening up a “new age of quantified self incrimination.”

Subsequent cases have demonstrated some of those possibilities. In 2015, a Connecticut man reported that his wife had been murdered by a masked intruder. Based partly on the victim’s Fitbit data, along with data from other devices such as the family’s house alarm, detectives charged the man — not a masked intruder — with the crime. In 2016, a Pennsylvania woman claimed she was sexually assaulted, but police argued that her own Fitbit data suggested otherwise, and charged her with false reporting. In the courts and elsewhere, data initially gathered for self-tracking is increasingly being used to contradict or overrule the self — despite academic research and even a class action lawsuit alleging high rates of error in Fitbit data.

The data always travels, creating new possibilities for judging and predicting human lives. We might call it control creep: data-driven technologies tend to be pitched for a particular context and purpose, but quickly expand into new forms of control. Although we often think about data use in terms of trade-offs or bargains, such frameworks can be deeply misleading. What does it mean to “trade” personal data for the convenience of, say, an Amazon Echo, when the other side of that trade is constantly arranging new ways to sell and use that data in ways we cannot anticipate? As technology scholars Jake Goldenfein, Ben Green and Salomé Viljoen argue, the familiar trade-off of “privacy vs. X” rarely results in full respect for both values but instead tends to normalize a further stripping of privacy.


Fundamentally, framing data use as a trade-off directly contradicts the nature of big data and AI as a business. On the one hand, the public is asked to choose and judge technological systems by reading what’s on the tin. On the other hand, the very business of data collection and analytics is predicated on constantly recombining available data into new calculations and “insights.” The nature of the business provides a strong incentive to always choose to collect data rather than not, and to remain relatively indifferent to the meaning that data had in its original, lived context. As Wendy Hui Kyong Chun put it in her book Updating to Remain the Same, our networks are promiscuous by design.

This promiscuity is the result of both technical and economic decisions. A machine-learning-based system for facial recognition typically does not begin with a priori definitions of what a given kind of data “means.” The analytical techniques it develops can be quickly deployed into different situations — from police archives of suspects, to job interviews or university exams — in ways that are rarely anticipated or disclosed up front. These tendencies for control creep are especially strong in an entrepreneurial economy that is emblematized by the hunt for “unicorn” successes, where Silicon Valley’s venture capitalists pour massive funds into early-stage start-ups in the hopes of backing the next Amazon or Uber. In the search for a scalable business model, companies often “pivot” to deploy the same technology in wildly different ways. Athena Security, a Texas company focused on gun detection technology, infamously pivoted in early 2020 to market “AI thermal cameras” for detecting COVID-19. Experts subsequently pointed out that most of Athena’s marketing campaigns appeared to be using fabricated video footage and customer testimonials.

Sometimes, it is precisely the user data collected through one product that is directly exploited for new and different applications. The South Korean developer Scatter Lab caused a national scandal early this year when it was discovered that its flirty female chatbot, Iruda (Lee Luda), was trained on real users’ romantic conversations that had been originally gathered for its separate relationship “analysis” app. Indeed, the data was so poorly anonymized that anyone could extract those users’ addresses and nicknames from the chatbot. As historian of data Orit Halpern notes in her book Beautiful Data, big data analytics often has “no clearly defined endpoints or values.” It is precisely this capability for new and unexpected use cases that renders big data so attractive for governments and businesses — and pernicious for the rest of us.

Control Creep in Action

One major frontier in this creeping expansion of data-driven surveillance is the rise of self-tracking via wearables and other “smart” technologies. Where early iterations were often marketed as highly individualized tools for knowing ourselves better through data, these technologies are now increasingly being leveraged by employers and insurers.

This brings us back to Fitbit. Although the fitness wristband remains a poster child for the promise of ourselves tracking ourselves, the company has long courted corporate clients. In my book, Technologies of Speculation, I examine how life insurance providers such as John Hancock and Sovereign have been experimenting with subsidizing Fitbits for their customers, offering gamified rewards to those who share their exercise and movement data. Similarly, Cathy O’Neil notes in her book Weapons of Math Destruction that car insurance companies such as Progressive and State Farm have been offering drivers discounted rates for exposing data logged in their cars’ telematics units. By 2018, all John Hancock products had become “interactive,” integrating health-tracking data into its insurance offerings. To be sure, individual fitness data is not directly used to recalculate John Hancock insurance premiums, yet that is exactly the horizon of potential use that motivates such business partnerships.

New sources of data are often cited as a path to more unbiased and objective decision making. In practice, they are often absorbed into existing power asymmetries — something that is very clear in the genre of workplace surveillance. Fitbit and other exercise-tracking devices are being taken up by companies such as BP America. We also find bespoke tools like WorkSmart, a self-branded “Fitbit for work” that records employees’ keystrokes and photographs them every ten minutes. WorkSmart suggests that such surveillance can help workers “collaborate” with their managers. In practice, a recent Verge report found that the company behind WorkSmart used the tool on its own employees, refusing to pay them for any ten-minute block in which they were deemed unproductive. Such systems are often introduced as voluntary, wellness-focused initiatives, only to become tools for extrapolating productivity and other value judgments from movement data.

COVID-19 has also provided new opportunities for employers to normalize newly invasive practices. Some measures are quickly cobbled together, such as the decree by some employers that their staff work with their webcams on at all times; other technologies are designed to last well into any post-pandemic future. The consulting giant PricewaterhouseCoopers now sells Check-in, a set of worker monitoring apps that is advertised primarily as a tool for COVID contact tracing. Yet the same technical capabilities, such as GPS-based location tracking, open up many other uses of fine-grained employee data.

In these scenarios, workers’ bodies are embedded with an array of tracking machines that report to another master. In 2018, we learned that Amazon patented customized wristbands for its warehouse workers — ostensibly to help them locate the right inventory bin as they handle items for shipping. This mundane suggestion, however, must be placed in the context of a workplace that is one of the most profitable systems of our times for mechanizing human labour. Amazon warehouses are notorious for conditions including extreme heat, abrupt terminations, high injury rates and insecure “zero-hour” contracts. The human impact of these technologies often has less to do with what exactly the machines can and cannot do, and far more to do with the pre-existing interests that solicit and fund these technological “solutions” — in this case, the ever-intensifying demand that human beings live and work in ways that are most compatible with the machines around them. Indeed, one Amazon warehouse in Britain had already introduced similar wearables in 2016. Phoebe Moore and colleagues write in their book Humans and Machines at Work that although the official purpose was to “help” employees, it quickly turned out that the devices were being used to track individual productivity and then to pick out workers for termination.

And what about Fitbit? The company was bought out by Google for a handsome US$2.1 billion in 2019. Such consolidations are common in an increasingly centralized digital economy, and they amplify control creep by creating massive ecosystems for data recombination. While Google claimed that the acquisition was for “devices, not data,” it would be extremely surprising if this were true. The deal was confirmed early this year after lengthy scrutiny by regulators, opening up new ways for Google to get into consumers’ homes and onto their skin.

This is not exactly the trade-off that users agreed to. From Fitbit to Amazon Ring, smart tracking technologies initially took off through the promise that we can own our data and use it to understand ourselves better. In 2007, Wired veterans Gary Wolf and Kevin Kelly founded the Quantified Self movement, an informal community of enthusiasts: people who wanted to use the newest technologies to know themselves better. Kelly, for decades a central figure in Silicon Valley’s imagination of the technological future, envisioned a harmonious partnership of data-driven surveillance and empowerment: “Unless something can be measured, it cannot be improved. So we are on a quest to collect as many personal tools that will assist us in quantifiable measurement of ourselves.”

Many of these early Quantified Selfers were tech-savvy, and found the time and space for self-experimentation — for instance, by building customized tracking tools tailored to their own chronic health conditions. Most knew how to look under the hood and tinker with the data, making judgments as to what the numbers might mean. Self-tracking was envisioned as a way to truly claim ownership of personal data. This remains a powerful sentiment in the present “techlash.” The entrepreneur and 2020 presidential candidate Andrew Yang promised to govern data as a “property right,” and last year followed up with the Data Dividend Project, which would pay ordinary individuals for being tracked and surveilled.


The trouble with thinking about data as personal property, however, is that what our data means for us has little to do with what it can be made to mean for others. Being paid for our data will not empower us if that data is still being recombined into unappealable judgments by cops or bosses. The rapid growth of self-tracking into a global industry has meant an increasing emphasis on mass-produced devices. As in other areas of our digital economy, the focus has been on scale and ease of use rather than full disclosure and customizability, and many companies have tended to take privacy less than seriously. Fitbit’s pivot toward institutional clients and its subsequent buyout by Google exemplify these incentives for leaky data. Control creep is unlikely to be deterred by any payout that effectively functions as a minor tax on the still-profitable business of data extraction — not in the absence of more fundamental change in the underlying incentives to centralize and recombine data.

The other problem is that these narratives around trade-offs and property rights rely on a certain imagination of the user. Fitbit’s marketing materials speak to a familiar kind of wellness consumer: upwardly mobile, younger professionals, tech-savvy and educated, making their own decisions about the role of technology in their yoga practice or running routine. Many of these conditions were also present, or at least presumed, in early Quantified Self communities. But this kind of imagined user commands a level of agency, informed consent and control that simply is not available for most of us, most of the time.

Consider Amazon Ring, whose extensive data-sharing partnerships with US police departments provide an explicit example of how data collection first sold as voluntary self-tracking creeps into other domains. Although people technically choose to install Ring on their own property, the cameras watch neighbouring children or postal workers as much as they watch the homeowner. Ring’s video data is compromised by shoddy security practices, but the company insists that it will keep the data indefinitely and share it widely. The Electronic Frontier Foundation recently showed that the Los Angeles Police Department had asked individual Ring owners for footage of Black Lives Matter protesters — drawing a direct line between police brutality, corporate profit and our data. These instances make clear the limitations of traditional frameworks around individual choice or consent, and the consistency with which tech corporations successfully creep beyond the original (or reported) scope of data-driven systems.

A Good Worker Is a Machine-Readable Worker

What does all this mean in the longer term? As smart tech and “self-tracking” devices become more tightly integrated into the economic and management logic of data-driven surveillance, they transform the “gaze” of the data. Individualized measurements of our fitness or productivity are initially construed as augmenting the gaze of the self: a noble quest to understand and love ourselves better. Yet, as the data travels, this purpose is overtaken by the gaze of the employer, the immigration official, the police officer — until the same numbers are made to speak for the interests of others. While each specific implementation might plausibly claim that it solves a particular management problem or resolves existing bias, in the long run, this creates an ever-growing list of labours that users must undertake to ensure that they are being measured and judged in reasonable ways, and to fight an uphill battle against any errors and injustices that happen along the way.

When our data is stripped of context and hawked to every bidder, it’s not just the data that travels. It’s us. We are the ones called to performance review at work, where Fitbit-sourced data about bodily health appears not on our side of the conversation, but on the employer’s. It might mean that our boss is calling us after surgery, exclaiming: “Man! I noticed your steps have picked up. You used to be under 2,000, now you’re over 6,000,” because he has been observing our movement data through the company’s health insurance program. We are the ones lining up to get scanned when facial recognition systems are taken up by hiring algorithms that promise to use AI to predict desirable employees. In these emerging scenarios, what is at stake is not data as some intangible good, or some vague sense of creepiness, but concrete, far-reaching decisions that shape our professional and personal lives.

When our data is used to empower the gaze of others, this also affects how we see ourselves. Our idea of what a “good” worker looks like — which, in many cases, overlaps with our sense of a good person — is not formed in isolation, but by absorbing the measurements and standards all around us. Consider one historical example of a technology whose way of seeing is now fundamentally ingrained in our lives: clocks. Although various timekeeping technologies have existed throughout Western history, it was only in fourteenth-century Italy that clocks striking by the hour became common — first as public installations, then in homes, and by the sixteenth and seventeenth centuries, as portable and wearable pieces. And it took still longer to establish the modern sense of clock-time as a standard measure underpinning one of capitalism’s most fundamental units for the estimation of value: the hourly wage.

In his essay “Time, Work-Discipline, and Industrial Capitalism,” the renowned British historian E. P. Thompson describes how adopting the hourly wage often meant creating and justifying new standards around what it meant to be a “good” worker. Tying labour value to hourly output meant a new demand for the kind of human subject that could work reliably — rain or shine, sick or hale — as regular as the clocks themselves. Workers who couldn’t or wouldn’t adapt to these new standards were condemned not simply as incompetent but, by extension, as immoral. Thompson describes how Mexican mine workers in the early nineteenth century were routinely criticized as “indolent and child-like.” Rather than conforming to the expectations of the daily wage, many were used to working on more elastic schedules, perhaps working only for as long as they needed the money, or taking time off to return home at harvest time. Yet, as Thompson recounts, the critics of the day observed that when the workers were provided with a definition of productivity that they could recognize, they proved more than up to the task: “Given a contract and the assurance that he will get so much money for each ton he mines, and that it doesn’t matter how long he takes doing it, or how often he sits down to contemplate life, he will work with a vigour which is remarkable.”

How we measure changes not only what is being measured but also the moral scaffolding that compels us to live toward those standards. Innovations like assembly-line factories would further extend this demand that human beings work at the same relentlessly monotonous rate as a machine, as immortalized in Charlie Chaplin’s film Modern Times. Today, the control creep of self-tracking technologies into workplaces and institutions follows a similar path. In a “smart” or “AI-driven” workplace, the productive worker is someone who emits the desired kind of data — and does so in an inhumanly consistent way.

In Design, Control, Predict, technology studies scholar Aaron Shapiro argues that smart technologies extend what political philosopher Julian Reid called the logistical life: “life lived under the duress of the command to be efficient … to use time economically, to be able to move when and where one is told to.” Yet, there is an essential contradiction. Even as we are asked to redefine our sense of the good in accordance with new systems of measurement, those very systems reserve the right to constantly modify their standards and practices. What the company-subsidized wristband means for my working life and my labour rights remains vulnerable to indefinite expansion and change.

The last few years have seen increased legal and policy attention to this problem, such as the General Data Protection Regulation’s restrictions on data use beyond its original purpose. But we remain far from what should be a minimal baseline of accountability. Scholars like Frank Pasquale have accordingly called for a “second wave of algorithmic accountability,” one that extends beyond criticism of existing systems in order to map and address structural problems around who does what with our data.

As data-driven systems freely expand across social domains, they are increasingly locking us into existing asymmetries of power and information. For some, datafication might seem an empowering choice, a sovereign and individual decision to walk boldly toward a post-human future. For many of us, to appear correctly in databases is the unhappy obligation on which our lives depend.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Sun-ha Hong is an assistant professor of communication at Simon Fraser University.