Reuters/Andrew Kelly

This article is a part of Global Cooperation after COVID-19, an opinion series offering analysis of the post-pandemic world.

When the retrospectives of 2020 are written, one of the year’s most striking features will be the way the COVID-19 pandemic unfolded — as Mike Campbell, a character in Hemingway’s The Sun Also Rises, described his descent into bankruptcy — “gradually and then suddenly.” Early on, there was a belief that the end of the pandemic would follow a parallel, if opposite, trajectory. “If we choose to fight hard,” said Tomas Pueyo in his March 2020 essay “The Hammer and the Dance,” “the fight will be sudden, then gradual.” Nearly four months and well over 500,000 deaths later, Pueyo’s hopes seem faintly quixotic.

As with many world-historical events, the pandemic tempts us to imagine our lives afterward — a vaccine, a parade, a cozy dinner with friends — but our current reality makes that future far less certain.

In some places in the United States, for example, there has been flagrant disregard for “the hammer” and outrage at “the dance”: people protesting on behalf of haircuts and against masks, flouting social-distancing rules, and at least one US governor going so far as to refuse outright to reinstate restrictions. As a result, we are seeing a record number of cases, to the extent that the European Union has instituted a travel ban for Americans.

COVID-19’s impact isn’t just physical; it’s societal. Just as our bodies have no intrinsic immunity to the virus, neither do our health, technology or governmental systems. While the medical community focuses on research, prevention and treatment, we have to build capacity across multiple domains simultaneously.

Rethinking the Role of Technology

The first logical impulse is to search for and adapt or build technology tools to aid in preventing, detecting and monitoring spread and supporting safe, sanitary and resilient homes and workplaces. And we need to do this while combating the infodemic that threatens to, among other things, undermine vaccine adoption and compromise herd immunity. These goals are essential to public health, and the technology industry is feverishly working on tools to address them.

But we have seen what happens when we build tools without regard for their impact on vulnerable and marginalized people. We cannot do this again.

As we look to technology to help address COVID-19, we must not repeat or intensify past injustices. Even as the world grapples with a global health crisis, we’re reckoning, particularly in the United States, with a long and shameful history of racial injustice. While people around the world take to the streets to protest the deaths of George Floyd, Breonna Taylor, Tony McDade, Ahmaud Arbery and too many others, the devastating impact of white supremacy suffuses our educational, economic, media and health care systems. We see it reflected in the disparate impact of COVID-19 on racial and ethnic minority groups, who, in the United States, are hospitalized at four to five times and die from COVID-19 at up to two-and-a-half times the rate of their white counterparts.

As Safiya Noble, the author of Algorithms of Oppression: How Search Engines Reinforce Racism, argues, “We need new paradigms, not more new tech.” What might those new paradigms be? And where should policies focus? A first step would be to interrogate our assumptions, our methodologies and their impacts.

Refusing to Exclude

In his essay, “We Need to Talk About Digital Contact Tracing,” Ali Alkhatib, a researcher at the University of San Francisco’s Center for Applied Data Ethics, warns of the “harmful downstream effects of simplifying and reducing how we track and measure our messy world,” referring specifically to the digital contact-tracing apps proposed by Apple and Google. One of the dangers stems from models that ignore or exclude human complexities that are difficult to identify or account for. In the case of digital contact tracing, this means “the poor, children, and myriad other uncounted groups.”

Why? Because such apps tend to run on new, expensive smartphones equipped with the latest operating systems, proximity-detection capabilities and so forth. Who is unlikely to have those phones? The poor, children, and myriad other uncounted groups. To model contact tracing effectively, therefore, we need to account for people who have older smartphones without proximity detection or feature phones that don’t run iOS or Android, as well as for people who don’t have phones at all.

Excluding these groups is a danger for everyone, because the model misses precisely those groups who tend to be most at risk. That leaves us, as Alkhatib points out, with “a dangerously misleading picture.”

It should go without saying, but here it is: we must refuse to exclude people simply because they are inconvenient for our models.

Protecting Data and Digital Rights

We also need to look, deeply and persistently, at how we collect, store and use data. We are seeing an increasing move toward designing built environments that require temperature or other tests as a precondition of entry, that are contactless, easily metered (to control population density), easily configurable and sanitized, and with adequate airflow and filtration.

These spaces will require the use of sensors that detect body temperature, verify identity via face recognition or other biometric data, and track location, among other things. But this technology raises a host of data and digital rights issues.

It is one thing to opt out of ad-tracking software and suffer glacial page-load times. It’s another entirely to be denied entry to work or elsewhere because you refuse to consent to the use of your face, fingerprints, blood, gait or iris. And, if you do consent, the questions continue to pile up: Are there less invasive but equally effective ways to do this? Who will collect and oversee this data? What track record do they have with regard to data security? (Arguably, a hospital would be safer than, say, a local convenience store.) Who will have access to the data and under what conditions? Will it be stored and, if so, for how long? What else could be done with it? What happens if it’s breached?

These are far from trivial concerns; right now, if there’s a data breach, it’s stressful and inconvenient to close your bank account and get a new credit card. But what will you do when — not if — your most intimate and irreplaceable data is compromised?

Even if there were such a thing as perfect policies and governance structures, we still have to acknowledge that as data collection becomes increasingly ambient and moves from our environments and devices to our actual, physical selves, we are creating an unprecedented era of surveillance. The profound implications of facial recognition have already given us a preview of how this issue might play out in the future, with troubling consequences.

While data collection is frequently useful and necessary, we have to stop viewing it as the only possible solution to every problem. We have to consider the impact of the tools we’re building and the precedents we’re setting.

Imagining the Future

The past few months offer us a lens through which to imagine the future and, perhaps, influence it for the better. Here are some scenarios to anticipate during the next one to two years.

A vaccine will help, but we will never return to our pre-pandemic lives. Our choices — whether we accept scientific evidence and pull together for the common good or reject and politicize the evidence — will determine which people and countries thrive, and which suffer needlessly.

The extent to which we include and account for the most vulnerable among us will determine not only our success in fighting this and subsequent pandemics, but also our resilience as a society.

Our level of willingness to think expansively and fearlessly will determine whether we usher in an even more pervasive and intimate level of surveillance or put human beings at the centre of possible solutions and address root causes as well as symptoms.

If we reckon with our history, if we question our assumptions and motivations, if we refuse to leave people behind, if we do the hard work now, imagine how much stronger and healthier we’ll be — not only individually, but collectively.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.
  • Susan Etlinger is a senior fellow at CIGI and an expert on artificial intelligence and big data.