Tech Ethics: A Disappointing Year in Review

Surveillance concerns, labour conditions in the tech sector and promises of ethics advisory boards were much discussed in 2019. But, as Daniel Munro writes, meaningful ethics reform was limited.

December 19, 2019
Commuters walk by surveillance cameras installed at a walkway between two subway stations in Beijing. (AP Photo/Andy Wong, File)

This should have been a better year for tech ethics. In 2018, we saw rising public criticism of the big tech firms in light of scandals and questionable practices, leading some to label it the year of the “techlash.” One might have expected 2019 to be a year when firms took the concerns seriously and genuinely improved governance and public confidence. Instead, 2019 has been a year of “frenetic standstill” — a term coined by German sociologist Hartmut Rosa to describe the sense that “nothing remains the way it is while at the same time nothing essentially changes.” From developments in facial recognition use and governance, to greater awareness of labour conditions, to the unfulfilled promises of ethics advisory boards, the tech ethics landscape experienced a great deal of activity, but meaningful reform and progress were limited.

Facial Recognition

The development, use and criticism of facial recognition were much discussed in 2019. Growing awareness of the dangers of ubiquitous facial recognition technologies in public and private locations prompted citizens — and even some local governments — to take action to curtail their use. A handful of US municipalities, including San Francisco, Somerville and Oakland, passed ordinances to restrict the use of facial recognition, prompting other jurisdictions in the United States and Canada to consider similar policies.

Despite these efforts, surveillance technologies spread at an alarming rate in 2019. China accelerated its use of facial recognition to monitor citizens and to support its ongoing oppression of ethnic Uyghurs in the Xinjiang region. In the United Kingdom, a pervasive system of closed-circuit television cameras is being linked to facial recognition technology to identify people on watchlists and to identify behaviour that might constitute a threat (as defined by the private and public sector organizations controlling the systems). Meanwhile, individual consumers have been installing their own home surveillance systems, such as Amazon’s Ring, a “smart” doorbell system. Instead of enjoying improved safety and security, however, users are finding that these “smart” systems are vulnerable to hackers and that some police departments can access Ring camera footage from residents without first receiving warrants.

Labour Conditions and Relations

If greater public awareness of problematic work environments is a helpful step toward improving labour conditions and relations, 2019 witnessed some progress. But if progress requires actual changes to policy and practice, then 2019 was a disappointment.

This year, we learned about the experiences of many Facebook content moderators who work in low-wage, precarious positions to identify and censor racist, violent and pornographic content — and who find themselves without support to deal with the psychological trauma of their work. In October, we learned more about the deadly conditions facing workers in Amazon’s fulfillment centres, which prompted Amazon’s inclusion on the National Council for Occupational Safety and Health’s 2019 “Dirty Dozen” list of the most dangerous workplaces in the United States.

At the same time, many workers at the big tech companies’ headquarters have raised their voices about their employers’ questionable behaviour and business models. While this has been an enormously positive development, some of the firms appear to have pushed back against these employees. In November, Google fired four employees who had been critical of the company’s work with US Customs and Border Protection, although the company maintains that there is no connection. In Canada, Amazon appears to be working with its local delivery partners to help identify and prevent unionization efforts among workers who have long complained about unfair labour practices by these Amazon contractors.

Ethics Councils

This year also raised concerns about high-profile “ethics-washing” — that is, when firms announce that they are adopting codes of ethics and launching ethics advisory boards to better monitor and govern their work, but appear to do so mainly to avoid criticism and regulation rather than to genuinely confront ethical challenges. For example, earlier this year, Google launched, but quickly disbanded, an external advisory council for artificial intelligence (AI) ethics in the face of criticism from its own employees about the council’s membership. Members included the CEO of a drone company involved in military applications and the president of a right-wing think tank who, critics note, has a track record of anti-immigrant and transphobic views.

Similarly, after a European Union panel of experts released a set of ethics guidelines for responsible AI development and use, observers noted that the 52-member panel included only four ethicists and that the guidelines were banal and toothless, largely leaving industry to self-regulate. One of the panel’s own members, Thomas Metzinger, a professor of theoretical philosophy at Johannes Gutenberg University of Mainz, wrote in an op-ed that the panel was driven far too much by industry representatives and amounted to little more than an “ethical washing machine.”

Public Perceptions

In a survey of more than 4,000 Americans, the Pew Research Center found that 81 percent of respondents feel that the risks they face from data collection outweigh the benefits and that 62 percent believe that it is not possible to go through daily life without companies collecting their data. Moreover, 79 percent are not confident that these companies are good stewards of their data or that they will admit mistakes and take responsibility if they misuse or compromise it. In an October 2019 survey of nearly 2,000 Canadians by Forum Research, 56 percent of respondents said that technology companies such as Facebook, Google and others are “making society worse,” including 22 percent who said they make it “much worse.” When it comes to changing the way tech companies operate, 61 percent said they would approve of stronger laws to govern technology companies in Canada.

Whatever tech firms did to try to improve public confidence in 2019 doesn’t appear to have had much, if any, impact.

2020 and Beyond: A New Hope

If 2019 amounted to a disappointing response to the techlash of 2018, can we expect 2020 to be much better? Maybe. In addition to tech employees’ persistent and heroic efforts to rein in their firms’ behaviour, civil society actors have been getting more involved, and some governments are sending positive signals as well.

We learned in late 2019, for example, that funding for organizations monitoring and advocating for change in the tech sector is increasing. Moreover, government scrutiny and investigation of big tech firms’ business practices and behaviour are on the rise — even if they have not yet led to concrete action to address those practices and behaviour. In Canada, there are signs of hope in the mandate letters for the innovation, science and industry minister, the justice minister and the Canadian heritage minister. The prime minister has directed these ministers, with the assistance of a newly created “data commissioner,” to “establish a new set of online rights” that will help to protect Canadians’ privacy, prevent misuse of their data online and “encourage greater competition in the digital marketplace.”

These are all signs of hope. But whether we can move beyond “frenetic standstill” to a genuinely responsible and ethical tech sector remains to be seen.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Daniel Munro is a Senior Fellow in the Innovation Policy Lab at the Munk School of Global Affairs and Public Policy at the University of Toronto, and Co-Director of Shift Insights.