Before Adopting New Technologies, We Must Define the Common Good

To avoid (or halt) the harmful effects of digital control and techno-social engineering, regulators must garner more public interest in the ways that technologies shape public life

March 6, 2020
Facial recognition technology helps traffic police capture jaywalkers in Nanjing. (Reuters/Wang Feng).

Yesterday’s techlash is today’s regulatory agenda. The Canadian government has recently undertaken major policy reform related to privacy, broadcasting and telecommunications, while also committing to regulate data and artificial intelligence (AI). As the government adapts and reaffirms policy traditions such as democratic citizenship and privacy, its policies must also address technology’s influence on society; this is a matter of digital control and techno-social engineering.

Control is a long-standing but often nebulous concern. Concern about the influence of technology on free will and autonomy is central, for example, to Shoshana Zuboff’s blockbuster book The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. In a recent op-ed, Zuboff writes, “people have become targets for remote control, as surveillance capitalists discovered that the most predictive data come from intervening in behavior to tune, herd and modify action in the direction of commercial objectives.” That should be a familiar concern, as much of computing has long promised to better manage — or rather, to optimize — human societies, usually in the name of profit and social stability.

These concerns are the subject of Re-Engineering Humanity, a helpful new book by legal scholar Brett Frischmann, whose past work helped establish the importance of the commons in information societies, and philosopher Evan Selinger. The book offers an accessible introduction to the problem of digital control. Technologies, as the book’s title implies, have in some ways acted as social projects, experiments that ask how humanity can be steered and to what ends. Rather than suggest that the current problems are altogether new, the authors highlight today’s techno-social engineering as a turn to more integrated, always-on tools, with an emphasis on contracts that frame more and more of our social interactions and a reliance on digital media to manage choices.

These effects are nuanced. Control is more about choices: what users could do rather than what they must do. Attempts at control often fail because ingenuity extends well beyond the engineers. Frischmann and Selinger’s book is a reminder that overstating digital control “risks imputing too much power to others and too little to ourselves.” Instead, the book examines the belief in an engineered determinism, the assumption that technologies can easily steer society even though that is far from the case.

The book’s attention to the problem of engineered determinism is particularly apt amid growing concerns about data power, facial recognition, content moderation and platform governance. According to Frischmann and Selinger, there is a “grand hubris to believe we could socially construct a perfectly optimized world if we only have sufficient data, confidence in our advanced technologies and expert technologists, and willingness to commit to what some see as a utopian program.” Technology, in other words, contains a social program, one usually obscured as a stated optimal condition expressed in math rather than settled through democratic debate.

Society does not have an optimal solution. It is, and will always be, a constant problem, one that requires democratic governance and a commitment to human dignity. A fantastic quote in Re-Engineering Humanity sums this matter up: “Optimization presumes an identifiable set of values. One of the reasons why society generally does not aim to optimize for specific values, such as efficiency or happiness, is that people often are committed to many different values.” A question remains: which institutions will enable public debate and allow for the consideration of different values and views on what might be in the public interest?

The answer to that question is anything but clear. Frischmann and Selinger worry that narrow definitions of evidence preclude critical discussion of technology and constrain robust policy making about future challenges. Indeed, Canada’s record of democratic governance of technology is poor; new technologies are too often treated as a foregone necessity rather than an occasion for public debate.

New solutions do exist, and Canada’s response will be of international significance. In matters of AI and algorithms, governments worldwide are considering algorithmic impact assessment tools that could evaluate social impacts. Here, Canada is seen as a leader: the federal government is set to require assessments of social risk for all of its applications of AI and algorithms. Meredith Whittaker, a prominent critic of the social harms of digital technologies and co-founder of the AI Now Institute, specifically points to Canada’s initiatives in her call for the US government to implement algorithmic impact assessments for all use and procurement of AI systems.

At their best, assessments become an impetus for public consultation and the co-creation of knowledge about techno-social engineering. AI Now, for instance, suggests that assessments “must account for AI’s impact on climate, health, and geographical displacement.” At their worst, though, assessments become another form to fill out, which might be the outcome of Canada’s implementation if it is not supported and elaborated.

New institutions need to be established to deal with these ongoing problems. For example, the recent Broadcasting and Telecommunications Legislative Review calls for major reforms to Canada’s media regulator, the Canadian Radio-television and Telecommunications Commission (CRTC). In its final report, the review panellists recommend democratizing the appointment of the CRTC’s commissioners, establishing “a transparent process for funding public interest participation” and organizing a “Public Interest Committee funded by the CRTC.” These recommendations reaffirm a commitment to democratic oversight of communication systems, an example of how institutional support can improve the capacity for assessments.

Canada (and the rest of the world) faces an opportunity: to avoid (or halt) the harmful effects of digital control and techno-social engineering, regulators must garner more public interest in the ways that technologies shape public life. Wrestling with technology’s effects, its wide reach and its impact, provides an opportunity to foster more inclusive and engaged consultations about the environmental, health, social, cultural and democratic implications of powerful technologies such as AI and digital platforms. Here, there is excitement as much as risk in searching for the right formats to find, define and debate the common good.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Fenwick McKelvey is an associate professor at Concordia University and the author of Internet Daemons.