UN Report Raises the Question: Do Governments Have the Tools to Hold AI Firms to Account?

Emerging technologies are reshaping our world at a rate that threatens to outpace both our readiness to govern them and our understanding of the implications for fundamental rights and freedoms.

October 12, 2021
People walk past a poster simulating facial recognition software at the Security China 2018 exhibition on public safety and security in Beijing, October 24, 2018. (Thomas Peter/REUTERS)

At the seventy-sixth session of the United Nations General Assembly, which recently concluded in New York City, US President Joe Biden urged world leaders to focus on “shaping the rules of the world on vital issues like trade, cyber, and emerging technologies.” In addressing the same body that adopted the Universal Declaration of Human Rights (UDHR) in the wake of World War II, he asked, “Will we apply and strengthen the core tenets of [the] international system, including the U.N. Charter and the [UDHR], as we seek to shape the emergence of new technologies and deter new threats?”

Biden’s focus on emerging technologies is particularly pertinent given the speed with which technologies such as artificial intelligence (AI) are reshaping our world and our lives, far outpacing our ability to understand their implications for fundamental rights and freedoms, a trend further accelerated by the COVID-19 pandemic.

Biden’s remarks came on the heels of a new report published by the Office of the United Nations High Commissioner for Human Rights (OHCHR), The Right to Privacy in the Digital Age, outlining the human rights risks and implications of the widespread use of AI by governments and businesses alike. The report reviews the international human rights legal framework applicable to AI technologies, highlights specific risks in four key sectors (law enforcement, national security, criminal justice and border management; public services; employment; and content moderation) and offers recommendations to mitigate these risks.

It also advocates for new and improved national legislation, human rights due diligence by states and businesses, enhanced oversight of the private sector, and greater transparency and participation by diverse stakeholders. Despite widespread coverage of the report’s publication, it has sparked relatively limited analysis to date. And yet, the OHCHR’s report is remarkable for at least four reasons.

First, whereas various national, international, cross-sectoral, and multi-stakeholder frameworks for ethical AI have proliferated over the last decade, the report officially acknowledges that the impacts and risks posed by AI systems are a matter of international human rights law, giving rise to binding obligations for state actors and businesses alike that go beyond ethical guidelines. For example, it cites the affirmative duties of states to protect against adverse human rights impacts and the obligations of companies pursuant to the United Nations Guiding Principles on Business and Human Rights. It also acknowledges that states have a specific “duty to adopt adequate legislative and other measures to safeguard individuals against interference in their privacy, whether it emanates from State authorities or from natural or legal persons.” While some jurisdictions, such as the European Union, are already working on AI-specific legislation, the report elevates the importance of these issues in national and international policy agendas.

Second, the report expressly acknowledges that the human rights impacts of AI systems go far beyond threats to the individual right to privacy. As the report observes, “deeply intertwined with the question of privacy are various impacts on the enjoyment of other rights, such as the rights to health, education, freedom of movement, freedom of peaceful assembly, freedom of association and freedom of expression.” The report also notes the inadequacy of existing data protection and privacy laws that focus on personal data, as “AI systems do not exclusively rely on the processing of personal data … [and] even when personal data are not involved, human rights, including the right to privacy, may still be adversely affected by their use.” This emphasis on economic, social and cultural rights, such as the rights to health, education and work, is of critical importance, particularly in relation to the increasing use of AI in public services and employment. For example, recent research from Harvard found that automated hiring and job applicant screening technologies unnecessarily screen out and reject millions of viable job candidates due to overly simplistic criteria.

Third, the report is remarkable in highlighting the shaky scientific foundations on which AI tools and technologies often rely. For example, it observes that “facial emotional recognition systems operate on the premise that it is possible to automatically and systematically infer the emotional state of human beings from their facial expressions, which lacks a solid scientific basis [as] facial expressions vary across cultures and contexts, making emotion recognition susceptible to bias and misinterpretations.” It further notes that “the quantitative social science basis of many AI systems used for people management is not solid, and is prone to biases.” Whereas most ethical AI frameworks focus on the impacts or outcomes of AI systems, the OHCHR’s report goes further by challenging their underlying claims or efficacy in the first place.


Finally, the report embraces a precautionary principle with respect to certain AI applications and technologies. Notably, it calls for a “moratorium on the use of remote biometric recognition technologies in public spaces, at least until the authorities responsible can demonstrate compliance with privacy and data protection standards and the absence of significant accuracy issues and discriminatory impacts,” and until other safeguards are in place.

Strong rhetoric and robust recommendations aside, there are reasons to be skeptical of the report’s potential impact. For one thing, it doesn’t define what AI is. Instead, the report references “artificial intelligence, including profiling, automated decision-making and machine-learning technologies.” While the proposed European AI regulation also neglects to define AI (instead opting to enumerate a list of “AI systems” that is subject to updates and amendments as technology evolves), the Organisation for Economic Co-operation and Development defines an AI system as “a machine-based system that can, for a set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.” Without a clear definition of AI, it will be challenging to apply relevant international human rights standards.

A more persistent challenge is the question of adequate resources for setting and implementing the relevant international human rights standards. Despite the mounting human rights risks posed by the widespread use of AI, risks exacerbated by the pandemic, the OHCHR received less than US$100 million from the United Nations’ budget in 2020, approximately three percent of the United Nations’ total budget, leaving it with a shortfall of US$375.5 million. Meanwhile, the global annual revenues of firms such as Facebook and Amazon, whose AI-based technologies already pose some of the greatest risks to human rights, ranged from US$86 billion to US$386 billion in that same year.

While companies have ample resources (but often limited incentives) to meet their human rights obligations, this report should leave us asking whether governments have the resources and, more importantly, the political will to uphold their duty to hold companies to account. As things stand now, the costs of inaction are simply too high.

The opinions expressed in this article are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Elizabeth M. Renieris is a CIGI senior fellow, lawyer, researcher and author focused on the ethical and rights implications of new technologies. Her latest book is Beyond Data: Reclaiming Human Rights at the Dawn of the Metaverse.