Canadian Government Calls for a “Human-centric” Approach to AI

May 21, 2019
Federal artificial intelligence initiatives must prioritize the rights and well-being of Canadians. (Shutterstock)

Last week, the Government of Canada made two announcements that signalled a desire to find a better balance between its enthusiasm and support for artificial intelligence (AI) research and commercialization, and the need to grapple with the social, political and ethical risks that AI entails. The government announced the creation of an Advisory Council on Artificial Intelligence and provided further details about the International Panel on Artificial Intelligence — a joint initiative with France it first announced in December 2018. While both initiatives are a step in the right direction, a close look at the details suggests that there is still a long way to go to ensure the responsible development and use of AI in Canada.

Advisory Council on Artificial Intelligence

The government said it expects the Advisory Council on AI, composed of experts from academia, industry and government, to provide advice on “how best to build on Canada’s AI strengths, identify opportunities to create economic growth that benefits all Canadians and ensure that AI advancements reflect Canadian values.” The council members are an extraordinary group of entrepreneurs and thinkers — many of whom are aware of and deeply concerned about responsible and ethical AI.

The media release announcing the council emphasized a commitment to “promoting a human-centric approach to AI, grounded in human rights, transparency and openness,” but the formal terms of reference make almost no mention of these issues. Instead, the council is expected to advise Navdeep Bains, Canada’s Minister of Innovation, Science and Economic Development, mainly on economic and skills issues — including “how to harness AI to create more jobs for Canadians, to attract and retain world-leading AI talent, to ensure more Canadians have the skills and training they need for jobs in the AI sector; and to use Canada’s leadership in AI research and development to create economic growth that benefits all Canadians.”

Moreover, some observers have noted the absence from the council of civil society organizations that work closely with the people and communities most likely to be adversely affected by the risks of AI. Others have expressed concern about the lack of labour representatives and what that could mean for discussion of AI technology’s effects on workers. A more inclusive approach to advising the government on AI is needed.

International Panel on AI

Two days after announcing the council, the government also announced new developments with respect to the International Panel on Artificial Intelligence (IPAI). At the digital ministerial meeting in Paris, Bains and Cédric O, France’s secretary of state for digital affairs, released a declaration of the IPAI, which articulates the core principles to which future members of the panel must commit in order to participate. It emphasizes the need for a “human-centric and ethical approach to AI, grounded in human rights” as well as for commitments to diversity, inclusion, transparency, openness, democratic values and alignment with the 2030 Agenda for Sustainable Development.

These are good principles, but the process for launching and convening the panel — and ultimately for having its insights on ethical and responsible AI inform policy — is moving very slowly. Six months after the initial announcement, one might have expected something more concrete and substantive from the task force that was convened to provide advice on how to structure the panel. And, although Canada and its partners are indeed learning how to talk about responsible AI, private sector actors are already using AI-based technologies in ways that could adversely affect the rights and well-being of Canadians.

Why Responsible AI Matters

Canada faces a policy challenge: finding a way to support AI innovation to achieve the social, economic and other benefits that AI promises, while also ensuring that the threats AI technologies pose to the rights and well-being of Canadians are minimized.

Supporting the research, commercialization and use of AI is critical given its potential to improve health care, scientific research, transportation, firm productivity and other social and economic conditions. At the same time, the list of irresponsible and unethical uses of AI technologies is growing. Researchers have catalogued many examples, including image recognition technologies that miscategorize black faces, risk assessment algorithms used in sentencing that discriminate against black defendants, chatbots that adopt racist and misogynistic language when trained on online discourse, facial recognition technology used by Uber that does not work for trans drivers, predictive policing models that recommend over-policing minority neighbourhoods and loan approval systems that rely on incomplete and biased social media and other non-financial data to assess loan risk.

From Rhetoric to Action

It’s not that the government isn’t aware of the challenges of responsible AI development and use, nor that it lacks the capacity to develop good principles and procedures to manage risks. The Treasury Board of Canada has conducted extensive work on responsible AI use within and by the public sector — including developing its Directive on Automated Decision-Making and an Algorithmic Impact Assessment process for federal agencies. And through the CIO Strategy Council, private and public sector chief information officers are developing voluntary shared standards for automated decision systems using machine learning. Yet the government seems particularly reluctant to address AI in the private sector in timely and concrete ways.

The problem is that AI adoption is already happening and some of its risks are playing out in the private sector. In the face of rapid technological change, governments need to move quickly to address social, political and ethical risks. When they move slowly, private sector actors often fill the void with their own principles and procedures. But self-regulation is always shaped by existing distributions of resources and power, frequently prioritizes profit over people and often amounts to ethics washing rather than sincere, committed action to address social, political and ethical risks of business activities and technologies.

The government is right to highlight the need for a “human-centric” approach to AI, grounded in respect for “human rights, inclusion and diversity.” But recent developments still seem to prioritize innovation over risk management, industry over civil society and rhetoric over concrete efforts to develop regulatory mechanisms to protect the rights and interests of Canadians in the age of AI.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Daniel Munro is a Senior Fellow in the Innovation Policy Lab at the Munk School of Global Affairs and Public Policy at the University of Toronto, and Co-Director of Shift Insights.