Last week, the Government of Canada made two announcements that signalled a desire to find a better balance between its enthusiasm and support for artificial intelligence (AI) research and commercialization, and the need to grapple with the social, political and ethical risks that AI entails. The government announced the creation of an Advisory Council on Artificial Intelligence and provided further details about the International Panel on AI, a joint initiative with France it first announced in December 2018. While both initiatives are a step in the right direction, a close look at the details suggests that there is still a long way to go to ensure the responsible development and use of AI in Canada.
Advisory Council on Artificial Intelligence
Composed of experts from academia, industry and government, the Advisory Council on AI is expected to provide advice to the government on "how best to build on Canada’s AI strengths, identify opportunities to create economic growth that benefits all Canadians and ensure that AI advancements reflect Canadian values." The council’s members are an extraordinary group of entrepreneurs and thinkers, many of whom are aware of and deeply concerned about responsible and ethical AI.
The media release announcing the council emphasized a commitment to "promoting a human-centric approach to AI, grounded in human rights, transparency and openness," but the council’s formal mandate includes almost no mention of these issues. Instead, the council is expected to provide advice to Navdeep Bains, Canada’s Minister of Innovation, Science and Economic Development, on mainly economic and skills issues, including "how to harness AI to create more jobs for Canadians, to attract and retain world-leading AI talent, to ensure more Canadians have the skills and training they need for jobs in the AI sector; and to use Canada’s leadership in AI research and development to create economic growth that benefits all Canadians."
Moreover, the council lacks representation from civil society organizations that work closely with the people and communities most likely to be adversely affected by the risks of AI, while others have expressed concern about the absence of labour representatives on the council and what that could mean for discussion of AI technology’s effect on workers. A more inclusive approach to advising the government on AI is needed.
International Panel on AI
Two days after announcing the council, the government also provided further details with respect to the International Panel on AI (IPAI). At the digital ministerial meeting in Paris, Bains and Cédric O, France’s secretary of state for digital affairs, released a declaration on the IPAI, which articulates the core principles to which future members of the panel must commit in order to participate. It emphasizes the need for a "human-centric and ethical approach to AI, grounded in human rights," as well as for commitments to diversity, inclusion, transparency, openness, democratic values and alignment with the 2030 Agenda for Sustainable Development.
These are good principles, but the process for launching and convening the panel — and ultimately for having its insights on ethical and responsible AI inform policy — is moving very slowly. Six months after the initial announcement, one might have expected something more concrete and substantive from the task force that was convened to provide advice on how to structure the panel. And, although Canada and its partners are indeed learning how to talk about responsible AI, private sector actors are already using AI-based technologies in ways that could adversely affect the rights and well-being of Canadians.
Why Responsible AI Matters
Canada faces a dilemma: finding a way to support AI innovation to achieve the social, economic and other benefits that AI promises, while also ensuring that the threats AI technologies pose to the rights and well-being of Canadians are minimized.
Supporting the research, commercialization and use of AI is critical given its potential to improve many areas, including health care, scientific research, transportation, firm productivity and other social and economic conditions. At the same time, the list of irresponsible and unethical uses of AI technologies is growing. Researchers have documented many examples: tools that do not work for trans drivers, systems that recommend over-policing minority neighbourhoods, and algorithms that rely on incomplete and biased social media and other non-financial data to assess loan risk.
From Rhetoric to Action
It’s not that the government isn’t aware of the challenges of responsible AI development and use, nor that it lacks capacity to develop good principles and procedures to manage risks. The Treasury Board of Canada has conducted extensive work on responsible AI use within and by the public sector, including developing a directive on automated decision-making and an algorithmic impact assessment process for federal agencies. And through the CIO Strategy Council, private and public sector chief information officers are developing standards for the ethical design and use of automated decision systems. But the government seems particularly reluctant to address AI in the private sector in timely and concrete ways.
The problem is that AI adoption is already happening and some of its risks are playing out in the private sector. In the face of rapid technological change, governments need to move quickly to address social, political and ethical risks. When they move slowly, private sector actors often fill the void with their own principles and procedures. But self-regulation is always shaped by existing distributions of resources and power, frequently prioritizes profit over people and often amounts to "ethics washing" rather than sincere, committed action to address the social, political and ethical risks of business activities and technologies.
The government is right to highlight the need for a “human-centric” approach to AI, grounded in respect for “human rights, inclusion and diversity.” But recent developments still seem to prioritize innovation over risk management, industry over civil society and rhetoric over concrete efforts to develop regulatory mechanisms to protect the rights and interests of Canadians in the age of AI.