A Welcome Voice for Canada on the Future of AI

By establishing an AI Safety Institute, Canada is joining an urgent global effort to ensure safe, secure and trustworthy AI.

April 30, 2024
Photo illustration of a Canadian flag overlying technological components. (Shutterstock)

In its 2024 budget released on April 16, the Government of Canada pledged $2.4 billion toward artificial intelligence (AI), including $2 billion for AI computing capacity and $50 million for a new AI Safety Institute of Canada. This commitment brings Canada into line with key allies in addressing one of the most pressing global challenges of our time: how to mitigate the severe public safety risks posed by advanced AI. For these investments to be worthwhile, however, Canada will need to empower its new institute with a clear mandate and an agile structure, and make a commensurate government commitment to turn research into policy action.

Why an AI Safety Institute?

We are living in a Don’t Look Up world. Leading AI scientists, including Canada’s Yoshua Bengio and Geoffrey Hinton, warn that advanced AI, for all its potential benefits, could pose grave risks to humanity if not managed carefully. AI systems are becoming rapidly more powerful and could soon be misused to cause widespread harm, or even act autonomously in ways that humans can’t control.

What explains the disconnect between the urgency of potential AI risks and the relative lack of government focus? Two crucial knowledge gaps stand out: the first concerning the nature of the risks, and the second concerning how best to take action. The new institute and its fledgling counterparts in the United States, the United Kingdom and Japan are intended to help fill these gaps.

Assessing Risks and Researching Solutions

How likely and severe are potential safety risks from AI? How soon could AI reach, and then vastly exceed, human-level proficiency across a broad range of cognitive capabilities? How likely is it that technical safeguards will be adequate to ensure such systems can’t be misused to cause catastrophic harm or won’t slip beyond humanity’s control? The forthcoming International Scientific Report on Advanced AI Safety should answer some of these questions, but much work remains. The new institute could unite leading Canadian AI scientists to participate in this effort.

Governments also require answers on what technical and governance solutions may be needed to mitigate AI risk. Technical research by the new institute could help determine potential capabilities and risks of frontier AI and aid in the design of effective mitigations. More ambitiously, the institute could develop new approaches for reliably safe AI. Canada has contributed to a revolution in AI paradigms before and could do so again.

Safety also requires solutions to complex AI-related national and international governance challenges. What governance mechanisms would mitigate the risk of a single company or country using AI to achieve dominance at the expense of others? What would be needed to prevent rogue actors anywhere from creating AI systems that pose extreme risks to humanity? The new institute can mobilize the ingenuity of Canadians across sectors and disciplines to address these issues.

Design of the Institute

How should the new institute be designed? Potential principles include:

  • scientific independence, neutrality and rigour, to ensure trust and credibility across society;
  • close channels of communication, to learn from and advise relevant government experts, including in sensitive areas of national security;
  • an approach that leverages Canada’s leading assets, such as its AI institutes (Amii, Mila and Vector) and policy and governance think tanks;
  • strong, dynamic, mission-led leadership;
  • a highly focused research agenda targeted to the most important questions where Canadian expertise can contribute;
  • close collaboration with AI safety institutes in other countries, to ensure complementary efforts;
  • privileged access to Canadian computing capacity, to support technical research; and
  • agility and flexibility in administration, to attract and retain the right people.

A Crucial Counterpart in Government

To be effective and relevant, the new institute should be accompanied by the creation of a central counterpart body within government that can absorb the institute’s research and convert it into policy action. This counterpart should integrate a full range of legitimate perspectives and areas of action, including public safety and national security as well as innovation, privacy, competition, global affairs and international cooperation.

Both the new institute and its counterpart in government should be established as soon as possible. Acting quickly would support Canada's preparations to lead the G7 in 2025.

Canada is a proven AI leader. As the world sits on the cusp of being able to create technologies that surpass humans in all cognitive capabilities, Canada has an opportunity to play a globally crucial role in building safe, secure and trustworthy AI. We must quickly rise to the challenge.

This article was first published by Tech Policy Press.

The opinions expressed in this article/multimedia are those of the author(s) and do not necessarily reflect the views of CIGI or its Board of Directors.

About the Author

Duncan Cass-Beggs is executive director of the Global AI Risks Initiative at CIGI, focusing on developing innovative governance solutions to address current and future global issues relating to artificial intelligence.