Framework Convention on Global AI Challenges

Influential research. Trusted analysis.

Artificial intelligence (AI) breakthroughs and releases are attracting widespread attention in a flurry of media commentary, public debate, and national and international policy initiatives. Yet much of this attention focuses on existing AI systems rather than on the potential power, pace of development and scale of implications of systems that could exist in the years ahead.

Duncan Cass-Beggs, Stephen Clare, Dawn Dimowo and Zaheed Kara explore three emerging global-scale challenges posed by advanced AI that could require international cooperation. In their discussion paper, produced as part of CIGI’s Global AI Risks Initiative, they propose the development of an international Framework Convention on Global AI Challenges, with specific supporting protocols, and provide preliminary recommendations intended to support further dialogue, reflection and action.

Interpretations of the concept of digital sovereignty vary from continent to continent. In Africa, although data localization is seen as a means of ensuring digital sovereignty, it remains difficult to achieve. Several African countries have built or are building data centres, but when these projects involve external actors, there are often strings attached and data benefits may be unequally distributed.

In this policy brief, Folashadé Soulé argues that localizing sensitive government data, such as electoral information, is key to protecting digital sovereignty, necessitating enhanced local capacity in technology and data governance. African institutions can drive this process by developing new financial models and capacity building in digital governance and cybersecurity.

In this policy brief, Leslie N. L. Mills looks at “the value and importance of joint ventures as a specific flavour of public-private partnerships that will help [African] governments achieve their digital economy goals.” He cites two case studies from Togo where the government has favoured joint ventures in developing its cybersecurity infrastructure and expanding connectivity.

Mills writes that for this model to succeed, governments need to strengthen the rule of law, take on the burden of first loss from the private sector, negotiate commitments from the private partner to build local talent, and create opportunities for local ownership by enabling local investment in the joint venture.

Recommended

Leading or lagging in AI? Last week, Canadian Press asked CIGI President Paul Samson and other experts to comment on efforts to position Canada as a leader in AI. Read “Canada is a force in AI research. So why can’t we commercialize it?”

The Frontier AI Safety and Governance Forum, hosted by Concordia AI, a Beijing-based social enterprise focused on AI safety and governance, takes place at the World AI Conference (WAIC) in Shanghai on July 5 (09:00–17:00 CST). Highlights include an afternoon session on international cooperation, where Duncan Cass-Beggs will join other think tank scholars, members of the UN High-Level Advisory Body on AI and experts from HuggingFace in speaking on the role of international institutions, proposals for AI safety “redlines,” and geopolitical dynamics in AI governance.

Find out here how to register or live stream the event.

“June 2024 saw the first regulated social media election for the European Parliament. The Digital Services Act came into full force in February 2024, and the European Commission has already launched formal investigations into TikTok, X and AliExpress for various potential infringements. The question now is whether these events have made any difference to Europe’s democracy.”

Half the world’s population goes to the polls in 2024. This commentary by Heidi Tworek is the fifth in a series from CIGI created in partnership with the Centre for the Study of Democratic Institutions at UBC to explore the intersection of technology with the most pivotal among these elections.

“There is something deeply worrying about the prospect of AI agents that can reason and act independently, make copies of themselves, and spread across networks while carrying out complex tasks, without human oversight.”

Eli Fathi and Peter MacKinnon reflect on the recent launches of OpenAI’s GPT-4o and Google’s Gemini 1.5 Pro, which suggest a future with more sophisticated intelligent AI agents that will function as fully autonomous systems. “What will be the effect on behaviour, when individuals and groups can speak with AI agents that appear human in their ability to provide detailed answers?…Discussion of AI risks thus far has focused too little on the possible unforeseen consequences on human behaviour. This needs to change.”

© 2025 Centre for International Governance Innovation