Conceptualizing Global Governance of AI

Digital Policy Hub Working Paper

February 27, 2024

Global AI governance sits at the intersection of artificial intelligence (AI) and global governance: it focuses on defining terms to deepen understanding, promote collaboration and create informed policies, and it emphasizes multi-stakeholder and multi-level cooperation in managing AI's global impacts. AI's societal impacts are broad, offering exceptional benefits while carrying unintended risks. Its rise poses geopolitical challenges, affecting transparency, privacy and power dynamics in democratic and non-democratic states alike. Both empirical and normative research are essential to shaping global AI governance, guiding the ethical values and legal practices needed for responsible data use and unbiased algorithm development. Empirical research provides verifiable knowledge drawn from data and experience, highlighting regime complexity in an anarchic system of global governance, that is, a system with no central authority. Normative research examines values and norms, assessing AI systems' trustworthiness and ethical compliance. Multilateral cooperation in global AI governance involves collaborative efforts among numerous actors to establish universally accepted norms and policies for AI. An institutional framework for global AI governance should draw lessons from international organizations such as the International Atomic Energy Agency and the European Organization for Nuclear Research (Conseil européen pour la recherche nucléaire) to guide AI's ethical development and deployment within and beyond national borders.

About the Author

Maral Niazi is a Digital Policy Hub doctoral fellow and a PhD student at the Balsillie School of International Affairs, with a multidisciplinary background in political science, human rights, law and global governance. Her research with the Digital Policy Hub expands on her doctoral research on the global governance of AI, examining AI's societal impacts on humanity.