Avoiding Catastrophe Through Intersectionality in Global AI Governance

Digital Policy Hub Working Paper

April 28, 2025

Artificial intelligence (AI) safety is a growing field that highlights the existential risks of AI while proposing alternative development processes centred on alignment with human values and ethical concerns. Although it promotes critical perspectives, AI safety has been criticized for its limited conceptualization of future existential threats as universally impactful. This working paper applies a feminist policy analysis framework centred on five thematic areas — intersectionality, context, neutrality, control and power — to analyze global initiatives for AI safety governance. The analysis reveals that AI safety policies often lack meaningful engagement with feminist principles, failing to acknowledge how future risks are tied to current harms. Future AI safety work can benefit from integrating feminist perspectives, such as accountability and participation, into research and policy development processes.

About the Author

Laine McCrory is a Digital Policy Hub master’s fellow and second-year master’s student in the joint program in communication and culture at Toronto Metropolitan University and York University. She works at the intersections of feminist technology, artificial intelligence (AI) policy, smart cities, data capture and community governance to develop socio-political critiques of AI.