Poor Cybersecurity at Frontier AI Labs Could Disincentivize Arms Racing

Digital Policy Hub Working Paper

June 30, 2025

An artificial intelligence (AI) arms race could lead to or exacerbate an arms race in cybertechnology, transforming the cyberwarfare landscape. This transformation could heighten the difficulty of securing frontier AI labs against cyberattacks. If states’ frontier AI labs are mutually vulnerable to cyberattacks from their adversaries, states gain, under specific circumstances, novel incentives to coordinate on AI development. Further incentives to coordinate would emerge if their labs were vulnerable to cyberattacks by both state and non-state actors. Such coordination would reduce arms race-driven international security risks from AI, although those risks might remain elevated where malicious non-state actors are involved. There is a pressing need for further research on how cyberwarfare will shape states’ incentives to buy into international AI coordination regimes. Research in this vein should seek to identify and leverage cybersecurity vulnerabilities that could increase the effectiveness of international coordination efforts.

About the Author

Wim Howson Creutzberg is a former Digital Policy Hub undergraduate fellow who recently completed a B.A. at McMaster University. His interests include governance mechanisms for mitigating collective action problems and artificial intelligence (AI) policy, and his research examined how international AI policy proposals enforce coordination.