Russia has emerged as the most active foreign meddler in US elections, deploying artificial intelligence (AI)-driven disinformation campaigns at scale. The rise of generative AI has shifted the disinformation battlefield from primarily influencing human audiences to deliberately “grooming” large language models (LLMs). By flooding the digital sphere with pro-Kremlin narratives, Russia and allied networks aim to contaminate AI training data sets so that chatbots repeat or legitimize propaganda. Canada should adopt a coordinated policy approach to combating AI-driven information manipulation by consolidating existing legislation into a cohesive framework and by enhancing transparency and accountability for social media platforms and LLM developers. Regulatory obligations must include data access for researchers, public reporting and data quality standards, backed by strong enforcement mechanisms. The government should also support research initiatives and data-sharing collaborations to monitor the evolving information landscape and identify threats in real time.
