Corporate adoption of artificial intelligence (AI) has surged, with 72 percent of Fortune 500 companies integrating AI technologies in 2024, amplifying concerns about ethical risks such as bias, privacy violations and lack of accountability. Corporate digital responsibility (CDR) offers a focused framework for managing digital-specific risks, addressing challenges such as algorithmic fairness, data security and ethical AI governance. Existing disclosure standards from the Sustainability Accounting Standards Board and the IFRS Sustainability Disclosure Standards (IFRS S1 and S2) do not sufficiently account for AI-related risks and opportunities, leading to inconsistent reporting across industries. Current frameworks often rely on qualitative measures, which lack the comparability and rigour required for effective oversight. Standardizing AI disclosures requires integrating industry-specific metrics that reflect AI’s distinct operational and ethical challenges. Enhancing disclosure frameworks with AI-specific metrics and transparency measures would improve corporate accountability and stakeholder trust. A cohesive approach to CDR ensures organizations can navigate the complexities of digital transformation while aligning with societal and regulatory expectations.
