Standards and certification programs are being developed to support new trustworthy artificial intelligence (AI) legislation. Recent developments point to the emergence of a two-track approach. One track would focus on certifying AI embedded in tangible products against objective criteria through established conformity assessment bodies. For AI that interacts with humans and is used to deliver services, a second track is needed: one that verifies and validates compliance with subjective criteria, values and ethics, seen by many as integral to what constitutes trustworthy AI. The practice of “assurance as a service” can be adapted to verify and validate conformance to upcoming trustworthy AI standards. This policy brief provides an update on key legislative and policy developments framing trustworthy AI. It sketches possible approaches to the certification, verification and validation of AI embedded in products and in services, and examines recent proposals for creating a new chartered profession to deliver assurance services for trustworthy AI.