AI assurance ensures compliance with strict regulations, builds trust and transparency, and guarantees accountability and safety. It upholds ethics and fairness, preventing biases and discrimination. Robust practices enhance system security and reliability, protecting against failures and threats. Ultimately, AI assurance supports responsible innovation, advancing beneficial AI technologies.
— ChatGPT, June 2024

Launched in May 2022, the AI Assurance Club is a Global Digital Foundation initiative. The Club brings together representatives from across the AI ecosystem by convening events and working groups with leading industry experts, AI assurance providers, regulators, policymakers, standards and certification bodies, and academics. We share key insights into the rapidly evolving AI assurance ecosystem and offer members unique opportunities to shape industry’s response to these developments. Our growing membership currently numbers over 300 professionals working in AI technology, international standards development, public policy, and academia.

What is AI Assurance?

AI assurance is the practice of providing reliable information about AI systems. It encompasses impact assessments and compliance audits, as well as verification mechanisms such as model performance testing. Assurance means that those who buy, sell, and use AI systems can better understand, assess, and manage risks, and can demonstrate regulatory compliance.
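
To make "model performance testing" concrete, the sketch below shows one narrow form of verification evidence an assurance process might record: overall accuracy plus a simple per-group performance comparison. It is purely illustrative and is not drawn from the Club's materials; the data, group labels, and 0.10 tolerance are hypothetical placeholders.

```python
# Illustrative sketch only: a minimal "model performance test" of the kind an
# AI assurance process might record as evidence. All data, group labels, and
# thresholds are hypothetical placeholders.

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the ground-truth labels."""
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return correct / len(y_true)

def group_accuracies(y_true, y_pred, groups):
    """Accuracy computed separately for each subgroup (e.g. a protected attribute)."""
    results = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        results[g] = accuracy([y_true[i] for i in idx], [y_pred[i] for i in idx])
    return results

if __name__ == "__main__":
    # Hypothetical labels, predictions, and subgroup membership.
    y_true = [1, 0, 1, 1, 0, 1, 0, 0]
    y_pred = [1, 0, 1, 1, 1, 0, 1, 0]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

    overall = accuracy(y_true, y_pred)
    by_group = group_accuracies(y_true, y_pred, groups)

    # A real assurance report would log this evidence against agreed acceptance
    # criteria; here we simply flag a large gap between subgroups.
    print(f"Overall accuracy: {overall:.2f}")
    print(f"Per-group accuracy: {by_group}")
    gap = max(by_group.values()) - min(by_group.values())
    flag = "  (exceeds illustrative 0.10 tolerance)" if gap > 0.10 else ""
    print(f"Performance gap across groups: {gap:.2f}{flag}")
```

In practice such tests are only one strand of assurance evidence, sitting alongside impact assessments and compliance audits rather than replacing them.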

AI assurance is a critical component of AI governance, focused on ensuring that AI systems operate safely, ethically, and reliably. By integrating AI assurance into AI governance, organisations can build trust, enhance system robustness, and foster responsible AI development and deployment.

The production and distribution of AI systems is complex. AI systems are generally developed through the collaboration of many actors within a value chain rather than ‘in-house’ by a single entity. We have proposed a multi-actor governance framework (MAGF) to support the flow of assurance information between actors in the AI value chain, an approach we outline in our 2024 MAGF white paper. We are delighted and proud that our work on transparency through the value chain, and on the need for frameworks to support information sharing, has helped raise the profile of this issue.

Latest News