What is AI Assurance?

AI assurance is the practice of providing reliable information about AI systems. It encompasses impact assessments, compliance audits, and verification mechanisms such as model performance testing. Assurance means that those who buy, sell, and use AI systems can better understand, assess, and manage risks while demonstrating regulatory compliance.

“By building trust in AI systems through effective communication to appropriate stakeholders, and ensuring the trustworthiness of AI systems, AI assurance will play a crucial role in enabling the responsible development and deployment of AI, unlocking both the economic and social benefits of AI systems.”

UK government’s guidance on AI Assurance (2024)

Read Towards Multi-Actor Governance in Five Practical Steps, John Higgins, Paul MacDonnell (2020)