
International AI Governance Round Table
The fifth annual International AI Governance round table takes place in Sophia Antipolis, near Nice, and will be a hybrid event held under the Chatham House Rule. Attendance is by invitation only.

Trust frameworks for the AI value chain: Essential for transparency and regulatory conformity
Value chain accountability has recently emerged as a key challenge for AI governance. Transparency of information is essential to enabling accountability and building trust. Article 13 of the EU AI Act sets out the requirements for transparency and provision of information for high-risk AI systems. However, the process for sharing information along the value chain has yet to be operationalised.
At this roundtable, we will hear from those at the forefront of AI regulation and standards in Europe, alongside wider perspectives on why standardised information sharing across the AI value chain is needed to support both transparency and regulatory conformity.

Trust frameworks for the AI value chain: Essential for transparency and AI assurance
Value chain accountability is a key challenge for AI governance. Transparency of information is essential to enabling accountability and building trust. Article 13 of the EU AI Act sets out the requirements for transparency and provision of information for high-risk AI systems. However, the process for sharing information along the value chain has yet to be operationalised.
At this event we will hear from representatives of both the UK government and the UK standards body, BSI, together with leading experts in the field of AI governance.

Is AI really an existential threat?
The so-called ‘godfathers’ of AI, Yoshua Bengio, Geoffrey Hinton, and Yann LeCun, are deeply divided over whether AI is a major threat to humanity or, conversely, an enabling technology that can transform society for the better. Two leading experts, Dr Bertie Müller, Chair of AISB, and Calum Chace, author of Surviving AI, join us to discuss how likely each of these outcomes is.
We then consider whether these developments have changed policy makers’ thinking on, or approach to, AI regulation. Has the attention paid to foundation models and the possibility of some form of AGI shifted focus away from the AI systems actually being deployed today?
Full video of the event is now available. Click View Event below.


Toolsets for Operationalisation of AI Assurance: Copenhagen
This event took place at the offices of 2021.AI in Copenhagen. We had some great speakers and enjoyed a lively discussion on the benefits of frameworks and technical tools to operationalise AI assurance. See event details for full video.

Assured AI and Data Ecosystems: Innovation, Standards, and Cybersecurity
As regulatory approaches to AI and its applications firm up, attention is turning to operational challenges. How do we ensure AI value chains function smoothly? How do we build confidence in cybersecurity? How do we get the right standards while encouraging innovation? See event details for full report.

AI REGULATION TRENDS 2022: THE IMPACTS FOR INDUSTRY
Join us at ‘AI Governance and Assurance Global Trends 2022’ in Milan, where Catriona Gray will present our recent report: a current snapshot of the most important global trends in AI policy and governance.
Find out what these trends mean in practice. Hear from industrialists about how current and planned regulation affects their industries, and join the discussion on operationalising AI governance and assurance.

AI ASSURANCE AND INDUSTRY 4.0: BERLIN
We were pleased to host AI Assurance and Industry 4.0, a roundtable discussion at which experts from government, industry, and academia offered insights on measures to increase trust, innovation, and competitiveness. See event details for a synopsis and full video.

AI ASSURANCE: THE STATE OF PLAY
The AI Assurance Club kicked off last month in London with AI Assurance: the State of Play, at which speakers from industry and government took stock of where AI currently stands. Key themes included the importance of a coherent standards landscape, the need to avoid the pitfalls of GDPR, and the value of a global perspective on AI governance.

FOUNDATION FORUM 2021: AI SECURITY & PRIVACY
One impact of the Covid-19 pandemic in Europe has been a step change in digital transformation across all sectors. Industries from pharmaceuticals and manufacturing to services have begun to build an enabling foundation for the digital society. The European Commission has prioritised legislation to regulate emerging technologies, including Artificial Intelligence, through the Artificial Intelligence Act and the upcoming Data Act, as well as by facilitating the creation of European data spaces in specific sectors such as health and transportation.
Chaired by Paul MacDonnell, the Foundation Forum 2021 will feature three workshops moderated by CEPS, Eclipse Foundation and ETSI, focusing on the role of AI regulation and of technology, cybersecurity and privacy standards in enabling AI innovation and data governance in Europe. Bringing together a diverse group of stakeholders from across sectors, the Forum will offer attendees a wide range of discussions. If you wish to attend, please contact Esther Westerweele at: ewesterweele@lefmarketing.com