We produce outcome reports from selected events, together with briefing papers, white papers and our annually updated AI Governance and Assurance Global Trends Report. A selection of our publications is available below.

Our Publications

AI Governance and Assurance Global Trends 2023-24, Catriona Gray and Rob Wortham, March 2024

We are at a watershed moment in AI governance. Across the globe, government and industry actors, along with citizens, are weighing up the options available. Many different tools are being developed to help take advantage of the promises of AI while minimising its risks. These range from formal regulation, whether through new AI laws or increased powers for sector-based regulators, to assurance processes such as voluntary standards conformity and certification schemes. This report surveys the global AI governance landscape, identifying six trends …

MAGF – A Call and Proposal for Assurance Information Sharing Standards, MAGF Working Group, January 2024

In this paper we motivate the need for a Multi-Actor Governance Framework (MAGF). The purpose of a MAGF is to enable effective assurance across the entire AI value chain, allowing relevant organisations to make informed, risk-based decisions that meet both their formal legal requirements and their own business needs. Use of such a framework will also increase organisational transparency and demonstrate a responsible approach to the development and deployment of AI-based solutions within the AI ecosystem …
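By way of illustration only, the kind of record such a standard might define could look like the minimal sketch below. This sketch is ours, not the working group's: the field names, types and example values are all assumptions, intended simply to make the idea of machine-readable assurance information sharing concrete.

from dataclasses import dataclass
from datetime import date

# Hypothetical assurance record that one actor in the AI value chain
# (e.g. a model developer) might share with another (e.g. an integrator).
# All field names are illustrative assumptions, not part of any published
# MAGF standard.
@dataclass
class AssuranceRecord:
    component_id: str             # the AI component the claim refers to
    supplier: str                 # organisation making the assurance claim
    claim: str                    # the assurance claim being made
    evidence_uri: str             # link to test results, audit report, etc.
    standard: str                 # scheme the claim conforms to
    risk_level: str               # supplier's risk classification
    issued: date                  # date the claim was made
    expires: date | None = None   # optional review or expiry date

# Example: a developer shares a robustness claim with an integrator.
record = AssuranceRecord(
    component_id="vision-model-v2",
    supplier="Example AI Ltd",
    claim="Robustness-tested against the supplier's adversarial test suite",
    evidence_uri="https://example.com/assurance/vision-model-v2/report",
    standard="ISO/IEC 42001",
    risk_level="limited",
    issued=date(2024, 1, 15),
)
print(record)

Whatever form an eventual standard takes, its value would lie in giving every actor in the chain a shared, machine-readable vocabulary for claims, evidence and risk, so that downstream organisations can make the informed, risk-based decisions described above.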

Operationalising AI governance: A review of industry best practices, Catriona Gray and Rob Wortham, September 2023

This briefing paper provided advance context for the 4th annual International AI Governance Roundtable, hosted by Global Digital Foundation on 19 September 2023 in Barcelona and Shanghai. This year's event focused on the operationalisation of AI governance practices and considered how an engineering approach can be taken. As in previous years, the event took place under the Chatham House Rule.

Foundation Forum 2022: Assured AI and Data Ecosystems - Outcome Report, Catriona Gray and Rob Wortham, March 2023

On 29 November 2022 in Brussels, Global Digital Foundation hosted its second annual Foundation Forum. This year's theme was Assured AI and Data Ecosystems: Innovation, Standards, and Cybersecurity. The AI Act is expected to become applicable by around 2025, and stakeholders must use this interim period to prepare to operationalise the provisions of the Act. Over the course of the keynotes and panels, participants engaged in wide-ranging discussions. Four interactive panels covered AI assurance, cybersecurity, innovation and standards, and a final panel drew together the conclusions from each, identifying common themes and relationships between the different policy areas.

The Assessment List for Trustworthy Artificial Intelligence: A Review and Recommendations, Charles Radclyffe, Mafalda Ribeiro and Robert H. Wortham, Frontiers in Artificial Intelligence, Vol. 6, March 9, 2023

In July 2020, the European Commission's High-Level Expert Group on AI (HLEG-AI) published the Assessment List for Trustworthy Artificial Intelligence (ALTAI) tool, enabling organizations to perform self-assessments of the fit of their AI systems and surrounding governance to the “7 Principles for Trustworthy AI.” Prior research on ALTAI has focused primarily on specific application areas, but there has yet to be a comprehensive analysis offering broader recommendations aimed at proto-regulators and industry practitioners. This paper therefore starts with an overview of the tool, including an assessment of its strengths and limitations. The authors then consider the extent to which ALTAI is likely to be of utility to industry in improving understanding of the risks inherent in AI systems and of best practices to mitigate such risks. They highlight how research and practices from fields such as Environmental Sustainability, Social Justice, and Corporate Governance (ESG) can benefit those addressing similar challenges in ethical AI development and deployment. They also explore how likely the tool is to be taken up by industry, considering various factors pertaining to its adoption. Finally, the authors propose recommendations, applicable internationally to bodies similar to the HLEG-AI, on closing the gaps between high-level principles and practical support for those on the front line developing or commercializing AI tools. In all, this work provides a comprehensive analysis of the ALTAI tool, together with recommendations to relevant stakeholders, with the broader aim of promoting more widespread adoption of such tools in industry.

Assured AI and Data Ecosystems: Innovations, Standards and Cybersecurity, the Policy Context — Dr Robert H. Wortham and Catriona Gray, November 2022

Artificial Intelligence (AI) undoubtedly offers huge opportunities for businesses, public authorities, and citizens. We are already witnessing major transformations, enabled by AI, in fields including infrastructure, business processes, consumer products, and public services. The development and deployment of AI techniques across sectors, however, brings significant challenges, including for cybersecurity. This comes at a time when cyber attacks are increasing in scale, cost and complexity, and the number of devices linked to the Internet of Things (IoT) continues to grow.

AI Governance and Assurance: Global Trends 2022 — Catriona Gray and Dr Robert H. Wortham, September 2022

The field of AI governance is fast-moving and complex. We are witnessing a general shift from soft-law declarations and principles on AI towards more concrete, rule-based commitments, including binding national and regional regulations. Yet many key debates about how best to regulate AI remain unresolved. Fundamental issues, including definitions of AI and how to assess, manage and classify risk, continue to divide policy actors. Agreement about high-level values and ethical principles has not translated into agreement about how markets should be governed. Alternative regulatory models are now beginning to emerge, with potentially far-reaching implications for global trade. From an AI assurance perspective, countries are at very different levels of ecosystem maturity. Building knowledge and capacity for AI assurance will be vital in shifting AI governance from principles to practice.

A Response to the UK’s AI Regulation Policy Paper — Dr Robert H. Wortham and Catriona Gray, September 2022

We welcome the recent publication of the UK government's proposals on the future regulation of AI in its Policy Paper, Establishing a pro-innovation approach to regulating AI. Overall, we broadly support the sector- and context-specific approach, which stands in contrast to the anticipated cross-sectoral EU regulatory regime. We believe the key strengths of the Policy Paper are …

Making Rules for AI: The Errors and Fallacies Regulators Must Avoid — Paul MacDonnell, October 2020

Policy analysts and policymakers are responding to AI as a threat to human rights and as a potential saviour of humanity from discrimination. These responses originate in two places: the first is the widespread belief that inequality reflects malevolent intentions embodied in historical social hierarchies; the second is the belief that, unchecked, AI will instrumentalise these intentions. These responses explain why many people advocate that ‘fairness’ must be programmed into AI. This paper takes no issue with the view that AI should be designed so as not to treat people unfairly. It argues that while safety (e.g. in transport or healthcare) can, and should, be designed into AI prior to implementation, fairness, defined as the achievement of equality of social outcomes, cannot be designed into AI in the same way. The likely causes of bad social or economic outcomes following the use of AI will be both multivariate and exogenous to AI technologies in a way that threats to safety will not be; indeed, these ‘outcomes’ will usually predate the use of such technologies. For this reason, arguments to design or mandate AI ‘fairness’ ex ante to achieve equality of outcomes rely on poor reasoning and, if successful, will allow threats to human rights arising from certified ‘fair’ AI to pass unnoticed.

Towards Multi-Actor Governance: In Five Practical Steps — John Higgins and Paul MacDonnell, October 22, 2020

This is a practical five-step proposal to develop an inter-organisational governance framework, a Multi-Actor Governance Framework (MAGF), for companies in the AI supply chain, including chipset manufacturers, AI software developers, data providers, algorithm builders, technology integrators, cloud-service providers, and hardware/device manufacturers.

Artificial Intelligence and Machine Learning: An Introduction for Policymakers — Paul MacDonnell, July 4, 2017

For most people, machines that can think and act on their own have, until now, been futurist fantasy. Fritz Lang's Metropolis (1927), Stanley Kubrick's 2001: A Space Odyssey (1968), Steven Spielberg's Minority Report (2002), and Alex Proyas' I, Robot (2004) have, along with many other creative works, variously portrayed fictive worlds profoundly altered by Artificial Intelligence and, especially, automata. The roots of these vivid tales reach down to a bedrock of Judeo-Christian folklore and Greek mythology from which, at least since the Middle Ages, have grown parables warning of the danger that comes from taking the place of the Creator. One such parable inhabits medieval Jewish folklore: the golem, an automaton-protector made from mud which, in one story prefiguring Mary Shelley's Frankenstein, runs amok. As technology has evolved, stories about the ambitions of its creators that end in tragedy have evolved with it, lasting well into an age where an unchallenged scientific secularism rules our intellectual and moral worlds. Is this a residue of superstition in an enlightened age, or a moral symbiosis? And if the latter, is its lesson that science should split the difference with superstition, or that the humanities and religion, along with science, should retain this perspective: that good and evil live in man and not in his machines?