Assurance in the AI value chain

The production and distribution of AI systems are complex processes. AI systems are generally developed through the collaboration of many actors within a value chain rather than ‘in-house’ by a single entity. Though these value chains may be arranged in different ways, a typical configuration is shown in the diagram below.

Diagram showing a typical simple value chain configuration. Taken from ‘MAGF – A Call and Proposal for Assurance Information Sharing Standards’, MAGF Working Group, January 2024

In many cases, actors operate with incomplete information about the AI products and services they are involved in producing, distributing, and using. Downstream actors responsible for deployment often lack access to the models they depend on, and there are few incentives or mechanisms for making information about those models’ performance available. At the same time, upstream actors may lack the information about downstream use cases and application contexts that they would need to anticipate possible harms. These information asymmetries make it difficult for any actor to carry out comprehensive and reliable risk assessments.

The Multi Actor Governance Framework (MAGF)

To address this problem, we proposed the Multi-Actor Governance Framework (MAGF) to support the flow of assurance information between actors in the AI value chain. We outline this approach in our 2024 MAGF white paper, which identifies the actors involved and describes their roles and responsibilities, based on a taxonomy derived from international standards and the definitions in the EU AI Act.

We are delighted and proud that our work on transparency through the value chain, and on frameworks to support information sharing, has raised the profile of this issue. Early drafts of the European AI Act made no reference to the value chain; clauses addressing it have since been added, as set out below.

References to the Value Chain in the AI Act

Article 25 of the AI Act sets out "Responsibilities along the AI value chain". These include duties of information sharing aimed at enabling fulfilment of the obligations set out in the Regulation.

As further specified in Recital 85:

(85) General-purpose AI systems may be used as high-risk AI systems by themselves or be components of other high-risk AI systems. Therefore, due to their particular nature and in order to ensure a fair sharing of responsibilities along the AI value chain, the providers of such systems should, irrespective of whether they may be used as high-risk AI systems as such by other providers or as components of high-risk AI systems and unless provided otherwise under this Regulation, closely cooperate with the providers of the relevant high-risk AI systems to enable their compliance with the relevant obligations under this Regulation and with the competent authorities established under this Regulation.

See also Recitals 88 and 89 on information sharing:

(88) Along the AI value chain multiple parties often supply AI systems, tools and services but also components or processes that are incorporated by the provider into the AI system with various objectives, including the model training, model retraining, model testing and evaluation, integration into software, or other aspects of model development. Those parties have an important role to play in the value chain towards the provider of the high-risk AI system into which their AI systems, tools, services, components or processes are integrated, and should provide by written agreement this provider with the necessary information, capabilities, technical access and other assistance based on the generally acknowledged state of the art, in order to enable the provider to fully comply with the obligations set out in this Regulation, without compromising their own intellectual property rights or trade secrets.

(89) Third parties making accessible to the public tools, services, processes, or AI components other than general-purpose AI models, should not be mandated to comply with requirements targeting the responsibilities along the AI value chain, in particular towards the provider that has used or integrated them, when those tools, services, processes, or AI components are made accessible under a free and open-source licence. Developers of free and open-source tools, services, processes, or AI components other than general-purpose AI models should be encouraged to implement widely adopted documentation practices, such as model cards and data sheets, as a way to accelerate information sharing along the AI value chain, allowing the promotion of trustworthy AI systems in the Union.
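Recital 89 points to model cards and data sheets as documentation practices that accelerate information sharing along the value chain. As a minimal sketch of what a machine-readable model card might contain, the example below uses a Python dataclass; the field names are our own illustration, not a standardised schema from the AI Act or any model card specification.

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelCard:
    """Illustrative model card; field names are our own, not a standard schema."""
    model_name: str
    version: str
    provider: str
    intended_uses: list[str] = field(default_factory=list)
    out_of_scope_uses: list[str] = field(default_factory=list)
    training_data_summary: str = ""
    evaluation_results: dict[str, float] = field(default_factory=dict)
    known_limitations: list[str] = field(default_factory=list)

    def to_json(self) -> str:
        # Serialise for sharing with downstream actors in the value chain.
        return json.dumps(asdict(self), indent=2)

# Hypothetical example of an upstream provider documenting a model
# for a downstream deployer.
card = ModelCard(
    model_name="example-classifier",
    version="1.0",
    provider="Upstream Model Lab",
    intended_uses=["document triage"],
    out_of_scope_uses=["medical diagnosis"],
    training_data_summary="Public web text, filtered; see accompanying data sheet.",
    evaluation_results={"accuracy": 0.91},
    known_limitations=["Performance degrades on non-English input"],
)
print(card.to_json())
```

The value of a structured record like this is that downstream providers can check intended and out-of-scope uses against their own deployment context, directly supporting the risk-assessment needs described above.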