
August 2025

AI Governance around the world


Country Profile: European Union
Arcangelo Leone de Castris

About The Alan Turing Institute
The Alan Turing Institute is the UK’s national institute for data science and artificial
intelligence.

Authors
Arcangelo Leone de Castris is Research Manager in AI Governance at The Alan Turing
Institute.

Acknowledgements
The author gratefully acknowledges the contributions of Matthieu Binder and Franziska
Busse (Zentrum für Vertrauenswürdige KI) for their expert review of this draft.

Citation
Leone de Castris, A. (2025). AI Governance around the world: European Union. The
Alan Turing Institute. https://doi.org/10.5281/zenodo.16779366

About the project

Introduction
The international AI governance landscape continues to evolve as jurisdictions around
the world balance the need to navigate increased competition with earlier commitments
to advancing international alignment and cooperation on AI governance and safety.
Many jurisdictions have recognised the strategic implications of AI, both for economic
prosperity and national security, and accelerated investment in domestic AI
infrastructure has begun to engender a more competitive international environment.
However, the role of international trade and global cooperation remains crucial in
realising the economic benefits of AI while effectively managing the technology’s risks.

As high-level AI principles are translated into concrete policies and distinct regulatory
frameworks, jurisdictions are managing the tensions between competitive and
cooperative priorities. Meaningful progress towards effective international AI
governance will require a clear and detailed understanding of how different countries
are approaching AI governance in practice. Tracking primary source policy initiatives
over the past decade, the AI governance around the world project sheds light on shifting
national priorities on AI, dynamics of cooperation and competition, and potential areas
of international alignment.

Project aims
The AI governance around the world project seeks to map this evolving landscape
through a series of country profiles based on a consistent framework. Each profile
provides a descriptive overview of a jurisdiction’s approach to AI regulation and
standardisation, highlighting the high-level aims and principles, definitions of relevant
technologies, key policy initiatives and the main features of the respective
standardisation systems. Drawing exclusively on primary sources, the country profiles
analyse legal and policy instruments, national strategies, standardisation initiatives, and
public investments that reflect the varied and evolving approaches that different
jurisdictions are taking to AI governance.

This series offers a foundation for comparative analysis and future work on global
regulatory interoperability without commenting on the efficacy of the specific governance
models being adopted at the jurisdictional level. As more profiles are developed, the
project aims to contribute to international understanding of where alignment is emerging
and where deeper coordination may be needed.

Contents

Executive summary
Regulatory approach to AI
    High-level aims and principles
    Definitions of relevant technologies
    Key policy initiatives
Approach to AI standardisation
    Main features of the standardisation system
    European AI-focused standardisation activities
    Engagement in international AI standardisation

Executive summary

The European Union (EU) is taking a ‘harder’ AI governance approach than most
other jurisdictions, setting up legally binding requirements to ensure AI
technologies are developed and used in a safe and responsible way, minimising
the risk of harm to the health, safety, or fundamental rights of EU citizens. The
rules for AI products and services that the EU is developing will have implications
for companies well beyond the Union’s borders. They will apply to products and
services that are commercialised within the European market, as well as to those
that, even if not introduced in the EU market, have an impact on natural persons
residing in the EU.

Standards are set to play a central role in the EU’s regulatory framework for AI
technologies. Developed by the European Standards Organisations (ESOs) on the
basis of a request by the European Commission, European harmonised standards will
provide the technical specifications necessary to support the implementation of
the EU AI Act. As with most standards, the adoption of European harmonised
standards is voluntary. However, companies that adopt them will benefit from a
presumption of conformity with specific EU AI Act requirements.

Key policy initiatives

Apr 2018: Artificial Intelligence for Europe. First European AI strategy.

Apr 2019: Ethics Guidelines for Trustworthy AI. Voluntary framework defining values for responsible AI.

Apr 2021: Proposal for the EU AI Act. First draft proposal for a regulation laying down harmonised rules on AI.

Feb 2024: Creation of the EU AI Office, the primary body responsible for the implementation of the EU AI Act.

Aug 2024: The EU AI Act enters into force (1 August 2024).

Sep 2024: AI Pact. Voluntary pledge by companies to start applying the principles of the EU AI Act before they become enforceable.

Apr 2025: AI Continent Action Plan. Comprehensive AI strategy with the objective of positioning the EU as a global leader in AI.

Aug 2025: GPAI Code of Practice. Voluntary framework to help providers of GPAI models comply with Articles 53 and 55 of the EU AI Act.

Regulatory approach to AI

High-level aims and principles


In Artificial Intelligence for Europe, the EU’s first AI strategy, the European
Commission set three overarching objectives: boost AI uptake across the economy to
strengthen the EU's technological and industrial capacity; address the socio-economic
challenges brought about by AI technologies; and develop an appropriate ethical and
legal framework to enable the development of trustworthy AI.1

These broad strategic objectives are reflected in the EU AI Act, which aims to ensure
the proper functioning of the single market by creating the conditions for the
development and use of trustworthy artificial intelligence in the Union. This includes
developing an effective governance infrastructure that can support the development
and use of lawful, safe, and trustworthy AI systems while also ensuring these
technologies can contribute to innovation and economic growth.

The EU’s approach to governing AI technologies builds on a set of ethical principles
first presented by the EU’s High-Level Expert Group (HLEG) in its Ethics Guidelines
for Trustworthy AI2 and later integrated in the EU AI Act. These include data
governance and quality; traceability and technical documentation; transparency and
provision of information; human oversight; and accuracy, robustness, and security.
These principles also provide the foundation of the ten standardisation deliverables
that the European Committee for Standardisation (CEN) and the European Committee
for Electrotechnical Standardisation (CENELEC) have been tasked with developing,
which providers and users of AI systems will be able to use to demonstrate
compliance with the EU AI Act’s requirements.3

In April 2025, the European Commission published a new strategy to accelerate the
development and adoption of AI throughout the EU economy. The AI Continent Action
Plan4 focuses on five areas of intervention: 1) building large-scale AI infrastructure;
2) increasing access to high-quality data; 3) supporting AI adoption in strategic industry
sectors; 4) strengthening AI skills and attracting AI talent; and 5) simplifying the
implementation of the EU AI Act. The EU’s new AI strategy remains largely consistent
with the ethical values and principles put forth in previous policy initiatives while
refocusing the Commission’s policy priorities towards technical research, industry
applications, and AI infrastructure.

1 European Commission, ‘Artificial Intelligence for Europe’, April 2018, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM:2018:237:FIN.
2 High-Level Expert Group on AI, ‘Ethics Guidelines for Trustworthy AI’, April 2019, https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai.
3 European Commission, ‘Commission Implementing Decision on a standardisation request to the European Committee for Standardisation and the European Committee for Electrotechnical Standardisation in support of Union policy on artificial intelligence’, May 2023, https://ec.europa.eu/transparency/documents-register/detail?ref=C(2023)3215&lang=en.

Definitions of relevant technologies


Based on Art. 3(1) of the EU AI Act, an AI system is “a machine-based system
designed to operate with varying levels of autonomy and that may exhibit
adaptiveness after deployment and that, for explicit or implicit objectives, infers, from
the input it receives, how to generate outputs such as predictions, content,
recommendations, or decisions that can influence physical or virtual environments.”5
Compared to the definition proposed by the European Commission in the first draft of
the EU AI Act in 2021,6 the definition adopted in the final text of the regulation is more
streamlined and specific, explicitly adding the elements of autonomy and adaptiveness
as distinctive features of AI systems. This definition also aligns with the definition of ‘AI
system’ provided by the OECD.7
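
Read closely, the Art. 3(1) definition combines several cumulative elements, with adaptiveness after deployment framed as optional (‘may exhibit’) rather than necessary. The following Python sketch is a purely illustrative paraphrase of those elements; the field names are assumptions chosen for exposition, and the function is not a legal test.

```python
from dataclasses import dataclass

@dataclass
class AISystemElements:
    """Paraphrased definitional elements of 'AI system' under Art. 3(1)
    of the EU AI Act. Illustrative only; not a legal test."""
    machine_based: bool                      # "a machine-based system"
    some_autonomy: bool                      # "designed to operate with varying levels of autonomy"
    infers_outputs_from_input: bool          # "infers, from the input it receives, how to generate outputs"
    outputs_influence_environments: bool     # outputs "can influence physical or virtual environments"
    adaptive_after_deployment: bool = False  # "may exhibit adaptiveness": optional, not required

def meets_definition(e: AISystemElements) -> bool:
    """All necessary elements must hold; adaptiveness after deployment is
    framed as optional in the definition and so is not checked here."""
    return all([
        e.machine_based,
        e.some_autonomy,
        e.infers_outputs_from_input,
        e.outputs_influence_environments,
    ])

# Example: a fixed, fully rule-based lookup system arguably fails the
# inference element regardless of its autonomy.
rule_based = AISystemElements(
    machine_based=True,
    some_autonomy=True,
    infers_outputs_from_input=False,
    outputs_influence_environments=True,
)
print(meets_definition(rule_based))  # False
```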

In February 2025, the European Commission published a set of guidelines to clarify
and support the practical application of the legal definition of ‘AI system’ under Art.
3(1) of the EU AI Act. These guidelines provided concrete examples of technologies
that fall under the scope of the regulation and helped clarify some areas of ambiguity.
Nevertheless, concerns have been raised that parts of the guidelines may effectively
narrow the scope of Art. 3(1), particularly those introducing exemptions for ‘simple
prediction systems’ and ‘systems improving mathematical optimisation’.

Key policy initiatives


The EU has been an early mover in AI regulation. The centrepiece of the European
Commission’s approach is the Artificial Intelligence Act (EU AI Act), a comprehensive
regulation laying out rules to ensure the proper functioning of the single European
market for AI. The EU’s broader digital regulatory ecosystem also includes other laws
that influence the use and commercialisation of AI technologies. These include the
Data Act, the Data Governance Act, the Digital Markets Act, the Digital Services Act,
the General Data Protection Regulation, and the Product Liability Directive. While
these laws are not AI-specific and, as such, will not be analysed in detail here, it is
important to acknowledge their role in the European Commission’s overall approach to
AI governance.

4 European Commission, ‘AI Continent Action Plan’, April 2025, https://commission.europa.eu/topics/eu-competitiveness/ai-continent_en.
5 Regulation (EU) 2024/1689.
6 Based on Art. 3 of the original proposal, “‘artificial intelligence system’ (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.” Source: COM(2021)0206.
7 OECD, ‘Updates to the OECD’s definition of an AI system explained’, November 2023, https://oecd.ai/en/wonk/ai-system-definition-update.

Several new initiatives are also planned for the second half of 2025 and into 2026. By
the end of 2025, the European Commission is set to publish two new strategic
documents: the Apply AI Strategy, which will focus on leveraging AI’s potential in
target sectors and for the delivery of public services; and the European Strategy for AI
in Science, which will promote the responsible use of AI in research and innovation. In
addition, the European Commission is expected to introduce the Cloud and AI
Development Act by the first quarter of 2026, a regulation aimed at incentivising
investment in cloud and edge capacity.

Horizontal initiatives
Artificial Intelligence Act

The EU AI Act8 is the flagship EU regulation on AI technologies. The Act entered into
force on 1 August 2024 and lays out a risk-based regulatory framework for the
development, placing on the market, and use of AI technologies in the EU.

8 Regulation (EU) 2024/1689.

The EU AI Act includes provisions regulating both AI systems and general-purpose AI
(GPAI) models. AI systems are subject to proportionally stringent requirements based
on the level of risk related to their ‘intended use’, meaning the use for which they are
intended by providers. Uses of AI systems that pose an unacceptable risk of violating
fundamental rights and contravening EU values are prohibited. These include AI
applications that can cause significant harm to persons or groups of persons by:
manipulating someone’s behaviour; exploiting the vulnerabilities of a group of persons
based on their age, physical or mental disability, or socioeconomic status; scoring
persons based on their social behaviour; predicting the risk of persons committing a
crime based solely on their personality traits; inferring a person’s emotions in the
workplace or in education institutions; or identifying people in real time through the
analysis of biometric data in public spaces. AI applications that pose a high risk,
meaning a significant risk to the health, safety or fundamental rights of individuals, have
to comply with a stringent set of requirements. These range from deploying a risk
management system and data management processes, to enabling sufficient
transparency, ensuring accuracy, robustness, and cybersecurity, ensuring human
oversight, and automatic record-keeping. Providers of high-risk systems will also be
required to establish a quality management system through which they can demonstrate
the compliance of their systems with these requirements. Only AI systems whose
compliance with these requirements is certified can be commercialised in the EU
market. The AI Act also introduces obligations for AI applications that pose a specific
transparency risk. For example, it is mandatory to ensure natural persons are aware
that they are interacting directly with an AI system or consuming synthetically generated
content (e.g. images, text, audio, or video). Finally, AI systems intended for any other
use are considered to generate only a minimal risk and are not subject to mandatory
requirements.
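
The tiered structure described above can be summarised as a simple decision ladder. The sketch below is an illustrative simplification only: the example uses and their tier assignments are hypothetical, and the legal classification turns on Art. 5, Art. 6, and the Act’s annexes rather than on keyword lookup.

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified labels for the EU AI Act's four risk tiers."""
    PROHIBITED = "unacceptable risk: banned from the EU market"
    HIGH = "high risk: strict requirements and conformity assessment"
    TRANSPARENCY = "specific transparency risk: disclosure obligations"
    MINIMAL = "minimal risk: no mandatory requirements"

# Illustrative, non-exhaustive mapping from intended use to tier.
# Hypothetical examples only: the legal classification depends on
# Art. 5, Art. 6, and the Act's annexes, not on keyword lookup.
EXAMPLE_USES = {
    "social scoring of natural persons": RiskTier.PROHIBITED,
    "real-time remote biometric identification in public spaces": RiskTier.PROHIBITED,
    "CV screening for recruitment decisions": RiskTier.HIGH,
    "safety component of critical infrastructure": RiskTier.HIGH,
    "customer-facing chatbot": RiskTier.TRANSPARENCY,
    "spam filtering": RiskTier.MINIMAL,
}

def classify(intended_use: str) -> RiskTier:
    """Default to the residual tier: uses that are not prohibited, high-risk,
    or transparency-relevant are treated as minimal risk under the Act."""
    return EXAMPLE_USES.get(intended_use, RiskTier.MINIMAL)

if __name__ == "__main__":
    for use, tier in EXAMPLE_USES.items():
        print(f"{use} -> {tier.name}")
```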

The EU AI Act also lays out obligations for GPAI model providers. These include sharing
specific types of information as part of the systems’ technical documentation and
ensuring that downstream providers and deployers have the necessary information and
support to use the system in compliance with the EU AI Act. More stringent obligations
exist for providers of GPAI models that can pose systemic risks, i.e. models that can
have a significant impact on the Union market due to their reach, or due to actual or
reasonably foreseeable negative effects on public health, safety, public security,
fundamental rights, or society as a whole, that can be propagated at scale across
the value chain. Providers of GPAI models with systemic risk shall perform model
evaluations in accordance with standardised protocols and tools, assess and mitigate
possible systemic risks at the Union level, report serious incidents to the AI Office, and
ensure adequate levels of cybersecurity protection for the models and their
infrastructure.

Crucially, the EU AI Act will have significant implications for organisations even beyond
the EU’s borders. Providers that commercialise AI systems in the EU market will fall
under the scope of the regulation irrespective of their place of establishment. And even
for deployers, which are otherwise affected by the EU AI Act’s obligations only when
established within the EU, the regulation will apply if the AI systems that they use can
affect natural persons residing in the EU. As such, the scope of application of the AI Act
is very broad. Whether the regulation will be implemented as broadly as it potentially
can be, however, remains an open question.

A key challenge for the success of the EU AI Act is how well the regulation will be
implemented and enforced. This will require a combination of legal and technical skills,
as well as substantive efforts from the EU institutions and the various national bodies
responsible for enforcing the regulation. An array of new institutional bodies has been
tasked with various implementation and enforcement responsibilities. At the Union level,
the European Commission and its AI Office are responsible for most implementation
and enforcement functions. They are supported in this role by the AI Board, which
represents Member States; the Advisory Forum, which brings together various
stakeholder groups to offer technical expertise and support; and the Scientific Panel of
Independent Experts, which will advise the European Commission and national
authorities on systemic risks, model classification, and evaluation methodologies for
GPAI models. The AI Office is also setting up the AI Act Service Desk to help
stakeholders understand what obligations they are subject to and help them comply,
including by responding to ad hoc queries. At the national level, Member States are
required to designate national competent authorities, which include at least one notifying
authority and one market surveillance authority.

Different bodies are also developing resources to support and facilitate compliance with
the EU AI Act. For instance, European Standards Organisations are developing
standards to specify the technical requirements needed to comply with Section 2 of the
EU AI Act; the European Commission’s AI Office led a multi-stakeholder initiative to draft
a General-Purpose AI Code of Practice, which was adopted in August 2025 and
supports providers of GPAI models in complying with the relevant EU AI Act
requirements; and the European Commission has also been publishing a series of
guidelines and delegated and implementing acts to facilitate the interpretation and
application of specific EU AI Act provisions. In February 2025, the Commission published
two sets of guidelines to clarify the definition of ‘AI system’ and specify which AI
practices shall be considered prohibited under Art. 5 of the EU AI Act.9 More recently, it
published guidelines on the scope of the obligations for GPAI model providers.10

General Purpose AI Code of Practice

The General-Purpose AI Code of Practice was developed to help providers of GPAI
models comply with the requirements on safety, transparency, and copyright set by
Articles 53 and 55 of the EU AI Act, which became applicable on 2 August 2025.
Signatories of the Code will be able to demonstrate compliance with the relevant
provisions of the EU AI Act by adhering to the voluntary measures described in the
Code. Organisations that decline to sign it, by contrast, will be required to demonstrate
compliance through alternative and likely more burdensome processes.

9 European Commission, ‘Guidelines on prohibited artificial intelligence practices established by Regulation (EU) 2024/1689 (AI Act)’, February 2025, https://digital-strategy.ec.europa.eu/en/library/commission-publishes-guidelines-prohibited-artificial-intelligence-ai-practices-defined-ai-act.
10 European Commission, ‘Guidelines on the scope of obligations for providers of general-purpose AI models under the AI Act’, July 2025, https://digital-strategy.ec.europa.eu/en/library/guidelines-scope-obligations-providers-general-purpose-ai-models-under-ai-act.

Published on 10 July 2025 and approved by the Commission on 1 August 2025,11 the
Code was prepared by a team of independent experts through a multi-stakeholder
process bringing together more than 1,000 organisations from industry, civil society, and
the scientific community. The Code consists of three chapters: Transparency,
Copyright, and Safety and Security. The Transparency Chapter offers a form that GPAI
providers can use to share information that should be made available to the AI Office,
national competent authorities, and downstream providers. The Copyright Chapter
provides guidance on how to comply with EU law on copyright and intellectual property.
Unlike the first two chapters, which apply to all GPAI model providers, the final chapter
on Safety and Security only applies to the small subset of organisations that provide
GPAI models with systemic risk under Art. 55. The third chapter details a series of
measures that providers of GPAI models with systemic risk shall adopt to identify,
measure, and control systemic risks. Finally, a guiding principle cutting across all three
chapters is the importance of ethical AI development. In this sense, GPAI providers are
encouraged to reflect on the ethical impact of their models and to ensure that their use
aligns with broader societal norms and values.

At the time of writing, some of the main organisations that signed up to the Code include
OpenAI, Google, Microsoft, Anthropic, Mistral AI, IBM, Amazon, Cohere, and Aleph
Alpha. xAI only signed up to the Safety and Security Chapter, implying it will
demonstrate compliance with the remaining obligations of the EU AI Act via alternative
means, while Meta officially refused to sign it.12

EU AI Pact

Soon after the entry into force of the EU AI Act, the European Commission published
the EU AI Pact13, a voluntary pledge by multinational corporations and European SMEs
to start applying the principles of the EU AI Act before they become enforceable.
Signatories to the EU AI Pact committed to developing and implementing an AI
governance strategy for AI adoption within their organisations and starting to work
towards compliance with the EU AI Act; mapping systems that are likely to fall under the
definition of ‘high-risk’ AI systems under the EU AI Act; and promoting AI literacy within
their organisations. More than half the signatories also committed to additional actions

11 European Commission, ’Commission Opinion on the assessment of the General-Purpose AI Code of Practice’,
August 2024, https://digital-strategy.ec.europa.eu/en/library/commission-opinion-assessment-general-purpose-ai-
code-practice.
12 Ibid.
13 European Commission, ‘EU AI Pact’, September 2024, https://digital-strategy.ec.europa.eu/en/policies/ai-pact.

12
on requirements introduced by the AI Act concerning high-risk AI systems and
transparency obligations for GPAI providers.

Data-related legislation

The EU has multiple legislative instruments relating to data governance, data protection,
and data sharing that will impact the development and deployment of AI systems. These
include the General Data Protection Regulation (GDPR), the Data Governance Act, the
Data Act and the Open Data Directive.

The GDPR is described by the EU institutions as “the strongest privacy and security law
in the world”,14 and defines individuals’ fundamental rights, the obligations of those
processing data, methods for ensuring compliance and sanctions for those in breach of
the rules. The Data Governance Act, Data Act and Open Data Directive aim to make
more data available for use and strengthen data sharing mechanisms, widening the
scope of AI systems that can be developed.

Vertical initiatives
While the EU AI Act should be primarily viewed as a horizontal piece of legislation, it
contains many sector-specific provisions. These aim to ensure consistency between the
AI Act and existing sectoral legislation and, in some cases, provide new requirements
based on the unique risks of deploying AI systems in specific sectors. The clearest
example of this vertical approach is the classification of AI systems as ‘high-risk’ when
they are intended to be used in ways that pose a risk of harm to health, safety, or
fundamental rights in specific sectors, such as education and vocational training, critical
infrastructure management, or the biometric identification of natural persons.

Approach to AI standardisation

Main features of the standardisation system


European standardisation is a unique system: there are standardisation structures and
processes that operate at both national and regional (European) levels, and around 30%
of European standards are adopted under the guidance of the European Commission,
giving them a special status in the EU legal system.

14 Council of the EU and the European Council, ‘The general data protection regulation’, 2023, https://www.consilium.europa.eu/en/policies/data-protection/data-protection-regulation/.

The Commission has the twofold power to set the strategic priorities of European
standardisation through the annual Union work programme and initiate the process that
leads to developing European harmonised standards, i.e. European standards “adopted
on the basis of a request made by the Commission for the application of Union
harmonisation legislation”.15 Harmonised standards differ from other European
standards due to their distinctive legislative function in providing technical specifications
to support the implementation of EU legislation. While harmonised standards are
technically voluntary, their adoption grants a presumption of legal conformity with related
EU legislation. Additionally, when a harmonised standard is adopted, National
Standards Bodies must withdraw any conflicting national standard within a “reasonable
deadline”.16

The rationale behind the Commission’s direct involvement in European standardisation
processes lies in the key role that standards play in harmonising EU legislation across
Member States. For instance, harmonised standards are instrumental to the
construction of a healthy internal market: the system of harmonised standards was first
established as a way to specify quality and safety requirements of products and services
traded within the EU. Additionally, standards can play a role in fostering the
development of European businesses, providing value to consumers by enhancing
product quality, interoperability and the circulation of information, and supporting
international trade.

As already mentioned, the Commission collaborates closely with the ESOs: CEN,
CENELEC, and the European Telecommunications Standards Institute (ETSI). They
are membership-based associations officially recognised by Regulation (EU) No
1025/2012 as the providers of European standards in specific sectors.

In addition to working under the Commission’s guidance to support the implementation
of European legislation, the ESOs also develop European standards that are requested
by their own members. Membership in CEN and CENELEC is restricted to National
Standards Bodies; stakeholders who wish to participate in their activities can only do so
through the respective National Standards Bodies. ETSI, by contrast, is open to the
direct participation of industry organisations.

15 Regulation (EU) No 1025/2012, Art. 2.
16 Ibid., Art. 3.

National Standards Bodies are the one-stop shop for all stakeholders interested in
partaking in either European or international standardisation. These include industry
associations, businesses, academia, and civil society representatives.

European AI-focused standardisation activities


EU standardisation for AI technologies is carried out by a Joint Technical Committee
(CEN-CENELEC/JTC 21) established following the publication of the European
Commission’s White Paper on AI. With more than 130 participants,
CEN-CENELEC/JTC 21 is one of the largest committees among all ESOs. Its core
mission is to develop standards to ensure the development of safe and trustworthy AI
systems aligned with EU values.

Following a formal request by the European Commission,17 CEN-CENELEC/JTC 21 is
developing harmonised standards to support the implementation of the EU AI Act.
These standards address a wide range of topics, including risk management and
conformity assessment procedures, organisational governance, data quality,
transparency, accuracy, robustness, and cybersecurity specifications for AI systems.
They will define the technical specifications needed to meet the key requirements of
the EU AI Act and will grant organisations that adopt them a presumption of conformity
with the relevant legal provisions. Based on the European Commission’s original
standardisation request, these standards should have been adopted by 31 January
2025. The deadline was later postponed to August 2025 to accommodate delays in
the adoption of the EU AI Act. In spring 2025, however, CEN-CENELEC announced
that the standards supporting the implementation of the EU AI Act had been further
delayed and will likely be published in the summer of 2026.

Engagement in international AI standardisation


As stated by the European Commission, shaping international standards is a key priority
for the EU, in line with the goals of promoting global competitiveness and fostering
international trade. The European Commission specifies the strategic objectives for
international standardisation in its annual Union work programme and actively
promotes collaboration between the ESOs and International Standardisation Bodies
(ISBs) such as IEC, ISO, and ITU.

17 European Commission, ‘Commission Implementing Decision on a Standardisation Request to the European Committee for Standardisation and the European Committee for Electrotechnical Standardisation in Support of Union Policy on Artificial Intelligence’, May 2023, https://ec.europa.eu/growth/tools-databases/enorm/mandate/593_en.

For its part, the EU often transposes international standards into European standards
when these sufficiently meet European needs and interests. One of the key roles of
CEN-CENELEC/JTC 21 is to identify opportunities for alignment with international
standards in the context of the EU AI Act. While this work is still ongoing,
CEN-CENELEC/JTC 21 seems motivated to draw on existing standards wherever
possible in order to meet the stringent deadline set in the standardisation request
issued by the European Commission.
