Oxford - 2024
https://doi.org/10.1093/ijlit/eaae021
Advance access publication 14 September 2024
Article
ABSTRACT
In Europe, the governance discourse surrounding artificial intelligence (AI) has been predomi-
nantly centred on the AI Act, with a proliferation of books, certification courses, and discussions
emerging even before its adoption. This narrow focus has overshadowed other crucial regulatory
interventions that promise to fundamentally shape AI. This article highlights the proposed EU AI
liability directive (AILD), the first attempt to harmonize general tort law in response to AI-related
threats, addressing critical issues such as evidence discovery and causal links. As AI risks prolifer-
ate, this article argues for the necessity of a responsive system to adequately address AI harms as
they arise. AI safety and responsible AI, central themes in current regulatory discussions, must be
prioritized, with ex-post liability in tort playing a crucial role in achieving these objectives. This is
particularly pertinent as AI systems become more autonomous and unpredictable, rendering the
ex-ante risk assessments mandated by the AI Act insufficient. The AILD’s focus on fault and its lim-
ited scope is also inadequate. The proposed easing of the burden of proof for victims of AI, through
enhanced discovery rules and presumptions of causal links, is insufficient in a context where Large
Language Models exhibit unpredictable behaviours and humans increasingly rely on autonomous
agents for complex tasks. Moreover, the AILD’s reliance on the concept of risk, inherited from the
AI Act, is misplaced, as tort liability intervenes after the risk has materialized. However, the inher-
ent risks in AI systems could justify EU harmonization of AI torts in the direction of strict liability.
Bridging the liability gap will enhance AI safety and responsibility, better protect individuals from
AI harms, and ensure that tort law remains a vital regulatory tool.
*
Guido Noto La Diega, Professor of Law, Technology and Innovation at the University of Strathclyde, Glasgow, where they
lead the LLM/MSc Law, Technology and Innovation, and the namesake research theme. School of Law, University of Strathclyde,
The Lord Hope Building, 141 St James Rd, Glasgow G4 0LT, United Kingdom. Tel: +441414448427. Email: guido.notoladiega@
strath.ac.uk.
†
Leonardo Teonacio Bezerra, Lecturer in A.I/Data Science at the University of Stirling. Division of Computing Science and
Mathematics, University of Stirling, Cottrell Building, Stirling FK9 4LA, United Kingdom. Tel: +44176467421. Email: leonardo.
bezerra@stir.ac.uk. This is a genuinely collaborative work; Noto La Diega is responsible for Sections 1, 3, 4, 5 and Bezerra for Section 2.
INTRODUCTION
For many years, the artificial intelligence (AI) governance discourse has been dominated
1
Even the US and the UK—whose neoliberal approach has always meant a preference for deregulation and self-
regulation—have recently, albeit timidly, embraced a more top-down approach. In October 2023, the President of the
USA issued the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. In
February 2024, the UK backtracked on its initial plans to leave it to AI businesses to self-regulate, and set out to introduce
binding requirements for developers of highly capable general-purpose AI models to ensure their safety (Department
for Science, Innovation & Technology, ‘A Pro-Innovation Approach to AI Regulation—Government Response to
Consultation’ (2024) CP 1019, 4).
2
For example, it seems clear that the UK approach to AI governance is dominated by the pursuit of safety (see eg the afore-
mentioned legislative initiatives as well as the AI Safety Summit). One could speculate that, whereas ethics and fundamental
rights have become increasingly contentious, safety is a…safer notion, as one could hardly imagine any argument in favour of
unsafe AI. We should be careful, however, as the single-minded focus on safety may lead to overlooking the wider societal risks
associated with this technology.
3
As we write, the Regulation of the European Parliament and of the Council laying down harmonized rules on Artificial
Intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU)
2018/1139, and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797, and (EU) 2020/1828 (Artificial Intelligence
Act) has not been published in the Official Journal. The final text is however available at <https://data.consilium.europa.eu/doc/
document/PE-24-2024-INIT/en/pdf> accessed 17 June 2024.
4
Tycho de Graaf and Gitta Veldt, ‘The AI Act and Its Impact on Product Safety, Contracts and Liability’ (2022) 30 Eur Rev
Private Law 803. This will be a lex specialis in relation to the new Regulation (EU) 2023/988 of the European Parliament and of
the Council of 10 May 2023 on general product safety, amending Regulation (EU) No 1025/2012 of the European Parliament
and of the Council and Directive (EU) 2020/1828 of the European Parliament and the Council, and repealing Directive
2001/95/EC of the European Parliament and of the Council and Council Directive 87/357/EEC [2023] OJ L 135/1.
5
‘Artificial Intelligence Act: MEPs Adopt Landmark Law’ (European Parliament, 13 March 2024) <https://www.europarl.
europa.eu/news/en/press-room/20240308IPR19015/artificial-intelligence-act-meps-adopt-landmark-law> accessed 15 March
2024.
6
Other binding laws already existed. Notably, China adopted a number of regulations, including the Generative AI Regulation,
which came into force on 15th August 2023 (see Jing Cheng and Jinghan Zeng, ‘Shaping AI’s Future? China in Global AI
Governance’ (2023) 32 Journal of Contemporary China 794.
7
For many years, technological neutrality has been one of the constitutional principles in the field of Internet governance.
While not immune to criticism (eg Chris Reed, ‘Taking Sides on Technology Neutrality’ (2007) 4 SCRIPTed 263), this approach
has so far enabled the law to retain the flexibility required to remain relevant despite the pace of technology development (see
Joshua AT Fairfield, Runaway Technology: Can Law Keep Up? (Cambridge University Press, Cambridge (UK) 2021)). It remains
to be seen if the same can be said of this new generation of technology-specific law.
8
Guido Noto La Diega and Christof Koolen, ‘Generative AI, Education, and Copyright Law: An Empirical Study of
Policymaking in UK Universities’ 2024 EIPR.
9
Martin Kretschmer and others, ‘The Risks of Risk-Based AI Regulation: Taking Liability Seriously’ (2023) CEPR Discussion Paper No 18517, CEPR Press, Paris & London 10 <https://cepr.org/publications/dp18517> accessed 15 March
2024.
the AI Act can be seen as a codification of—and can be complemented and operationalized
by—‘responsible AI’10 initiatives, ie bottom-up initiatives normally backed by national or
transnational governance bodies aimed at ‘providing concrete recommendations, standards
and policy suggestions to support the development, deployment and use of AI systems’11.
Responsible AI requires robust forward-looking governance, and at its core there must be
questions of who should be liable if AI harms humans and under which circumstances.12 We
posit that there can be no responsible AI without AI liability. There can also be no AI safety
without AI liability, ie a clear and comprehensive liability framework for AI, one that would
10
To a large extent, this phrase corresponds to the idea of trustworthy AI, which is more common in the EU. Among the rea-
sons to prefer the former to the latter, ‘trustworthy AI’ is usually linked to ethics, which is a contentious area, and ‘trustworthiness’
is a slippery concept as it refers to a particular kind of behaviour that is considered to be good when it is displayed by individuals
or organizations. See Charlotte Stix, ‘Artificial Intelligence by Any Other Name: A Brief History of the Conceptualization of
“Trustworthy Artificial Intelligence”’ (2022) 2 Disc Artif Intell 26, para 2.4.
11
Virginia Dignum, Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way (Springer, Cham
(Switzerland) 2019) 95. The author mentions some examples, such as the IEEE initiative on Ethics of Autonomous and Intelligent Systems
and the ethical guidelines of the Japanese Society for Artificial Intelligence.
12
ibid 104.
13
Kretschmer and others (n 12) 10.
14
‘OECD AI Incidents Monitor (AIM)’ (OECD) <https://oecd.ai/en/incidents> accessed 18 June 2024.
15
HAI, ‘Artificial Intelligence Index Report 2024’ (2024) Stanford Univ Human-Centered Artif Intell 17 <https://aiindex.
stanford.edu/wp-content/uploads/2024/05/HAI_AI-Index-Report-2024.pdf>.
16
A former senior employee of OpenAI cited in Dan Milmo, ‘OpenAI Putting “Shiny Products” above Safety, Says Departing
Researcher’ The Observer (18 May 2024) <https://www.theguardian.com/technology/article/2024/may/18/openai-putting-
shiny-products-above-safety-says-departing-researcher> accessed 23 May 2024.
17
Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to
artificial intelligence (COM(2022) 496 final).
18
These are not perfectly overlapping synonyms, but they tend to have in common the reference to the private law rules that
can be invoked when a harm occurs, with the exception of contractual issues (eg liability for breach of contract). See eg, Percy H
Winfield, The Province of the Law of Tort (Cambridge University Press, Cambridge (UK); The Macmillan Company, New York
City (NY) 1931) 32.
19
Shu Li and Béatrice Schütte, ‘The Proposed EU Artificial Intelligence Liability Directive: Does/Will Its Content Reflect Its
Ambition?’ [2024] Technol Regul 143.
turn will serve the two-fold goal of better protecting people from AI and helping tort law retain
its relevance as a key regulatory tool.
Against this backdrop, this article—adopting a doctrinal method that focuses on European
private law, with insight from Italy and the UK—pursues a two-fold objective. First, it critically
assesses whether AI responsibility and safety can be achieved by the proposed AILD, which is
tasked with bridging the AI liability gap through enhanced discovery rules and a presumption of
causal link between fault and AI-generated damage20. Second, as the proposal was presented two
months before ChatGPT became commercially available,21 this paper scrutinizes whether the
20
For some recommendations on how to change the AILD to better address the liability gap and the information gap see
Marta Ziosi and others, ‘The EU AI Liability Directive (AILD): Bridging Information Gaps’ (2023) 14 Eur J Law Technol 8–9
<https://ejlt.org/index.php/ejlt/article/view/962> accessed 21 June 2024.
21
The European Commission released the draft AILD on 28th September 2022; ChatGPT was launched on 30th November 2022.
22
European Commission, ‘White Paper on Artificial Intelligence: A European Approach to Excellence and Trust’ (2020)
COM(2020) 65 final 13.
23
Alan Turing, ‘Computing Machinery and Intelligence’ (1950) 59 Mind 433.
24
The term AI winter has been coined to describe the periods of retraction in investment in, and adoption of, AI technologies
that followed periods of major interest in the field due to a given breakthrough.
25
Christopher M Bishop, Pattern Recognition and Machine Learning (Springer, New York (NY) 2006).
26
Warren S McCulloch and Walter Pitts, ‘A Logical Calculus of the Ideas Immanent in Nervous Activity’ (1943) 5(4) Bull Math Biophys 115–133.
27
Frank Rosenblatt, ‘The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain’ (1958) 65 Psychol Rev 386.
only to see the then hype frustrated as theoretical computer science demonstrated its limita-
tions.28 In the 1980s, the development of what remains the most widely employed ANN training approach led to another surge in interest,29 with many specialized ANN architectures30 that are still used today being proposed in that period or shortly after. Already in the early 2000s, ML algorithms
employed for predictive purposes achieved accurate results when the data used was tabular and
of good quality, and over the past decade this also became true in fields where data is not tabular,
eg image, text, audio, video, and/or code. Among the core ideas that enabled recent results are
(i) big data,31 the technology to store and process the vast, diverse, and fast-growing data pro-
28
Marvin Minsky and Seymour A. Papert. Perceptrons: An Introduction to Computational Geometry (MIT Press, Cambridge
(MA) 1969) 480, 479, 104.
29
David E Rumelhart, Geoffrey E Hinton and Ronald J Williams, ‘Learning Representations by Back-Propagating Errors’ (1986) 323 Nature 533–536.
30
Being networks, ANNs can vary as to their topology and the type of computation performed at nodes and/or layers. The
resulting architecture of the network is decisive for its performance and varies as a function of the task and data. For images, for
instance, convolutional architectures proposed in the 1980s gained significant popularity in the 2010s. A similar pattern is observed for recurrent networks, successfully applied to sequential data such as text and audio until recently superseded by novel architectures such as the Transformer.
31
Sanjay Ghemawat, Howard Gobioff and Shun-Tak Leung, ‘The Google File System’ in Proceedings of the Nineteenth ACM Symposium on Operating Systems Principles (2003) 29–43.
32
Ian Goodfellow, Yoshua Bengio and Aaron Courville, Deep Learning (The MIT Press, Cambridge (MA) 2016).
33
Parallel computing refers to the ability to execute different parts of an algorithm at the same time, typically through
multiple processing units. Moving from sequential to parallel execution of algorithms is non-trivial in several scenarios, and for
sequential data such as text the algorithms employed until recently were largely sequential due to the need to respect the order
of the training data.
34
This goes to the core of the AI accountability gap and is problematic because ‘[i]ndividual citizens may have a hard time
finding out who they should turn to, if data are incorrect, corrupted, or biased as a collective outcome of a series of minor con-
tributions’ (Filippo Santoni de Sio and Giulio Mecacci, ‘Four Responsibility Gaps with Artificial Intelligence: Why They Matter
and How to Address Them’ (2021) 34 Philos Technol 1057, 1066.).
35
Kate Crawford, The Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (Yale University Press, New Haven (CT) 2021).
36
Ian Goodfellow and others, ‘Generative Adversarial Nets’, in Z. Ghahramani and M. Welling and C. Cortes and N. Lawrence
and K.Q. Weinberger (eds), Advances in Neural Information Processing Systems 27 (MIT Press, Cambridge (MA) 2014) <https://
papers.nips.cc/paper_files/paper/2014/hash/5ca3e9b122f61f8f06494c97b1afccf3-Abstract.html> accessed 25 June 2024.
37
Shervin Minaee and others, ‘Large Language Models: A Survey’ (arXiv, 20 February 2024) <http://arxiv.org/
abs/2402.06196> accessed 28 June 2024.
Indeed, being trained on ever-larger training datasets, LLMs display capabilities that had not
been anticipated (a phenomenon sometimes referred to as emergence). Among the most rele-
vant are (i) zero-shot learning, where the model is able to perform a task it has not been trained
for; and (ii) few-shot learning, where the model can be taught to perform a new task through
novel examples rather than training. Importantly, the good results observed for text led devel-
opers to consider multimodal language models, ie models that can also address image, audio,
video, and/or code.
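To make the distinction concrete, the following minimal sketch (in Python, written against a purely hypothetical client.complete interface; the client object, model name, and classification task are our own illustrative assumptions rather than a reference to any specific vendor API) shows how the same pre-trained model can be prompted zero-shot, with an instruction only, or few-shot, with a handful of worked examples supplied at inference time and no update to the model's weights.

```python
# Illustrative sketch only: 'client' stands for a generic chat-completion
# interface; no specific vendor API or model name is implied.

def zero_shot_prompt(review: str) -> str:
    # Zero-shot: the model receives only an instruction, no examples,
    # and must rely on capabilities acquired during pre-training.
    return (
        "Classify the sentiment of the following product review as "
        "POSITIVE or NEGATIVE.\n"
        f"Review: {review}\n"
        "Sentiment:"
    )

def few_shot_prompt(review: str) -> str:
    # Few-shot: a handful of worked examples are placed in the prompt.
    # The model's weights are not updated; the 'teaching' happens
    # entirely at inference time (in-context learning).
    examples = (
        "Review: The battery died after two days.\nSentiment: NEGATIVE\n"
        "Review: Setup took one minute and it works flawlessly.\nSentiment: POSITIVE\n"
    )
    return (
        "Classify the sentiment of each review as POSITIVE or NEGATIVE.\n"
        f"{examples}"
        f"Review: {review}\n"
        "Sentiment:"
    )

if __name__ == "__main__":
    review = "The manual is confusing but the device itself is excellent."
    print(zero_shot_prompt(review))
    print(few_shot_prompt(review))
    # A real call would look something like:
    #   answer = client.complete(model="some-llm", prompt=few_shot_prompt(review))
    # where 'client' and 'some-llm' are placeholders for whatever LLM service is used.
```

In both cases the underlying model is unchanged; what differs is only the context it is given, which is why few-shot learning is often described as in-context learning.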
The unexpected capabilities of LLMs have stirred a novel AI rush towards autonomous
move away from grey areas in terms of copyright,45 but the data pipeline industry is still incipi-
ent and could face significant liability as it matures. Concerning deployment, models are often
updated with novel training data, which may give rise to two phenomena. First, data drift, when
the more recent data becomes significantly different from the data used in the original training,
increasing model unpredictability. Second, echoes, when content produced by GenAI models
is used to train them, creating a feedback loop that can further reinforce existing issues such as
bias or hallucinations.
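As a rough illustration of the first phenomenon, the sketch below compares a summary statistic of a numeric feature in the original training data with the same statistic computed on more recently collected data; a large standardized difference is a crude signal of data drift. The numbers, the threshold, and the single-feature setup are illustrative assumptions only; production monitoring would rely on richer statistical tests applied across many features.

```python
# Minimal illustration of data-drift monitoring; a production system would
# track many features and use proper statistical tests rather than a single mean.
from statistics import mean, stdev

def drift_score(reference: list[float], recent: list[float]) -> float:
    """Standardized difference between the reference (training-time) mean
    and the mean of recently collected data for one numeric feature."""
    ref_mean, ref_std = mean(reference), stdev(reference)
    return abs(mean(recent) - ref_mean) / ref_std if ref_std else float("inf")

# Feature values seen at training time vs. values observed after deployment.
training_values = [10.1, 9.8, 10.3, 9.9, 10.0, 10.2]
recent_values = [12.9, 13.4, 13.1, 12.7, 13.0, 13.2]

score = drift_score(training_values, recent_values)
if score > 2.0:  # threshold chosen for illustration only
    print(f"Possible data drift detected (score = {score:.1f}): review or retraining advised.")
else:
    print(f"No significant drift (score = {score:.1f}).")
```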
These features and vulnerabilities in GenAI explain why harms have already been occurring,
45
Katie Paul and Anna Tong, ‘Inside Big Tech’s Underground Race to Buy AI Training Data’ (Reuters) <https://www.reuters.com/technology/inside-big-techs-underground-race-buy-ai-training-data-2024-04-05> accessed 29 April 2024.
46
Laura Weidinger and others, ‘Taxonomy of Risks Posed by Language Models’, 2022 ACM Conference on Fairness,
Accountability, and Transparency (ACM 2022) <https://dl.acm.org/doi/10.1145/3531146.3533088> accessed 18 June 2024.
47
This is the main thesis of Anu Bradford, Digital Empires: The Global Battle to Regulate Technology (Oxford University Press,
Oxford (UK) 2023).
48
Anu Bradford, The Brussels Effect: How the European Union Rules the World (Oxford University Press, Oxford (UK) 2020).
49
For example, the Chinese approach to data protection may be attributed to the Brussels effect as there are many similar pro-
visions in China’s Personal Information Protection Law (PIPL) and the GDPR; at a closer look, the similarities may have other explanations, and the differences may be deeper than they seem, as convincingly argued by Wenlong Li and Jiahong Chen, ‘From
Brussels Effect to Gravity Assists: Understanding the Evolution of the GDPR-Inspired Personal Information Protection Law in
China’ (2024) 54 Comp Law Security Rev 105994.
50
We doubt the internet ever was a lawless space as the dominant narrative claims, but even if it were there is no doubt that
it has now become one of the most heavily regulated sectors, one characterized by complex multi-level and multi-jurisdiction
overlaps. See Chris Reed and Andrew Murray, Rethinking the Jurisprudence of Cyberspace (Edward Elgar Publishing, Cheltenham
(UK) 2018).
51
For example, the AI Act will apply to AI systems marketed in the EU regardless of where the provider is established, and to
the deployers as long as the output produced by the AI system is used in the EU (art 2(1)).
52
AI Act, art 3(2).
case in a pre-AI world. AI only exacerbates the need to shift the focus from ex-ante actions to
ex-post reactions, as it makes it more difficult to predict the occurrence of harms and to fully
appreciate their wider repercussions.53 Shifting the focus also means that liability rules need to
be carefully calibrated and harmonized to tackle the issues in AI as presented in the previous
section.
Efforts to harmonize European private law have traditionally focussed on contract law,
as epitomized by the Principles of European Contract Law,54 the UNIDROIT Principles of
International Commercial Contracts,55 and, to a large extent, the Draft Common Frame
the consequence of a defect in a product. Conversely, the proposed AILD constitutes the first
attempt by the EU lawmaker to make tangible progress towards the horizontal harmonization
of general tort law. As such, it deserves closer inspection.
After the launch of the European strategy for AI in 2018,65 the Expert Group on Liability and
New Technologies published a report on liability for AI and other emerging digital technol-
ogies in 2019.66 There, it observed that, while domestic liability regimes ensure basic protec-
tion of victims of AI-caused damage, the characteristics of these technologies (e.g. modification
through self-learning during operation, limited predictability, etc.) and their applications may
(i) The development of a horizontal regulatory framework for AI, focussing on issues of
safety and fundamental rights—the AI Act would become its centrepiece;
(ii) The revision of existing sectoral safety legislation, with the new General Product Safety
Regulation69 and the new Machinery Regulation70 as its cornerstones; and
(iii) Updated rules on AI liability, which would ultimately lead to the proposed Second
Product Liability Directive and the draft AILD.
To justify the need for a reform of liability rules, the Commission underlined the heightened likelihood of harm and the difficulty of apportioning liability due to the integration of AI
into products, design flaws, poor quality or availability of data, and limited access to evidence.
Consequently, despite the presence of ex-ante product safety laws, ‘if the safety risks materialize,
the lack of clear requirements and the characteristics of AI technologies […] make it difficult to
trace back potentially problematic decisions made with the involvement of AI systems [as well
as making it difficult] for persons having suffered harm to obtain compensation under the cur-
rent EU and national liability legislation.’71 Adding to the momentum, the European Parliament
called on the Commission to propose legislation on civil liability for AI,72 resulting in the 2021
Coordinated Plan for AI that articulated the objective to introduce EU measures adapting the lia-
bility framework to the challenges of new technologies, including AI, and expressly stating that
the new framework would include ‘a revision of the Product Liability Directive, and a legislative
proposal with regard to the liability for certain AI systems.’73
65
European Commission, Communication ‘Artificial Intelligence for Europe’ COM(2018) 237 final.
66
Expert Group on Liability and New Technologies—New Technologies Formation, Liability for Artificial Intelligence and
Other Emerging Digital Technologies (EU 2019) 3.
67
European Commission, ‘White Paper on Artificial Intelligence: A European Approach to Excellence and Trust’ (2020) COM(2020) 65 final.
68
European Commission, ‘Report on the Safety and Liability Implications of Artificial Intelligence, the Internet of Things and Robotics’ COM(2020) 64 final.
69
Regulation (EU) 2023/988 of the European Parliament and of the Council of 10 May 2023 on general product safety,
amending Regulation (EU) No 1025/2012 of the European Parliament and of the Council and Directive (EU) 2020/1828 of
the European Parliament and the Council, and repealing Directive 2001/95/EC of the European Parliament and of the Council
and Council Directive 87/357/EEC [2023] OJ L 135/1.
70
Regulation (EU) 2023/1230 of the European Parliament and of the Council of 14 June 2023 on machinery and repealing
Directive 2006/42/EC of the European Parliament and of the Council and Council Directive 73/361/EEC [2023] OJ L 165/1.
71
European Commission, White Paper on Artificial Intelligence (n 55) 12.
72
European Parliament resolution of 20 October 2020 with recommendations to the Commission on a civil liability regime
for artificial intelligence (2020/2014(INL)) [2021] OJ C 404/107.
73
European Commission, Coordinated plan on artificial intelligence 2021 review, Annex to the Communication ‘Fostering a
European approach to Artificial Intelligence’ COM(2021) 205 final, 33.
In September 2022, the Commission followed up with the proposed AILD, which as we write
is going through its first reading.74 As the EU lacks general competence to fully harmonize tort
law,75 Article 114 TFEU provides the legal basis for the proposal; indeed, this measure is seen
as pivotal to ensuring the good functioning of the internal market by removing the legal uncer-
tainty and fragmentation that hinders cross-border trade in AI-powered goods and services.76
While harmonization has its costs and therefore its need must be justified77, an economic study
has shown that, when it comes to AI liability, uniform liability rules are set to have a positive
impact of 5–7% on the production value of relevant cross-border trade.78 The Commission has
could end up being beneficial for some claimants. For example, Member States could maintain
national strict liability regimes;88 eg Italy could keep its liability regime for dangerous activities
under Article 2050 of the Codice civile. This provision arguably applies to AI damages, especially
now that the AI Act provides guidance as to what systems are high-risk89. Under Article 2050
codice civile, ‘[c]ompensation must be paid by whomever damages others while exercising an
activity that is either dangerous by its very nature or due to the means used, unless the defend-
ant can prove to have put in place suitable measures to prevent the damage’90. This regime is
particularly useful in the context of damage caused by ‘technological unknown’, ie damages that
88
Draft AILD, recital 14.
89
On the applicability of this regime to AI see eg Antonino Procida Mirabelli di Lauro, Intelligenze artificiali e responsabilità
civile, in Antonino Procida Mirabelli di Lauro and Maria Feola, Diritto delle obbligazioni (ESI 2020) 507, esp. 534 ff. The main
limitation is that this regime is alternative to the product liability one. There is also the issue that, despite the AI Act, there remains
uncertainty as to what constitutes a high-risk system, as we will note in the following section.
90
Codice Civile, art 2050.
91
Lalage Mormile, ‘Il principio di precauzione fra gestione del rischio e tutela degli interessi privati’ (2012) 10 Riv dir ec trasp amb 247, esp 271. Some of the issues related to the development risk defence (also known as the state-of-the-art defence) are
being addressed in the revision of the Product Liability Directive, which may to some extent reduce the usefulness of falling back
on the liability for dangerous activities.
92
Draft AILD, art 1(4).
93
Jan De Bruyne, Orian Dheu and Charlotte Ducuing, ‘The European Commission’s Approach to Extra-Contractual Liability
and AI—An Evaluation of the AI Liability Directive and the Revised Product Liability Directive’ (2023) 51 Comp Law Security
Rev 105894, 3.
94
See for example the UK National Cyber Strategy 2022, which—rather than focusing on cybersecurity per se as the previous
strategy did—centres on the idea of cyber power as ‘the ability to protect and promote national interests in and through cyber-
space’ (ibid 11), against competitors such as China and Russia.
95
Draft AILD, art 1(2).
96
Draft AILD, art 1(3)(a).
97
Draft AILD, art 1(3)(b).
98
Draft AILD, art 1(3)(c); Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022
on a Single Market for Digital Services and amending Directive 2000/31/EC (DSA) [2022] OJ L 277/1, arts 4-10.
99
Draft AILD, art 1(3)(d).
100
Draft AILD Explanatory Memorandum, 11, emphasis added.
do not wish to downplay the resistance that a more holistic intervention may cause, harmoni-
zation intervenes precisely to tackle the variance in concepts that is at odds with the needs of
the internal market. This variance calls for harmonization measures, rather than justifying their
lack. Additionally, one could question whether the AILD truly avoids, or could even possibly avoid, legislating on key questions of fault and damage. Indeed, the draft AILD defines
a ‘claim for damages’ as one that compensates ‘damage caused by an output of an AI system or
the failure of such a system to produce an output where such an output should have been pro-
duced.’101 Leaving aside the lack of clarity as to when it can be said that an output should have
101
Draft AILD, art 2(5).
102
Ibid, emphasis added.
103
Along similar lines, Kayleen Manwaring, ‘Will Emerging Technologies Outpace Consumer Protection Law? The Case of
Digital Consumer Manipulation’ [2018] Competition Consumer Law J 141.
104
Li and Schütte (n 22) 151.
105
In this sense, though in the context of educational AI, see European Commission Expert Group on AI in Education, Final
report of the Commission expert group on artificial intelligence and data in education and training (European Union 2022) <https://
op.europa.eu/en/publication-detail/-/publication/7f64223f-540d-11ed-92ed-01aa75ed71a1/language-en> accessed 21 June
2024.
106
See Vagelis Papakonstantinou and Paul De Hert, The Regulation of Digital Technologies in the EU: Act-Ification, GDPR
Mimesis and EU Law Brutality at Play (Routledge 2024).
indicator that the EU is far from taking the task of harmonizing torts seriously. Let us not allow
such paucity to prevent us from analysing both in turn.
Article 3 focuses on the key issue of discovery: it empowers national courts to order the disclosure of evidence, and in some instances its preservation. Claimants can be granted disclosure orders provided that they (i) address one of the persons expressly listed in the provision (the provider of a high-risk AI system, the product manufacturer, the user, etc.)107; (ii) identify a specific high-risk AI system that is suspected of having caused damage; (iii) have made all proportionate attempts at gathering evidence from the defendant; and (iv) present elements to corroborate
107
The reference is to the providers of high-risk systems, the user, and ‘a person subject to the obligations of a provider pursuant to [Article 24 or Article 28(1)]’ of the AI Act in the originally proposed version, i.e. certain product manufacturers (e.g. toys,
lifts, etc.) and, under certain circumstances, distributors, importers and third parties (e.g. if they make a substantial modification
to the high-risk system). See arts 23-26 of the final version of the AI Act; as the adopted version differs significantly from the
proposal that the draft AILD referred to, it is expected that significant work will be needed to avoid misalignments.
108
Draft AILD, art 3(1) and (2).
109
Draft AILD, art 3(1).
110
Draft AILD, art 3(3).
111
Draft AILD, recital 6.
112
See eg Donal Khosrowi, Finola Finn and Elinor Clark, ‘Engaging the Many-Hands Problem of Generative-AI Outputs:
A Framework for Attributing Credit’ [2024] AI and Ethics <https://doi.org/10.1007/s43681-024-00440-7> accessed 16 May
2024.
113
More on this already in Guido Noto La Diega, ‘Against the Dehumanisation of Decision-Making—Algorithmic Decisions
at the Crossroads of Intellectual Property, Data Protection, and Freedom of Information’ (2018) 9 JIPITEC 3.
114
Jorge Luis Morton Gutiérrez, ‘On Actor-Network Theory and Algorithms: ChatGPT and the New Power Relationships in
the Age of AI’ [2023] AI and Ethics <https://doi.org/10.1007/s43681-023-00314-4> accessed 16 May 2024.
115
The draft AILD itself recognizes that ‘[t]he large number of people usually involved in the design, development, deploy-
ment and operation of high-risk AI systems, makes it difficult for injured persons to identify the person potentially liable for
damage’ (recital 17). The same applies to the identification of the discovery order recipients.
116
Delaram Golpayegani, Harshvardhan J Pandit and Dave Lewis, ‘To Be High-Risk, or Not To Be—Semantic Specifications
and Implications of the AI Act’s High-Risk AI Applications and Harmonised Standards’, Proceedings of the 2023 ACM
Conference on Fairness, Accountability, and Transparency (Association for Computing Machinery 2023) <https://dl.acm.org/
doi/10.1145/3593013.3594050> accessed 13 May 2024.
AI system’. Rather, it provides a list of systems that are regarded as high-risk (eg e-proctoring soft-
ware)117 and introduces a set of exemptions (eg AI used to detect patterns in decision-making is not high risk) and then an exception to the exemptions, ie the exempt systems will be regarded as
high risk if used for profiling purposes.118 To add to the complexity, AI providers can disagree with
the classification if they believe that the system is in fact not high risk. Positively, predictability may
be increased by the guidelines that the Commission is set to adopt; they should provide examples
of high-risk and non-high-risk use cases.119 Nonetheless, legal certainty is under threat as the crite-
ria for classifying systems will change over time under yet-to-be-adopted delegated acts.120
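The layered logic just described can be sketched in a few lines of code. What follows is a deliberately simplified model of the sequence recounted in the text (a list of presumptively high-risk uses, exemptions such as the detection of decision-making patterns, and the profiling exception), not a faithful encoding of Article 6 of the AI Act; the category names and use cases are illustrative assumptions.

```python
# Simplified, illustrative model of the layered high-risk classification logic
# described in the text; it does not reproduce the actual wording of the AI Act.

ANNEX_III_LIKE_USES = {"e-proctoring", "credit-scoring", "recruitment-screening"}
EXEMPT_PURPOSES = {"detect-decision-patterns", "narrow-procedural-task"}

def is_high_risk(use_case: str, purpose: str, involves_profiling: bool) -> bool:
    # Step 1: is the system on the list of presumptively high-risk uses?
    if use_case not in ANNEX_III_LIKE_USES:
        return False
    # Step 2: exemption, eg systems merely detecting patterns in decision-making.
    if purpose in EXEMPT_PURPOSES:
        # Step 3: exception to the exemption, profiling brings the system back in.
        return involves_profiling
    return True

# An e-proctoring tool used only to detect decision patterns, without profiling,
# would fall outside the high-risk category under this simplified logic...
print(is_high_risk("e-proctoring", "detect-decision-patterns", involves_profiling=False))  # False
# ...but the same tool used for profiling would be classified as high-risk.
print(is_high_risk("e-proctoring", "detect-decision-patterns", involves_profiling=True))   # True
```

Even in this stripped-down form, the classification of a single system turns on three nested determinations, which illustrates why providers, claimants, and courts may well reach different conclusions about whether a given system is high risk.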
117
Draft AI Act, art 6(1)-(2) and Annex III [3](d).
118
Draft AI Act, art 6(3).
119
Draft AI Act, art 6(5).
120
Draft AI Act, art 6(7).
121
Geoffrey C Hazard Jr, ‘Discovery and the Role of the Judge in Civil Law Jurisdictions’ (1997) 73 Notre Dame Law Rev
1017.
122
Carla L Reyes, ‘The U.S. Discovery-EU Privacy Directive Conflict: Constructing a Three-Tiered Compliance Strategy
Note’ (2008) 19 Duke J Comp Int Law 357; Eckard von Bodenhausen, ‘U.S. Discovery and Data Protection Laws in Europe’
(2012) 37 DAJV Newsletter 14.
123
29 CFR § 18.51(e). With regards to the assistance to foreign and international tribunals, see 28 USC §1782.
124
Draft AILD, art 3(4).
125
Draft AILD, art 3(4).
126
Privilege refers mostly to client-attorney privilege (Upjohn Co. v. United States 449 U.S. 383 (1981)), attorney work-
product (United States v. Nobles 422 U.S. 225, 238–39 (1975)), and joint defence privilege (United States v. Henke 222 F.3d 633
(9th Cir. 2000)), with some US states recognizing other privileges, eg between ministers and their confessors (eg Cal. Evid.
Code § 912).
127
Draft AILD, art 3(4).
128
29 CFR § 18.51(e).
striking a balance between data access and IP can be easily exploited and lead to overprotection
of IP and ultimately closed, inscrutable AI129.
On a more positive note, the harmonized discovery provision is equipped with an incentive
for the defendant to comply with the discovery or preservation order. Indeed, if they fail to do
so, national courts shall presume the non-compliance with the duty of care that the withheld
evidence was intended to prove. This presumption is limited to the defendant not complying
with the disclosure/preservation order; it will be of no help if the evidence is held by other AI play-
ers, which may limit its usefulness. An amendment to the proposed AILD to ensure its extra-
Presumption of causal link between the fault and the damage caused by the AI
The most important innovation on the presumptions front is introduced by Article 4, which accounts for the difficulties of proving the causal link between the damaging AI output (or lack
thereof) and the defendant’s fault. These difficulties stem from the intrinsic features of most AI
systems, notably autonomy and opacity130, as seen in a previous section. To account for them,
the provision empowers national courts to presume the existence of the causal link, while leav-
ing it to the defendant to rebut the presumption. For the presumption to operate, three cumu-
lative conditions must be met:
(i) The claimant has demonstrated—or the judge has presumed under Article 3—the
defendant’s fault, ie the non-compliance with a duty of care ‘directly intended to pro-
tect against the damage’131;
(ii) The circumstances make it ‘reasonably likely’132 that the fault did influence the output
or its lack;
(iii) The claimant has proved that the damage derived from the output or lack thereof.
The proposed AILD goes to great lengths to make it clear that it does not intend to harmonize
the concept of fault or the conditions under which domestic courts establish fault.133 It must
be questioned whether it is at all possible to harmonize the rules about the causal link between
fault and output without intervening on the concept of fault. In fact, we would argue that the
draft AILD can be interpreted as a backdoor reform of civil fault, which is set to acquire an
autonomous meaning in EU law, ie ‘a human act or omission which does not meet a duty of
care under Union law or national law that is directly intended to protect against the damage that
occurred’134. This is much narrower than most national conceptions of fault. Taking the
Italian legal system as an example, the colpa is seen as the expression of the general alterum non
laedere principle (do not harm), a standard135 that arises in the event of negligence, recklessness,
129
Guido Noto La Diega, ‘Ending Smart Data Enclosures: The European Approach to the Regulation of the Internet of Things
between Access and Intellectual Property’ in Stacy-Ann Elvy and Nancy Kim (eds), The Cambridge Handbook on Emerging Issues
at the Intersection of Commercial Law and Technology (Cambridge University Press 2024) 258.
130
Draft AILD, recital 27.
131
Draft AILD, art 4(1)(a).
132
Draft AILD, art 4(1)(b).
133
For example draft AILD, recitals 22–23; Explanatory Memorandum, 11.
134
Draft AILD, recital 22; and nearly verbatim art 4(1)(a).
135
Mario Barcellona, La responsabilità civile, in Salvatore Mazzamuto (dir), Trattato del diritto privato, vol 6 (Giappichelli
2021) 165.
incompetence, or illegality136. The European concept would be limited to the fourth type of
colpa; fault by ‘illegality’ stems from the non-compliance with those legal provisions setting
forth measures aimed at avoiding or minimizing the risk of harm137. This notion of fault is addi-
tionally curtailed; indeed, not all safety rules would be relevant, but only the duties of care
that were ‘directly intended to protect against the damage that occurred’.138 For example, the
non-compliance with the AI Act’s documentation requirements—requirements whose compli-
ance the draft AILD is designed to incentivize—would not lead to the application of the causal
link presumption.139 Conversely, there would be fault in the event of physical injury that is the
136
Italy’s Civil Code, arts 2043 and 1176(1), and Criminal Code, art 43(1); see the monographic analysis conducted by
Laura Mancini, La colpa nella responsabilità civile (Giuffrè 2015).
137
C Massimo Bianca, Diritto civile. La responsabilità, vol 5 (2nd edn, Giuffrè 2019) 579.
138
Draft AILD, recital 22.
139
Draft AILD, recital 22.
140
With regard to claims for damages against a provider and those who are subject to the same obligations under certain cir-
cumstances (eg, manufacturers), the draft AILD, art 4(2) refers also to certain transparency, human oversight, accuracy require-
ments and the provisions on corrective actions to bring the system in line with certain obligations under the AI Act. When it
comes to claims against the user, the claimant needs to prove the non-compliance with the AI Act’s obligations to use or monitor
the system in line with the instructions, or not to expose the system to input data that is not relevant in view of the system’s
intended purpose (art 4(3)).
141
The reference is to Asimov’s three laws of robotics, see Ugo Pagallo, The Laws of Robots: Crimes, Contracts, and Torts
(Springer, Dordrecht (NL) 2013).
142
Draft AILD, art 4(4).
143
Draft AILD Explanatory Memorandum [4].
144
Draft AILD, art 4(5).
145
Draft AILD, art 4(6).
these drawbacks, one cannot but wonder why the legislator—rather than introducing a total
of six requirements for the presumption to apply—did not simply provide that the defendant
retains a right to rebut the causal link presumption. This would arguably be a much neater solution, one that would be more in line with the policy goals of this instrument.
Finally, the uncertainties surrounding the concept of fault, together with a burden of proof that remains heavy despite the discovery rules and the presumption of causal link, create a mismatch between the complexity of the law and the simplicity required by AI as an automation engine. In the next
section, while evaluating the AILD’s fitness for GenAI, we will argue that this is an argument in
146
General purpose AI models have systemic risk when, due to their high-impact capabilities, they are or will foreseeably be det-
rimental to ‘public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale
across the value chain’ (AI Act, art 3(65)).
147
‘Artificial Intelligence (AI) Act: Council Gives Final Green Light to the First Worldwide Rules on AI’ (EU Council, 21 May
2024) <https://www.consilium.europa.eu/en/press/press-releases/2024/05/21/artificial-intelligence-ai-act-council-gives-fi-
nal-green-light-to-the-first-worldwide-rules-on-ai/> accessed 17 June 2024.
148
Indeed, even though there are some additional requirements for general purpose AI with systemic risk, they have been
significantly watered down and now revolve around the idea of compliance through codes of practice (arts 55(2) and 56).
concept of risk has a different relevance when it comes to apportioning liability for harms that
have already occurred, which is what the AILD would help to do. This is not to say that risk per-
forms no function in the context of torts. At the EU level, it justifies strict liability systems such
as liability for defective products149. As the Product Liability Directive states, ‘liability without
fault on the part of the producer is the sole means of adequately solving the problem, peculiar to our
age of increasing technicality, of a fair apportionment of the risks inherent in modern technological
production.’150 At the domestic level, going back to the Italian extra-contractual liability regime
for dangerous activities under Article 2050 of the Codice civile, risk justifies once again a form
One could put forward the objection that strict liability does not work for GenAI, namely for
foundation models, because ‘[o]ne Foundation Model, however, might be used in 1000 AI
applications, only one of them being a high-risk application.’151 This point goes hand in hand
with the suggestion that, rather than horizontally regulating liability in torts for all AI systems,
one should prefer a technology-specific approach ‘identifying single classes of applications that
need to be separately regulated with independent normative acts’152. The objection is not with-
out merit, but for the reasons above we disagree with the idea of a gradation of ex-post liability
based on risk. The limited control that the providers of upstream models have on downstream
applications can be accounted for in different ways, eg by leaving it to the defendant to prove
force majeure or fortuitous event153. It is true that imposing strict liability on the providers of
foundation models can be perceived as a drastic policy option, but we will soon be accepting
the notion that the providers of foundation models are providing an essential service, even an
essential facility, and that with great power comes greater responsibility—and in some instances
also greater liability. Another objection to a strict liability framework for AI is that it would con-
stitute an excessive burden on business and stifle innovation; indeed, the reasoning goes,
strict liability would lead to a dramatic increase in litigation, and this litigation would always
be decided in favour of the claimant. Both worries are largely unwarranted. First, EU strict lia-
bility rules have been around for a long time, and they have produced a limited number of dis-
putes154. Second, strict liability does not mean that there would be no defences available to AI
149
More on the concept of risk in the Product Liability Directive in Daily Wuyts, ‘The Product Liability Directive—More than
Two Decades of Defective Products in Europe’ (2014) 5 J European Tort Law 1.
150
Product Liability Directive, recital 2. This is one of the few parts of the directive that remains mostly unchanged in the
Second Product Liability Directive (recital 2).
151
Philipp Hacker, ‘The European AI Liability Directives—Critique of a Half-Hearted Approach and Lessons for the Future’
(2023) 51 Comp Law Security Rev 105871, 32.
152
Andrea Bertolini, ‘Artificial Intelligence and Civil Liability’ ( JURI Committee 2020) 87.
153
The Italian regime of strict liability for dangerous activities leaves it to the defendant to prove that they adopted all safe-
guards suitable to avoid the harm (Codice Civile, art 2050). A similar regime is provided under Article 2051 of the Civil Code, which is also likely to be relevant in the context of AI harms, as observed by Barcellona (n 137) 266. Article 2051 applies to the harm caused by the things that were within the defendant’s control, e.g. the company responsible for managing a dam can be
held liable in the event of flooding (Corte di Cassazione, Sezioni Unite, ordinanza No 20943 of 30 June 2022, in CED Cassazione
[2022]). The defendant can escape liability by proving that the harm was caused by a fortuitous event.
154
This may change with the proposed Second Product Liability Directive, which has been rewritten to account for the
rise of AI and IoT technologies (recitals 3, 17, 18, 32, 50). However, the instrument contains a number of provisions aimed at
‘address[ing] a potential risk of litigation in an excessive number of cases’ (recital 22). The proposed directive is currently await-
ing the EU Council’s first reading position.
companies. Risk, once again, may come in handy as there is no liability for defective products
in the event of scientifically unknowable risk.155 Something along the lines of this defence could
be envisaged for AI torts. If one considers the travaux préparatoires of the proposed AILD, it
becomes immediately apparent why strict liability was disregarded despite having overwhelm-
ing support from citizens, consumer groups, and academics156. Indeed, the ‘majority of business
respondents’157 considered the no-fault policy option to be disproportionate. As this instrument
is meant to contribute to a single market for AI in Europe, it is no surprise that the EU lawmaker
would give weight to the views of private business. However, these views should not be the
155
Product Liability Directive, art 7(f); and in the US context, Richard E Byrne, ‘Strict Liability and the Scientifically
Unknowable Risk’ (1973) 57 Marquette Law Rev 660.
156
For many years now, scholars have argued that strict liability would be the best response to AI harms. See e.g. Wendehorst
(n 64); Herbert Zech, ‘Liability for AI: Public Policy Considerations’ (2021) 22 ERA Forum 147.
157
AILD Explanatory Memorandum [1].
158
Commission Staff Working Document Impact Assessment Report Accompanying the document Proposal for a Directive
of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence
(SWD/2022/319 final) [2.2].
159
Hacker (n 152) 30. The author also argues for the limitation of strict liability to economic operators and professional users
of high-risk AI systems. We are not convinced that linking the framework to the concept of high-risk systems is the best solution.
As we have argued in this paper, risk is a useful tool to impose ex-ante duties, less so when it comes to harms that have already
occurred.
160
It has been argued that, while some advancement can be appreciated on the fragmentation front, the proposed AILD does
not bring about any meaningful improvement in terms of legal certainty (Marta Ziosi and others, ‘The EU AI Liability Directive
(AILD): Bridging Information Gaps’ (2023) 14 Eur J Law Technol 1 <https://ejlt.org/index.php/ejlt/article/view/962>
accessed 21 June 2024.).
161
Hacker (n 152) 7. To support SMEs, the author suggests an exception to the proposed strict liability framework; namely,
SMEs as well as operators and users of non-high-risk AI applications should only be covered by ‘a presumption of defectiveness, breach of duty and causality’ (ibid 33). Other exceptions to the strict liability rule would apply to actions against consumers using
AI (for which fault-based liability is put forward) and foundation models, as we will see.
162
Draft AILD, art 5(2).
163
See e.g. Mayuri Mehta, Vasile Palade and Indranath Chatterjee (eds), Explainable AI: Foundations, Methodologies and
Applications (Springer, Cham (CH) 2023).
in ways that often cannot be predicted by its own creators, as the aforementioned phenomena of
emergence and zero-shot learning show. Predictability plays a key role when it comes to liability
in tort. For example, the test of remoteness in English tort law has the function of identifying
which consequences of the defendant’s conduct the latter should shoulder, and unpredictable
harms would be typically regarded as too remote to be eligible for compensation164. The rise of
agentic AI—agents that thanks to LLMs are becoming increasingly akin to Artificial General
Intelligence—is exacerbating the problem. In a world where artificial agents perform increas-
ingly complex tasks on behalf of their users, and do so in a way that is close to fully autonomous,
CONCLUSION
The lack of a general express competence to harmonize tort law166 has not prevented the gradual
emergence of EU tort law,167 which is hardly surprising as similar phenomena have been observed
in other areas, most notably Intellectual Property.168 One of the reasons why the GDPR became
the epitome of the Brussels effect was that it partly harmonized tort law in data-related scenar-
ios.169 Unless the EU corrects its course, it is unlikely that a Brussels effect will manifest also in
the AI space.170 If the AILD has the ambitious goal of ‘adapt[ing] private law to the needs of the
164
Andrew Tettenborn (ed), Clerk & Lindsell on Torts (24th edn, Sweet & Maxwell, London (UK) 2023) 2-140.
165
Hacker (n 152).
166
Bussani and Infantino (n 62) 5.
167
Paula Giliker (ed), Research handbook on EU tort law (Edward Elgar, Cheltenham (UK) 2017); Gert Brüggemeier, Tort law
in the European Union (2nd edn, Wolters Kluwer, Alphen aan den Rijn (NL) 2018).
168
Ana Ramalho, ‘Conceptualising the European Union’s Competence in Copyright—What Can the EU Do?’ (2014) 45 IIC
178.
169
See Claudio Scognamiglio, ‘Danno e Risarcimento Nel Sistema Del Rgpd: Un Primo Nucleo Di Disciplina Eurounitaria
Della Responsabilita` Civile?’ (2023) 5 NGCC 1150, describing the current state of things as a building site that will require
additional interventions by the Court of Justice, national courts, and academics (ibid 1159). His work focussed on case C‑300/21
UI v Österreichische Post AG EU:C:2023:370, and the issue of compensability of non-pecuniary damage caused by the fear of
potential misuse of personal data, but those observations have wider applicability.
170
We would agree with the words of caution of Massimilano Granieri, ‘Una Sinopsi Comparativa e Una Prospettiva Critica
Sui Tentativi Di Regolazione Dell’intelligenza Artificiale’ (2023) 2 Comp dir civ 703.
transition to the digital economy’, it cannot succeed in its current form. We would agree with
those commentators who have put forward that the EU approach to AI liability is half-hearted171
and cumbersome,172 and that the proposed AILD constitutes ‘a very small step forward […]
a liability framework in the name only’173. However, the legislative process is ongoing and there
still is the opportunity to rise to the challenge and proceed with a more systematic harmoni-
zation of tort law. Given its very nature as an AI-specific instrument, one may say that even a significant rewriting of the AILD to account for all the potential issues raised by AI-generated damages would not suffice, as the law would remain fragmented every time an AI is not involved
171
Hacker (n 152).
172
Kretschmer and others (n 12) 12.
173
Li and Schütte (n 22) 151.
174
See Guido Noto La Diega, ‘IoT and AI in Privacy Law’ in Ryan Abbott and Elizabeth Rothman (eds), Elgar Concise
Encyclopedia of Artificial Intelligence and the Law (Edward Elgar 2025).
175
One need only think of how ubiquitous (both in private and public ecosystems) Amazon’s AWS has become, or that we
now stream not only music and films, but also software itself (Software-as-a-Service or SaaS).
176
See also Ngwako Ralepelle, ‘Why Slow Thinking Matters for AI’ (Medium, 2 February 2024) <https://medium.com/@
nralepelle/why-slow-thinking-matters-for-ai-ba015c3bb84a> accessed 10 April 2024.