CYBERSECURITY OF AI AND STANDARDISATION
MARCH 2023
ABBREVIATIONS
Abbreviation | Definition
AI | Artificial Intelligence
CEN-CENELEC | European Committee for Standardisation – European Committee for Electrotechnical Standardisation
EN | European Standard
GR | Group Report
IT | Information Technology
ML | Machine Learning
SC | Subcommittee
TR | Technical Report
TS | Technical Specification
WI | Work Item
ABOUT ENISA
The European Union Agency for Cybersecurity, ENISA, is the Union’s agency dedicated to
achieving a high common level of cybersecurity across Europe. Established in 2004 and
strengthened by the EU Cybersecurity Act, the European Union Agency for Cybersecurity
contributes to EU cyber policy, enhances the trustworthiness of ICT products, services and
processes with cybersecurity certification schemes, cooperates with Member States and EU
bodies, and helps Europe prepare for the cyber challenges of tomorrow. Through knowledge
sharing, capacity building and awareness raising, the Agency works together with its key
stakeholders to strengthen trust in the connected economy, to boost resilience of the Union’s
infrastructure, and, ultimately, to keep Europe’s society and citizens digitally secure. More
information about ENISA and its work can be found here: www.enisa.europa.eu.
CONTACT
For contacting the authors, please use team@enisa.europa.eu.
For media enquiries about this paper, please use press@enisa.europa.eu.
AUTHORS
P. Bezombes, S. Brunessaux, S. Cadzow
EDITOR(S)
ENISA:
E. Magonara
S. Gorniak
P. Magnabosco
E. Tsekmezoglou
ACKNOWLEDGEMENTS
We would like to thank the Joint Research Centre and the European Commission for their active
contribution and comments during the drafting stage. We would also like to thank the ENISA Ad
Hoc Expert Group on Artificial Intelligence (AI) cybersecurity for the valuable feedback and
comments in validating this report.
LEGAL NOTICE
This publication represents the views and interpretations of ENISA, unless stated otherwise. It
does not endorse a regulatory obligation of ENISA or of ENISA bodies pursuant to the
Regulation (EU) No 2019/881.
ENISA has the right to alter, update or remove the publication or any of its contents. It is
intended for information purposes only and it must be accessible free of charge. All references
to it or its use as a whole or partially must contain ENISA as its source.
Third-party sources are quoted as appropriate. ENISA is not responsible or liable for the content
of the external sources including external websites referenced in this publication.
Neither ENISA nor any person acting on its behalf is responsible for the use that might be made
of the information contained in this publication.
COPYRIGHT NOTICE
© European Union Agency for Cybersecurity (ENISA), 2023
This publication is licensed under CC BY 4.0: ‘Unless otherwise noted, the reuse of this
document is authorised under the Creative Commons Attribution 4.0 International (CC BY 4.0)
licence (https://creativecommons.org/licenses/by/4.0/). This means that reuse is allowed,
provided that appropriate credit is given and any changes are indicated.’
For any use or reproduction of photos or other material that is not under the ENISA copyright,
permission must be sought directly from the copyright holders.
TABLE OF CONTENTS
1. INTRODUCTION
2.2 CYBERSECURITY OF AI
3.1.1 CEN-CENELEC
3.1.2 ETSI
3.1.3 ISO-IEC
3.1.4 Others
4. ANALYSIS OF COVERAGE
5. CONCLUSIONS
5.1 WRAP-UP
5.2 RECOMMENDATIONS
A ANNEX
EXECUTIVE SUMMARY
The overall objective of the present document is to provide an overview of standards (existing,
being drafted, under consideration and planned) related to the cybersecurity of artificial
intelligence (AI), assess their coverage and identify gaps in standardisation. It does so by
considering the specificities of AI, and in particular machine learning, and by adopting a broad
view of cybersecurity, encompassing both the ‘traditional’ confidentiality–integrity–availability
paradigm and the broader concept of AI trustworthiness. Finally, the report examines how
standardisation can support the implementation of the cybersecurity aspects embedded in the
proposed EU regulation laying down harmonised rules on artificial intelligence (COM(2021) 206
final) (draft AI Act).
The report describes the standardisation landscape covering AI, by depicting the activities of the
main Standards-Developing Organisations (SDOs) that seem to be guided by concern about
insufficient knowledge of the application of existing techniques to counter threats and
vulnerabilities arising from AI. This results in the ongoing development of ad hoc reports and
guidance, and of ad hoc standards.
The report argues that existing general purpose technical and organisational standards (such as
ISO-IEC 27001 and ISO-IEC 9001) can contribute to mitigating some of the risks faced by AI
with the help of specific guidance on how they can be applied in an AI context. This
consideration stems from the fact that, in essence, AI is software and therefore software
security measures can be transposed to the AI domain.
The report also specifies that this approach is not exhaustive and that it has some limitations.
For example, while the report focuses on software aspects, the notion of AI can include both
technical and organisational elements beyond software, such as hardware or infrastructure.
Other examples include the fact that determining appropriate security measures relies on a
system-specific analysis, and the fact that some aspects of cybersecurity are still the subject of
research and development, and therefore might be not mature enough to be exhaustively
standardised. In addition, existing standards seem not to address specific aspects such as the
traceability and lineage of both data and AI components, or metrics on, for example,
robustness.
The report also looks beyond the mere protection of assets, as cybersecurity can be considered
instrumental to the correct implementation of the trustworthiness features of AI and, conversely,
the correct implementation of trustworthiness features is key to ensuring cybersecurity. In this
context, it is noted that there is a risk that trustworthiness is handled separately within AI-
specific and cybersecurity-specific standardisation initiatives. One example of an area where
this might happen is conformity assessment.
Last but not least, the report complements the observations above by extending the analysis to
the draft AI Act. Firstly, the report stresses the importance of the inclusion of cybersecurity
aspects in the risk assessment of high-risk systems in order to determine the cybersecurity risks
that are specific to the intended use of each system. Secondly, the report highlights the lack of
standards covering the competences and tools of the actors performing conformity
assessments. Thirdly, it notes that the governance systems drawn up by the draft AI Act and the
Cybersecurity Act (CSA)1 should work in harmony to avoid duplication of efforts at national
level.
Finally, the report concludes that some standardisation gaps might become apparent only as
the AI technologies advance and with further study of how standardisation can support
cybersecurity.
1
Regulation (EU) 2019/881 of the European Parliament and of the Council of 17 April 2019 on ENISA (the European Union
Agency for Cybersecurity) and on information and communications technology cybersecurity certification and repealing
Regulation (EU) No 526/2013 (Cybersecurity Act) (https://eur-lex.europa.eu/eli/reg/2019/881/oj).
1. INTRODUCTION
The primary addressees of this report are standards-developing organisations (SDOs) and
public sector / government bodies dealing with the regulation of AI technologies.
The ambition of the report is to be a useful tool that can inform a broader set of stakeholders of
the role of standards in helping to address cybersecurity issues.
The reader is expected to have a degree of familiarity with software development, with the
confidentiality, integrity and availability (CIA) security model, and with the techniques of both
vulnerability analysis and risk analysis.
The report is structured as follows:
• definition of the perimeter of the analysis (Chapter 2): introduction to the concepts of AI
and cybersecurity of AI;
• inventory of standardisation activities relevant to the cybersecurity of AI (Chapter 3):
overview of standardisation activities (both AI-specific and non-AI specific) supporting the
cybersecurity of AI;
• analysis of coverage (Chapter 4): analysis of the coverage of the most relevant standards
identified in Chapter 3 with respect to the CIA security model and to trustworthiness
characteristics supporting cybersecurity;
• wrap-up and conclusions (Chapter 5): building on the previous sections, recommendations
on actions to ensure standardisation support to the cybersecurity of AI, and on preparation for
the implementation of the draft AI Act.
In line with previous ENISA work, which considers machine learning (ML) the driving force in
terms of AI technologies, the report mainly focuses on ML. This choice is further supported by
the fact that there seems to be a general consensus that ML techniques are predominant in
current AI applications. Last but not least, it is considered that the specificities of ML result in
vulnerabilities that affect the cybersecurity of AI in a distinctive manner. It is to be noted that the
report considers AI from a life cycle perspective3. Considerations that concern ML only have
been flagged as such.
2
For example, the United Nations Educational, Scientific and Cultural Organization (UNESCO) in the ‘First draft of the
recommendation on the ethics of artificial intelligence’, and the European Commission’s High-Level Expert Group on
Artificial Intelligence.
3
See the life cycle approach portrayed in the ENISA report Securing Machine Learning Algorithms
(https://www.enisa.europa.eu/publications/securing-machine-learning-algorithms).
ML systems cannot achieve 100 % in both precision and recall. Depending on the situation, ML needs to
trade off precision for recall and vice versa. It means that AI systems will, once in a while, make wrong
predictions. This is all the more important because it is still difficult to understand when the AI system will fail,
but it will eventually.
This is one of the reasons for the need for explainability of AI systems. In essence, algorithms are
deemed to be explainable if the decisions they make can be understood by a human (e.g., a developer or an
auditor) and then explained to an end user (ENISA, Securing Machine Learning Algorithms).
A major specific characteristic of ML is that it relies on the use of large amounts of data to develop
ML models. Manually controlling the quality of the data can then become impossible. Specific traceability or
data quality procedures need to be put in place to ensure that, to the greatest extent possible, the data being
used do not contain biases (e.g. forgetting to include faces of people with specific traits), have not been
deliberately poisoned (e.g. adding data to modify the outcome of the model) and have not been deliberately
or unintentionally mislabelled (e.g. a picture of a dog labelled as a wolf).
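To make the precision/recall notions above concrete, the following minimal Python sketch (ours, not drawn from any cited standard; the function name and the toy labels are illustrative) computes precision, recall and the F-measure for a binary classifier, the metrics whose trade-off is described in the box.

```python
from collections import Counter

def precision_recall_f1(y_true, y_pred, positive=1):
    """Compute precision, recall and the F-measure for a binary classifier.

    Illustrative only: a real evaluation would also report the false
    negative/positive rates mentioned in the report and would use a
    held-out test set reflecting the operational data distribution.
    """
    counts = Counter(zip(y_true, y_pred))
    tp = counts[(positive, positive)]
    fp = sum(v for (t, p), v in counts.items() if p == positive and t != positive)
    fn = sum(v for (t, p), v in counts.items() if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy example: 1 = "wolf", 0 = "dog"
y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 0, 1, 0, 1, 0, 0, 1]
print(precision_recall_f1(y_true, y_pred))  # (0.75, 0.75, 0.75)
```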
2.2 CYBERSECURITY OF AI
AI and cybersecurity have been widely addressed by the literature both separately and in
combination. The ENISA report Securing Machine Learning Algorithms5 describes the
multidimensional relationship between AI and cybersecurity, and identifies three dimensions:
• cybersecurity of AI: lack of robustness and the vulnerabilities of AI models and algorithms,
• AI to support cybersecurity: AI used as a tool/means to create advanced cybersecurity (e.g.,
by developing more effective security controls) and to facilitate the efforts of law enforcement
and other public authorities to better respond to cybercrime,
• malicious use of AI: malicious/adversarial use of AI to create more sophisticated types of
attacks.
The current report focuses on the first of these dimensions, namely the cybersecurity of AI. Still,
there are different interpretations of the cybersecurity of AI that could be envisaged:
• a narrow and traditional scope, intended as protection against attacks on the confidentiality,
integrity and availability of assets (AI components, and associated data and processes)
across the life cycle of an AI system,
• a broad and extended scope, supporting and complementing the narrow scope with
trustworthiness features such as data quality, oversight, robustness, accuracy, explainability,
transparency and traceability.
The report adopts a narrow interpretation of cybersecurity, but it also includes considerations
about the cybersecurity of AI from a broader and extended perspective. The reason is that links
between cybersecurity and trustworthiness are complex and cannot be ignored: the
requirements of trustworthiness complement and sometimes overlap with those of AI
cybersecurity in ensuring proper functioning. As an example, oversight is necessary not only for
the general monitoring of an AI system in a complex environment, but also to detect abnormal
behaviours due to cyberattacks. In the same way, a data quality process (including data
traceability) is an added value alongside pure data protection from cyberattack. Hence,
4
Besides the ones mentioned in the box, the ‘false negative rate’, the ‘false positive rate’ and the ‘F-measure’ are
examples of other relevant metrics.
5
https://www.enisa.europa.eu/publications/securing-machine-learning-algorithms
3. STANDARDISATION IN
SUPPORT OF
CYBERSECURITY OF AI
Still, in the scope of this report, the focus is mainly on standards that can be harmonised. This
limits the scope of analysis to those of the International Organization for Standardization (ISO)
and International Electrotechnical Commission (IEC), the European Committee for
Standardization (CEN) and European Committee for Electrotechnical Standardization
(CENELEC), and the European Telecommunications Standards Institute (ETSI). CEN and
CENELEC may transpose standards from ISO and IEC, respectively, to EU standards under the
auspices of, respectively, the Vienna and Frankfurt agreements.
3.1.1 CEN-CENELEC
CEN-CENELEC addresses AI and Cybersecurity mainly within two joint technical committees
(JTCs).
• JTC 13 ‘Cybersecurity and data protection’ has as its primary objective to transpose relevant
international standards (especially from ISO/IEC JTC 1 subcommittee (SC) 27) as European
standards (ENs) in the information technology (IT) domain. It also develops ‘homegrown’ ENs,
where gaps exist, in support of EU directives and regulations.
• JTC 21 ‘Artificial intelligence’ is responsible for the development and adoption of standards for
AI and related data (especially from ISO/IEC JTC 1 SC 42), and providing guidance to other
technical committees concerned with AI.
JTC 13 addresses what is described as the narrow scope of cybersecurity (see Section 2.2).
The committee has identified a list of standards from ISO-IEC that are of interest for AI
cybersecurity and might be adopted/adapted by CEN-CENELEC based on their technical
cooperation agreement. The most prominent identified standards belong to the ISO 27000
series on information security management systems, which may be complemented by the ISO
15408 series for the development, evaluation and/or procurement of IT products with security
functionality, as well as sector-specific guidance, e.g. ISO/IEC 27019:2017 Information
technology – Security techniques – Information security controls for the energy utility industry
(see Annex A.1 for the full list of relevant ISO 27000 series standards that have been
identified by CEN-CENELEC).
In addition, the following guidance and use case documents are drafts under development
(some at a very early stage) and explore AI more specifically. It is premature to evaluate the
impacts of these standards.
• ISO/IEC AWI 27090, Cybersecurity – Artificial intelligence – Guidance for addressing security
threats and failures in artificial intelligence systems: The document aims to provide information
to organisations to help them better understand the consequences of security threats to AI
systems, throughout their life cycles, and describes how to detect and mitigate such threats.
The document is at the preparatory stage.
• ISO/IEC CD TR 27563, Cybersecurity – Artificial Intelligence – Impact of security and privacy
in artificial intelligence use cases: The document is at the committee stage.
By design, JTC 21 is addressing the extended scope of cybersecurity (see Section 2.2), which
includes trustworthiness characteristics, data quality, AI governance, AI management systems,
etc. Given this, a first list of ISO-IEC/SC 42 standards has been identified as having direct
applicability to the draft AI Act and is being considered for adoption/adaptation by JTC 21:
In addition, JTC 21 has identified two gaps and has launched accordingly two ad hoc groups
with the ambition of preparing new work item proposals (NWIPs) supporting the draft AI Act.
The potential future standards are:
Finally, it has been determined that ISO-IEC 42001 on AI management systems and ISO-IEC
27001 on cybersecurity management systems may be complemented by ISO 9001 on quality
management systems in order to have proper coverage of AI and data quality management.
3.1.2 ETSI
ETSI has set up a dedicated Operational Co-ordination Group on Artificial Intelligence, which
coordinates the standardisation activities related to AI that are handled in the technical bodies,
committees and industry specification groups (ISGs) of ETSI. In addition, ETSI has a specific
group on the security of AI (SAI) that has been active since 2019 in developing reports that give
a more detailed understanding of the problems that AI brings to systems. In addition, a large
number of ETSI’s technical bodies have been addressing the role of AI in different areas, e.g.,
zero touch network and service management (ISG ZSM), health (TC eHEALTH) and transport
(TC ITS).
ISG SAI is a pre-standardisation group identifying paths to protect systems from AI, and AI from
attack. This group is working on a technical level, addressing specific characteristics of AI. It has
published a number of reports and is continuing to develop reports to promote a wider
understanding and to give a set of requirements for more detailed normative standards if such
are proven to be required.
The following are published group reports (GRs) from ISG SAI that apply to understanding and
developing protections to and from AI:
The following work items of ISG SAI are in development/pending publication at the time of
writing:
In addition to the work already published and being developed, the group maintains a ‘roadmap’
that identifies the longer-term planning of work and how various stakeholders interact.
In addition, as a direct consequence of the draft AI Act and the Cybersecurity Act, the following
potential future WIs are being discussed: AI readiness and transition, testing, and certification.
The work in ETSI ISG SAI is within the wider context of ETSI’s work in AI, which includes
contributions from the other ETSI bodies, including its cybersecurity technical committee (TC
Cyber). Among other projects, the committee is specifically extending TS 102 165-1, Methods
and protocols; Part 1: Method and pro forma for threat, vulnerability, risk analysis (TVRA).
3.1.3 ISO-IEC
ISO-IEC carries out its work on AI in JTC 1 SC 42. The list in Annex A.2 presents the
standards published or under development with their publication target dates (unless already
mentioned in the previous sections).
3.1.4 Others
Almost all horizontal and sectoral standardisation organisations have launched AI-related
standardisation activities, with very little consistency among them. The report Landscape of AI
Standards published by StandICT6 identifies more than 250
documents, and it is most likely that many more are missing. The International Telecommunication
Union (ITU), the Institute of Electrical and Electronics Engineers (IEEE) and SAE International
are some of the organisations that are very active on AI. In the process of building the
standardisation landscape, it has been observed that it is almost impossible to have access to
the content of the documents, especially if they are in their development phase, and it is
therefore impossible to assess their relevance and maturity beyond their titles.
6
https://www.standict.eu/
One of the most interesting identified projects, though, is SAE AIR AS6983, which is dedicated
to AI/ML in aeronautics and is very similar in scope to the ambition of the JTC 21 project on AI
trustworthiness characterisation. Its publication is expected in 2023.
It is also recognised that major software vendors prepare their own standards and guidance on
the use of their AI functional capabilities, and in many cases (e.g. where software is distributed
by an app store) will require detailed review and quality controls before software is made available on
the market. This is in addition to the statutory obligations of the developer. Finally, the US
National Institute of Standards and Technology (NIST) is also active in the area of AI and
released its AI Risk Management Framework (AI RMF 1.0) in January 20237.
7
https://www.nist.gov/itl/ai-risk-management-framework
4. ANALYSIS OF COVERAGE
This section provides an analysis of the coverage of the most relevant standards identified in
the previous chapters with respect to the CIA security model and to trustworthiness
characteristics supporting cybersecurity.
Integrity
• Evasion: a type of attack in which the attacker works on the ML algorithm’s inputs to find small perturbations leading to large modifications of its outputs (e.g. decision errors). It is as if the attacker created an ‘optical illusion’ for the algorithm. Such modified inputs are often called adversarial examples.
• Poisoning: a type of attack in which the attacker alters data or models to modify the ML algorithm’s behaviour in a chosen direction (e.g. to sabotage its results or to insert a back door). It is as if the attacker conditioned the algorithm according to its motivation.

Availability
• Denial of service: ML algorithms usually consider input data in a defined format to make their predictions. Thus, a denial of service could be caused by input data whose format is inappropriate. However, it may also happen that a malicious user of the model constructs an input (a ‘sponge example’) specifically designed to increase the computation time of the model and thus potentially cause a denial of service.
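As a toy illustration of the evasion attack described above (a hedged sketch of the general idea, not a reproduction of any attack from the cited documents; the model weights and the input are invented), the following Python code perturbs the input of a small logistic-regression classifier just enough to flip its decision while keeping the perturbation small.

```python
import numpy as np

# Toy linear classifier: p(y=1|x) = sigmoid(w.x + b)
w = np.array([2.0, -1.5, 0.5])
b = -0.2
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(w @ x + b)

x = np.array([0.3, 0.1, 0.4])           # legitimate input, classified as 1
print("clean score:", predict(x))        # ~0.61 -> class 1

# Evasion (FGSM-style): move each feature slightly in the direction that
# decreases the score for the true class. For this model the gradient of
# the score with respect to x is proportional to w, so sign(w) suffices.
eps = 0.15
x_adv = x - eps * np.sign(w)
print("adversarial score:", predict(x_adv))        # ~0.46 -> class 0
print("perturbation size:", np.abs(x_adv - x).max())  # bounded by eps
```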
If we consider AI systems as software and we consider their whole life cycles, general-purpose
standards, i.e. those that are not specific to AI and that address technical and organisational
aspects, can contribute to mitigating many of the risks faced by AI. The following ones have
been identified as particularly relevant:
• ISO/IEC 27001, Information security management, and ISO/IEC 27002, Information security
controls: relevant to all security objectives,
• ISO/IEC 9001, Quality management system: especially relevant to integrity (e.g. in particular
for data quality management to protect against poisoning) and availability.
8
Based on the White Paper ‘Towards auditable AI systems’ of Germany’s Federal Office for Information Security
(https://www.bsi.bund.de/SharedDocs/Downloads/EN/BSI/KI/Towards_Auditable_AI_Systems.pdf?__blob=publicationFile&
v=6) and on the ENISA report Securing Machine Learning Algorithms (https://www.enisa.europa.eu/publications/securing-
machine-learning-algorithms).
9
There are also cybersecurity attacks that are not specific to AI, but could affect CIA even more severely. ETSI GR/SAI-
004, Problem Statement, and ETSI GR/SAI-006, The Role of Hardware in Security of AI, can be referred to for more
detailed descriptions of traditional cyberattacks on hardware and software.
Two questions then arise:
• firstly, the extent to which general-purpose standards should be adapted to the specific AI
context for a given threat,
• secondly, whether existing standards are sufficient to address the cybersecurity of AI or whether they
need to be complemented.
Concerning the first question, it is suggested that general-purpose standards either apply or can
be applied if guidance is provided. To simplify, although AI has some specificities, it is in its
essence software; therefore, what is applicable to software can be applied to AI. Still, SDOs are
actively addressing AI specificities, and many existing general-purpose standards are in the
process of being supplemented to better address AI. This means that, at a general level,
existing gaps concern clarification of AI terms and concepts, and the application of existing
standards to an AI context, and in particular the following.
Concerning the second question, it is clear from the activity of the SDOs that there is concern
about insufficient knowledge of the application of existing techniques to counter threats and
vulnerabilities arising from AI. The concern is legitimate and, while it can be addressed with ad
hoc guidance/updates, it is argued that this approach might not be exhaustive and has some
limitations, as outlined below.
• The notion of AI can include both technical and organisational elements not limited to
software, such as hardware or infrastructure, which also need specific guidance. For
example, ISO/IEC/IEEE 42010 edition 2, Architecture description vocabulary, considers the
cybersecurity of an entity of interest that integrates AI capabilities, including for example
hardware, software, organisations and processes. In addition, new changes in AI system and
application scenarios should be taken into consideration when closing the gap between
general systems and AI ones.
• The application of best practices for quality assurance in software might be hindered
by the opacity of some AI models.
• Compliance with ISO 9001 and ISO/IEC 27001 is at organisation level, not at system
level. Determining appropriate security measures relies on a system-specific analysis.
The identification of standardised methods supporting the CIA security objectives is often
complex and application or domain specific, as in large part the attacks to be mitigated
depend on the application or domain. Although there are general attacks on many cyber
systems, and some very specific attacks that can be directed at many different systems, they
often rely upon a small set of exploitable vulnerabilities that are specific to a domain
or an application. In this sense, ETSI TS 102 165-1, Methods and protocols; Part 1: Method
and pro forma for threat, vulnerability, risk analysis (TVRA)10, and ISO/IEC 15408-1,
Evaluation criteria for IT security, can be used to perform specific risk assessments.
• The support that standards can provide to secure AI is limited by the maturity of
technological development, which should therefore be encouraged and monitored. In
other words, in some areas existing standards cannot be adapted or new standards cannot be
fully defined yet, as related technologies are still being developed and not yet quite mature
enough to be standardised. In some cases, first standards can be drafted (e.g. ISO/IEC TR
24029-1:2021 on the robustness of deep neural networks) but will probably need to be
regularly updated and adapted as research and development (R&D) progresses. For example,
from the perspective of ML research, much of the work on adversarial examples, evasion
attacks, measuring and certifying adversarial robustness, addressing specificities of data
poisoning for ML models, etc. is still quite active R&D. Another challenge related to R&D on AI
and standardisation is benchmarking: research results are often not comparable, resulting in a
situation where it is not always clear what works under what conditions.
Continuous learning is the ability of an AI component to evolve during its operational life through
the use of in-operation data for retraining the AI component. This function is often perceived as
the key ability of AI.
Model poisoning is easy to do during continuous learning / in-operation learning. For example,
during continuous learning, it is very challenging to check the quality of the data in real time.
When it comes to high-risk AI components, the use of continuous learning would imply
continuous validation of the data used for the training of the AI component (continuous data
quality assessment), continuous monitoring of the AI component, continuous risk assessment,
continuous validation and continuous certification if needed. While the issues with continuous
learning have been described in ISO/IEC 22989, Information technology – Artificial intelligence
– Artificial intelligence concepts and terminology, and the activities described above are
conceptually feasible, their execution is still the object of R&D.
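A minimal sketch of what ‘continuous validation of the data used for training’ could look like in practice is given below. It assumes a single tabular feature and uses a basic range check plus a crude mean-shift drift test; the function name, thresholds and data are ours, not a procedure prescribed by ISO/IEC 22989 or any other cited standard.

```python
import numpy as np

def validate_batch(batch, ref_mean, ref_std, valid_range, z_threshold=3.0):
    """Screen an in-operation data batch before it is used for retraining.

    Returns (ok, reasons). Illustrative checks only: range validation
    against the expected feature range, and a drift test comparing the
    batch mean with the statistics of the curated training data.
    """
    reasons = []
    lo, hi = valid_range
    if np.any((batch < lo) | (batch > hi)):
        reasons.append("out-of-range values (possible poisoning or sensor fault)")
    # Drift check: is the batch mean implausibly far from the reference mean?
    z = abs(batch.mean() - ref_mean) / (ref_std / np.sqrt(len(batch)))
    if z > z_threshold:
        reasons.append(f"distribution shift (z={z:.1f})")
    return (not reasons), reasons

rng = np.random.default_rng(0)
reference = rng.normal(10.0, 2.0, 10_000)   # curated training data
incoming = rng.normal(13.0, 2.0, 200)       # suspicious in-operation batch
ok, why = validate_batch(incoming, reference.mean(), reference.std(), (0.0, 25.0))
if not ok:
    print("batch quarantined, retraining skipped:", why)
```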
• The traceability and lineage of both data and AI components are not fully addressed.
The traceability of processes is addressed by several standards related to quality; in that
regard, ISO 9001 is the cornerstone of quality management. However, the traceability of data
and AI components throughout their life cycles remains an issue that cuts across most threats
and remains largely unaddressed. Indeed, both data and AI components may have very
complex life cycles: data may come from many sources and be transformed and augmented,
and AI components may reuse third-party or even open-source components, all of which are
sources of increased risk. This implies that technologies, techniques and procedures related
to traceability need to be put in place to ensure the quality of AI systems, for instance that the
data being used do not contain biases (e.g. forgetting to include faces of people with specific
traits), have not been deliberately poisoned (e.g. adding data to modify the outcome of the
model) and have not been deliberately or unintentionally mislabelled (e.g. a picture of a dog
labelled as a wolf). A minimal lineage example is sketched further below.
• The inherent features of ML are not fully reflected in existing standards. As introduced in
Section 2.1, ML cannot, by design, be expected to be 100 % accurate. While this can also be
true for (for example) rule-based systems designed by humans, ML has a larger input space
(making exhaustive testing difficult), black-box properties and high sensitivity, meaning that
small changes in inputs can lead to large changes in outputs. Therefore, it is even more
10
Currently under revision to include AI as well.
11
It is to be noted though that the concept of continuous learning is subject to different interpretations. It is not always clear
how it differs from updating the system from time to time, i.e. what frequency of re-training would justify the label
‘continuous learning’.
important to understand, on the one hand, how the risk of failure can be mitigated and, on the
other, if/when a failure is caused by a malicious actor. The most obvious aspects to be
considered in existing/new standards can be summarised as follows.
• AI/ML components may be associated with hardware or other software components in order
to mitigate the risk of functional failure, therefore changing the cybersecurity risks associated
with the resulting set-up12.
• Reliable metrics can help a potential user detect a failure. For example, with precision and
recall metrics for AI systems relying on supervised classification, if users know the
precision/recall thresholds of an AI system they should be able to detect anomalies when
measuring values outside those thresholds, which may indicate a cybersecurity incident (see
the sketch after this list). While this would be a general check (more efficient for attacks on a
massive scale than for specific attacks), the accurate definition of reliable metrics is a
prerequisite for defining more advanced measurements.
• Testing procedures during the development process can lead to certain levels of
accuracy/precision.
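As a minimal illustration of the threshold check described in the list above, the sketch below compares the precision and recall observed on a labelled monitoring sample with the levels declared for the system; the threshold values, counts and function name are hypothetical and not taken from any cited standard.

```python
def check_against_declared_thresholds(tp, fp, fn, min_precision=0.90, min_recall=0.85):
    """Compare observed performance on a monitoring window with the
    precision/recall levels declared for the system. A sustained drop below
    the declared thresholds is an anomaly worth investigating, possibly
    (but not necessarily) a cybersecurity incident."""
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 1.0
    alerts = []
    if precision < min_precision:
        alerts.append(f"precision {precision:.2f} below declared {min_precision}")
    if recall < min_recall:
        alerts.append(f"recall {recall:.2f} below declared {min_recall}")
    return alerts

# Counts obtained from a labelled sample of recent production traffic
print(check_against_declared_thresholds(tp=170, fp=40, fn=25))
```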
It is to be noted that the subject of metrics for AI systems and of testing procedures is
addressed by standardisation deliverables such as ISO/IEC DIS 5338, AI system life cycle
processes (under development); ISO/IEC AWI TS 12791, Treatment of unwanted bias in
classification and regression machine learning tasks (under development); ETSI TR 103 305-x,
Critical security controls for effective cyber defence; and ETSI GR SAI-006, The role of
hardware in security of AI13. However, the coverage of the AI system trustworthiness metrics
that are needed is incomplete, which is one reason for the CEN-CENELEC initiative on the ‘AI
trustworthiness characterisation’ project.
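Returning to the traceability gap noted above, the following sketch illustrates one basic technique that such procedures could build on: recording a lineage manifest that hashes the datasets and the resulting model artefact, so that any later modification can be detected. The file names, manifest format and function names are hypothetical and not taken from any cited standard; real lineage schemes would be considerably richer.

```python
import hashlib
import json
import time
from pathlib import Path

def sha256(path: Path) -> str:
    """Content hash used to detect any later modification of an artefact."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def record_lineage(data_files, model_file, manifest="lineage.json"):
    """Write a minimal lineage record linking a trained model to its inputs."""
    entry = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "datasets": {str(p): sha256(Path(p)) for p in data_files},
        "model": {str(model_file): sha256(Path(model_file))},
    }
    Path(manifest).write_text(json.dumps(entry, indent=2))
    return entry

# Hypothetical artefacts produced by a training pipeline:
# record_lineage(["train.csv", "labels.csv"], "model.bin")
```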
Table 3 exemplifies this relation in the context of the draft AI Act. It shows the role of
cybersecurity within a set of requirements outlined by the act that can be considered as
referring to the trustworthiness of an AI ecosystem. In fact, some of them (e.g. quality
management, risk management) contribute to building an AI ecosystem of trust indirectly, but
have been included because they are considered equally important and they are requirements
of the draft AI Act14.
12
For example, a self-driving car could be automatically deactivated if the supervising system detected abnormal conditions
that could signal a cybersecurity attack.
13
Other examples include ISO/IEC 23894, Information technology – Artificial intelligence – Guidance on risk management;
ISO/IEC DIS 42001, Information technology – Artificial intelligence – Management system; and ISO/IEC DIS 24029-2,
Artificial intelligence (AI) – Assessment of the robustness of neural networks – Part 2: Methodology for the use of formal
methods.
14
The European Commission’s High-Level Expert Group on Artificial Intelligence has identified seven characteristics of
trustworthiness: human agency and oversight; technical robustness and safety; privacy and data governance;
transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability.
Table 3 (15): Role of cybersecurity within a set of requirements outlined by the draft AI Act

Data and data governance
Description: High-risk AI systems which make use of techniques involving the training of models with data shall be developed on the basis of training, validation and testing datasets that meet a set of quality criteria.
Relevance of cybersecurity: The requirements here address data quality, which is key to secure data feeds, processing and outputs. Data quality can be reinforced by the use of tools that verify the source of data and the integrity of data (i.e. to prove that data have not been manipulated between source and sink), and by limiting access to data.

Record-keeping
Description: High-risk AI systems shall be designed and developed with capabilities enabling the automatic recording of events (‘logs’) while the high-risk AI system is operating. Those logging capabilities shall conform to recognised standards or common specifications.
Relevance of cybersecurity: All of the major security management control standards (e.g. ISO 27000 and ETSI TR 103 305) address the importance of event logging and of having the staff to analyse the logs. These logs probably contain sensitive data, and appropriate standard cybersecurity measures, i.e. CIA, need to be deployed.

Transparency and provision of information to users
Description: High-risk AI systems shall be designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable users to interpret the system’s output and use it appropriately. An appropriate type and degree of transparency shall be ensured, with a view to achieving compliance with the relevant obligations of the user and of the provider set out in Chapter 3 of [COM(2021) 206 final].
Relevance of cybersecurity: As noted above, documentation in itself is not a security requirement. However, as a security control, technical documentation is a key element in system transparency and in (high-level) explainability.

Human oversight
Description: High-risk AI systems shall be designed and developed in such a way, including with appropriate human–machine interface tools, that they can be effectively overseen by natural persons during the period in which the AI system is in use.
Relevance of cybersecurity: This form of control is identified in ISO 27001 and in ETSI TS 103 305-1. Where human oversight is required, it should form an integral part of the design of the system, and performance and other constraints should be added to the role of oversight. This may include the performance of mandatory actions and checks, and rules for escalation of an event assessment.

Risk management system
Description: An assessment through internal checks for ‘stand-alone’ high-risk AI systems would require a full, effective and properly documented ex ante compliance with all requirements of the regulation and compliance with robust quality and risk management systems and post-market monitoring. A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems.
Relevance of cybersecurity: ISO/IEC 31000 is a framework for risk analysis and the management of risk analysis systems. At a more detailed level, tools for vulnerability analysis (e.g. ETSI TS 102 165-1) may apply, as well as runtime analysis tools. Many development environments will perform both static and dynamic tests on software that allow risks in the codebase to be identified. The suite of measures should operate in concert.

Quality management system
Description: Providers of high-risk AI systems shall put a quality management system in place that ensures compliance with this Regulation. The provider should establish a sound quality management system, ensure the accomplishment of the required conformity assessment procedure, draw up the relevant documentation and establish a robust post-market monitoring system.
Relevance of cybersecurity: ISO 9001 is the overarching standard for the implementation of a quality management system in development environments, which should include security management aspects.

Conformity assessment
Description: AI systems that create a high risk to the health and safety or fundamental rights of natural persons: in line with a risk-based approach, these high-risk AI systems are permitted on the European market subject to compliance with certain mandatory requirements and an ex ante conformity assessment.
Relevance of cybersecurity: This is necessary for the evaluation of all requirements, including cybersecurity.

Robustness
Description: AI systems should be resilient against risks connected to the limitations of the system (e.g. errors, faults, inconsistencies, unexpected situations) as well as against malicious actions that may compromise the security of the AI system and result in harmful or otherwise undesirable behaviour.
Relevance of cybersecurity: Cybersecurity is one of the key aspects – albeit not the only one – of robustness. It concerns the protection of the AI system against attacks as well as the capacity to recover from such attacks.

15
Source: adapted from Nativi, S. and De Nigris, S., AI Standardisation Landscape: State of play and link to the EC
proposal for an AI regulatory framework (https://publications.jrc.ec.europa.eu/repository/handle/JRC125952).
The general-purpose technical and organisational standards outlined in Section 3.1 cover these
trustworthiness aspects to some extent. The SDOs are actively tackling the matter and are
developing AI-specific standards in support of trustworthiness. In particular, ISO/IEC SC 42 is
developing most of those aspects in multiple standards, and CEN-CENELEC JTC 21 is working
towards adopting/adapting those standards (see annex A.3). This is normal and, to some
extent, inevitable at first. Still, in a regulatory context, one could expect a unified
comprehensive, coherent and synthetic approach to trustworthiness while avoiding the
multiplication – and to some extent duplication – of efforts. Furthermore, it would be inefficient
and even counterproductive to have multiple sets of standards for the same characteristics
(robustness, explainability, etc.), some coming from the cybersecurity domain and some coming
from the AI domain, with risks of discrepancy. The result is that a unified approach to
trustworthiness characteristics is highly recommended. In particular, in order to bring coherency
and comprehensiveness, it is necessary to clarify who is doing what, in order to avoid needless
and confusing duplication, and a certain level of coordination and liaison is vital.
When it comes to AI systems, conformity assessment will be performed against all requirements
outlined in the draft AI Act, trustworthiness, including its cybersecurity aspects, being among
them. Existing standards on trustworthiness often lack conformity assessment methods, and sometimes
also technical requirements and metrics. While there are a lot of activities in ISO/IEC SC 42
regarding trustworthiness characteristics, there are also a lot of gaps and very few developed
requirements and metrics. Therefore, there is the risk that conformity assessment methods will
be addressed by different standards depending on the characteristic being evaluated. Since
some characteristics overlap, while others might be contradictory (e.g. there might
be a trade-off between transparency and cybersecurity), a global and coherent approach is
needed.
For example, identified adversarial attack threats could be taken into account both in the ML algorithm
itself and in the testing and validation process. In that specific case, the threats could have been identified
by the AI system’s monitoring/oversight process and by the testing process. It is likely that some
technical requirements/adjustments resulting from the cybersecurity threat assessment should
find their place in the AI standards repository relating both to oversight and to testing.
It is important to note that this approach differs from the cybersecurity risk-based approach,
which sees a cybersecurity risk as a function of its adverse impact and its likelihood of
occurrence. Based on the draft AI Act, cybersecurity is a requirement that applies, and therefore
is assessed, only once a system is identified as high risk.
These high-risk systems are subject to a number of requirements, cybersecurity being one of
them, as in Article 15, ‘Accuracy, robustness and cybersecurity’. The cybersecurity
requirements outlined are legal and remain at a high level. Still, explicit reference is made to
some technical aspects:
High-risk AI systems shall be resilient as regards attempts by unauthorised third parties to alter
their use or performance by exploiting the system vulnerabilities.
[…]
The technical solutions to address AI specific vulnerabilities shall include, where appropriate,
measures to prevent and control for attacks trying to manipulate the training dataset
(‘data poisoning’), inputs designed to cause the model to make a mistake (‘adversarial
examples’), or model flaws.
The draft AI Act also lays down, in Article 13, ‘Transparency and provision of information to
users’, that high-risk AI systems are to be accompanied by instructions for use, specifying,
among other things, ‘the level of accuracy, robustness and cybersecurity referred to in
Article 15 against which the high-risk AI system has been tested and validated and which can
be expected, and any known and foreseeable circumstances that may have an impact on
that expected level of accuracy, robustness and cybersecurity’.
In addition, the draft AI Act refers to cybersecurity in its recitals. In particular, recital 51 mentions
that, ‘To ensure a level of cybersecurity appropriate to the risks, suitable measures should
therefore be taken by the providers of high-risk AI systems, also taking into account as
appropriate the underlying ICT infrastructure’.
Finally, the draft AI Act tackles cybersecurity through a number of other requirements, as
exemplified in Table 3. The annexes (A.3 and A.4) contain an overview of activities of European
standardisation organisations (ESOs) with respect to the requirements of the AI Act. Building on
those, as well as on the previous sections, the following considerations have been outlined
concerning the implementation of the draft AI Act from a cybersecurity perspective.
17
For example, ISO/IEC JTC 1/SC 42 is working on an AI risk management standard (ISO 23894, Information technology–
Artificial intelligence – Guidance on risk management) to be complemented by a specific JTC 21 standard on ‘AI risk
catalogue and AI risk management’.
The draft AI Act also depicts a governance system upon which the conformity assessment of
AI systems relies. Besides the specific recommendations on conformity assessment outlined
above, the following are noted.
• Ensure that the actors performing conformity assessment on AI systems have
standardised tools and competences, including on cybersecurity. In certain cases,
conformity assessment may be performed by notified bodies. AI trustworthiness will therefore
rely partly on the competences of those bodies. If those bodies do not have the proper
competences, they could make bad assessments and even bias the market. To date there are
no standards that adequately cover cybersecurity and describe the competences of
organisations for auditing, certification and testing of AI systems (and AI management
systems) and their evaluators. This is crucial, as it is most likely that some AI algorithms will
attack AI systems while other AI algorithms will protect them. The new AI threats (threats
using AI) will probably become more and more efficient at exploiting existing vulnerabilities, while AI
algorithms (cybersecurity using AI) could, for example, monitor the behaviour of an AI system
to protect it. To sum up, there are standardisation gaps on ‘competences for validation, testing,
auditing and certification of AI systems’ and on ‘competences for auditing and certification of AI
management systems’ (although a project on this last point is being prepared by ISO/IEC SC
42, it is unclear to what extent it will be sufficient).
• Ensure regulatory coherence between the draft AI Act and legislation on cybersecurity.
In particular, Article 42 of the draft AI Act sets out a presumption of conformity with
cybersecurity requirements for high-risk AI systems that have been certified or for which a
statement of conformity has been issued under a cybersecurity scheme pursuant to
Regulation (EU) 2019/881 (the Cybersecurity Act)18. While no official request for an EU
cybersecurity certification scheme for AI has been issued yet, it is important that, if developed,
such a scheme would take due consideration of the draft AI Act – and vice versa. For
example, the Cybersecurity Act sets out three levels of assurance (basic, substantial, high),
which are commensurate with the level of the risk associated with the intended use of an ICT
product/service/ process. These levels provide the rigour and depth of the evaluation of the
ICT product/service/process and refer to technical specifications, standards and procedures,
including those to mitigate or prevent incidents. It remains to be defined whether and how
these assurance levels can apply in the context of the draft AI Act.
• Another regulatory development that might affect the draft AI Act is the proposal COM(2022)
454 for a regulation on horizontal cybersecurity requirements for products with digital
elements (the Cyber Resilience Act)19. The proposal was presented in September 2022.
18
Regulation (EU) 2019/881 on ENISA (the European Union Agency for Cybersecurity) and on information and
communications technology cybersecurity certification and repealing Regulation (EU) No 526/2013 (Cybersecurity Act)
(https://eur-lex.europa.eu/eli/reg/2019/881/oj).
19
https://digital-strategy.ec.europa.eu/en/library/cyber-resilience-act
5. CONCLUSIONS
This section sums up the report and recommends actions to ensure standardisation support to
the cybersecurity of AI, and to the implementation of the draft AI Act.
5.1 WRAP-UP
The study suggests that general-purpose standards for information security and quality
management (in particular ISO/IEC 27001, ISO/IEC 27002 and ISO/IEC 9001) can partially
mitigate the cybersecurity risks related to the confidentiality, integrity and availability of AI
systems. This conclusion relies on the assumption that AI is in its essence software, and
therefore what is applicable to software can be applied to AI, if adequate guidance is provided.
This approach can suffice at a general level but needs to be complemented by a system-
specific analysis (e.g. relying on ISO/IEC 15408-1:2009), as the identification of standardised
methods supporting the CIA security objectives is often domain specific. It is a matter of debate
to what extent the assessment of compliance with the resulting security requirements can be
based on AI-specific horizontal standards and to what extent it can be based on vertical/sector-
specific standards.
In addition, the following gaps have been identified:
• the traceability of processes is addressed by several standards, but the traceability of the data
and AI components throughout their life cycles remains an issue that cuts across most threats
and remains largely unaddressed in practice, despite being covered well in various standards
or drafts (e.g. ISO/IEC DIS 42001 on AI management systems 20 and the ISO/IEC CD 5259
series on data quality for analytics and ML21);
• the inherent features of ML are not fully reflected in existing standards, especially in terms of
metrics and testing procedures;
• in some areas, existing standards cannot be adapted or new standards cannot be fully defined
yet, as related technologies are still being developed and not yet quite mature enough to be
standardised.
Going beyond the mere CIA paradigm and considering the broader trustworthiness perspective,
the main takeaway is that, since cybersecurity cuts across a number of trustworthiness
requirements (e.g. data governance, transparency), it is important that standardisation activities
around these requirements treat cybersecurity in a coherent manner.
Concerning the implementation of the draft AI Act, besides the considerations above, the
following gaps have been identified:
• to date there are no standards that adequately cover cybersecurity and describe the
competences of organisations for auditing, certification and testing of AI systems (and AI
management systems) and their evaluators;
• the abovementioned gap on areas that are the subject of R&D is relevant to the
implementation of the draft AI Act, in particular with respect to data poisoning and adversarial
examples.
20
ISO/IEC DIS 42001, Information technology — Artificial intelligence — Management system (under development)
21
The series is under development (https://www.iso.org/ics/35.020/x/)
5.2 RECOMMENDATIONS
While it is recognised that the ESOs have different operational models and different
membership profiles, it is also recognised that the ESOs operate cooperatively in many fields,
and this is, again, to be encouraged. Competitive effort to develop standards is to some extent
inevitable and, while that is recognised, the ESOs are strongly discouraged from negative
competition. One area where harmonisation is seen as essential is in the adoption of a common
AI-related terminology and set of concepts not only across SDOs but also with other
stakeholders. The present report does not suggest which SDO/ESO should initiate this activity
but it is strongly suggested that, without a common set of cross-domain terminology and
concepts, the first risk to cybersecurity would be not understanding each other22.
22
Two horizontal terminology-related standards (ISO/IEC 22989 and ISO/IEC 23053) have been published recently (June
and July 2022). JTC 21 will base all its work on ISO/IEC terminology.
Recommendation 7: Support the development of standards for the tools and competences of
the actors performing conformity assessment.
Recommendation 8: Ensure coherence between the draft AI Act and other legislative initiatives
on cybersecurity, notably Regulation (EU) 2019/881 (the Cybersecurity Act) and the proposal
COM(2022) 454 for a regulation on horizontal cybersecurity requirements for products with
digital elements (the Cyber Resilience Act).
A ANNEX:
Name | Reference | Status
Information technology — Big data reference architecture — Part 2: Use cases and derived requirements | ISO/IEC TR 20547-2:2018 | Published
Artificial Intelligence (AI) Use cases | ISO/IEC TR 24030:2021 | Published (to be revised, new version expected in May 2023)
Software and systems engineering — Software testing — Part 11: Testing of AI systems | ISO/IEC AWI TS 29119-11 | Under development (target: February 2024)
Quality evaluation guidelines for AI systems | ISO/IEC AWI TS 5471 | Under consideration
Each deliverable is mapped against the draft AI Act requirements (record keeping, transparency and provision of information to users, human oversight, accuracy, robustness, cybersecurity and conformity assessment, among others) and is listed here with its type (IS, TR or EN), the action foreseen (adopt/adapt an ISO/IEC deliverable or develop an ESO deliverable) and its target date.

Name | Type | Action | Target date
ISO/IEC 22989:2022, Artificial intelligence concepts and terminology | IS | Adopt | July 2022
ISO/IEC 23053:2022, Framework for artificial intelligence (AI) systems using machine learning (ML) | IS | Adopt | July 2022
ISO/IEC CD 5259-1, Data quality for analytics and machine learning (ML) – Part 1: Overview, terminology, and examples | IS | Adopt | December 2023
ISO/IEC 9001:2015, Quality management systems – Requirements | IS | – | 2015
ISO/IEC 42001, Artificial intelligence – Management system | IS | Adopt | December 2023
ISO/IEC 27001:2022, Information security management systems – Requirements | IS | – | 2022
ISO/IEC 23894, Guidance on risk management | IS | Adopt | December 2023
CEN-CENELEC, Risk catalogue and risk management | EN | Develop | Q1 2025
ISO/IEC TR 24029-1, Assessment of the robustness of neural networks – Part 1: Overview | TR | Adopt | December 2022
ISO/IEC 24029-2, Assessment of the robustness of neural networks – Part 2: Methodology for the use of formal methods | IS | Adopt | December 2023
CEN-CENELEC, AI trustworthiness characterisation | EN | Develop | Q1 2025
ISO/IEC CD 5259-2, Data quality for analytics and machine learning (ML) – Part 2: Data quality measures | IS | Adopt | Q2 2024
ISO/IEC CD 5259-3, Data quality for analytics and machine learning (ML) – Part 3: Data quality management requirements and guidelines | IS | Adopt | Q3 2023
ISO/IEC CD 5259-4, Data quality for analytics and machine learning (ML) – Part 4: Data quality process framework | IS | Adopt | Q4 2024
Each ETSI deliverable is mapped against the draft AI Act requirements (record keeping, transparency and provision of information to users, human oversight, accuracy, robustness, cybersecurity and conformity assessment, among others) and is listed here with its type (TR, TS or EN), its status and, where available, its target date.

Name | Type | Status | Target date
DTR/MTS-103910, MTS AI Testing – Test Methodology and Test Specification for AI-enabled Systems | TR | Early draft | July 2024
DTR/MTS-1191168, MTS AI Testing – AI-enabled Testing in Standardisation | TR | WI adopted | TBC
TR 103 911, MTS AI testing – AI-enabled testing in standardisation | TR | Under development (late) | –
EN 303 645, Cyber security for consumer internet of things: Baseline requirements | EN | Published | –
TR 103 304, Personally identifiable information (PII) protection in mobile and cloud services | TR | Published | –
TR 103 305, Critical security controls for effective cyber defence | TR | Published | –
TR 103 370, Practical introductory guide to technical standards for privacy | TR | Published | –
TR 103 404, Network technologies (NTECH); Autonomic network engineering for the self-managing future internet (AFI); Autonomicity and self-management in the backhaul and core network parts of the 3GPP architecture | TR | Published | –
TR 103 473, Evolution of management towards autonomic future internet (AFI); Autonomicity and self-management in the Broadband Forum (BBF) architectures | TR | Published | –
TR 103 626, Autonomic network engineering for the self-managing future internet (AFI); An instantiation and implementation of the generic autonomic network architecture (GANA) model onto heterogeneous wireless access technologies using cognitive algorithms | TR | Published | –
TR 103 627, Core network and interoperability testing (INT/WG AFI); Autonomicity and self-management in IMS architecture | TR | Published | –
TR 103 629, Evolution of management towards autonomic future internet (AFI); Confidence in autonomic functions; Guidelines for design and testability | TR | Early draft | –
TR 103 747, Core network and interoperability testing (INT/WG AFI); Federated GANA knowledge planes (KPs) for multi-domain autonomic management & control (AMC) of slices in the NGMN® 5G end-to-end architecture framework | TR | Published | –
TR 103 748, Core network and interoperability testing (INT); Artificial intelligence (AI) in test systems and testing of AI models; Use and benefits of AI technologies in testing | TR | Published | –
TR 103 749, INT Artificial intelligence (AI) in test systems and testing AI models; Testing of AI with definition of quality metrics | TR | Start of work | May 2023
TR 103 821, Autonomic network engineering for the self-managing future internet (AFI); Artificial intelligence (AI) in test systems and testing AI models | TR | Start of work | –
TR 103 857, Autonomic management and control (AMC) intelligence for self-managed fixed & mobile integrated networks (AFI); Generic framework for E2E federated GANA knowledge planes for AI-powered closed-loop self-adaptive security management & control, across multiple 5G network slices, segments, services and administrative domains | TR | Stable draft | March 2023
TS 102 165-1, Methods and protocols; Part 1: Method and pro forma for threat, vulnerability, risk analysis (TVRA) | TS | Published | –
TS 103 195-2, Autonomic network engineering for the self-managing future internet (AFI); Generic autonomic network architecture; Part 2: An architectural reference model for autonomic networking, cognitive networking and self-management | TS | Published | –
TS 103 485, Mechanisms for privacy assurance and verification | TS | Published | –
TS 103 701, Cyber security for consumer internet of things: Conformance assessment of baseline requirements | TS | Published | –
TP-03-23-011-EN-C
ISBN 978-92-9204-613-2
doi: 10.2824/308470