Securing Artificial Intelligence (SAI); AI Act mapping and gap analysis to ETSI workplan

DTR/SAI-0013

General Information

Status
Not Published
Current Stage
12 - Citation in the OJ
Due Date
02-Jun-2025
Completion Date
22-May-2025
Ref Project
Standard
ETSI TR 104 065 V1.1.1 (2025-05) - Securing Artificial Intelligence (SAI); AI Act mapping and gap analysis to ETSI workplan
English language
30 pages

Standards Content (Sample)


TECHNICAL REPORT
Securing Artificial Intelligence (SAI);
AI Act mapping and gap analysis to ETSI workplan

2 ETSI TR 104 065 V1.1.1 (2025-05)

Reference
DTR/SAI-0013
Keywords
artificial intelligence, regulation

ETSI
650 Route des Lucioles
F-06921 Sophia Antipolis Cedex - FRANCE

Tel.: +33 4 92 94 42 00  Fax: +33 4 93 65 47 16

Siret N° 348 623 562 00017 - APE 7112B
Association à but non lucratif enregistrée à la
Sous-Préfecture de Grasse (06) N° w061004871

Important notice
The present document can be downloaded from the
ETSI Search & Browse Standards application.
The present document may be made available in electronic versions and/or in print. The content of any electronic and/or
print versions of the present document shall not be modified without the prior written authorization of ETSI. In case of any
existing or perceived difference in contents between such versions and/or in print, the prevailing version of an ETSI
deliverable is the one made publicly available in PDF format in the ETSI deliver repository.
Users should be aware that the present document may be revised or have its status changed;
this information is available in the Milestones listing.
If you find errors in the present document, please send your comments to
the relevant service listed under Committee Support Staff.
If you find a security vulnerability in the present document, please report it through our
Coordinated Vulnerability Disclosure (CVD) program.
Notice of disclaimer & limitation of liability
The information provided in the present deliverable is directed solely to professionals who have the appropriate degree of
experience to understand and interpret its content in accordance with generally accepted engineering or
other professional standard and applicable regulations.
No recommendation as to products and services or vendors is made or should be implied.
No representation or warranty is made that this deliverable is technically accurate or sufficient or conforms to any law
and/or governmental rule and/or regulation and further, no representation or warranty is made of merchantability or fitness
for any particular purpose or against infringement of intellectual property rights.
In no event shall ETSI be held liable for loss of profits or any other incidental or consequential damages.

Any software contained in this deliverable is provided "AS IS" with no warranties, express or implied, including but not
limited to, the warranties of merchantability, fitness for a particular purpose and non-infringement of intellectual property
rights and ETSI shall not be held liable in any event for any damages whatsoever (including, without limitation, damages
for loss of profits, business interruption, loss of information, or any other pecuniary loss) arising out of or related to the use
of or inability to use the software.
Copyright Notification
No part may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying and
microfilm except as authorized by written permission of ETSI.
The content of the PDF version shall not be modified without the written authorization of ETSI.
The copyright and the foregoing restriction extend to reproduction in all media.

© ETSI 2025.
All rights reserved.
Contents
Intellectual Property Rights . 4
Foreword . 4
Modal verbs terminology . 4
Introduction . 4
1 Scope . 5
2 References . 5
2.1 Normative references . 5
2.2 Informative references . 5
3 Definition of terms, symbols and abbreviations . 7
3.1 Terms . 7
3.2 Symbols . 8
3.3 Abbreviations . 8
4 Summary of provisions of the AI Act . 8
4.1 Purpose of the act . 8
4.2 References to the role of standards in the act . 8
4.3 Mandates for standards/standards content in the act . 10
4.4 Relationship to open source projects . 11
5 ETSI's AI standards . 11
5.1 Overview of ETSI's AI standards . 11
5.2 Relationship to other SDO activity . 13
5.3 Catalogue of ETSI TC SAI work items . 13
5.4 Framing the content of standards . 13
6 Article by article mapping of AI Act to ETSI Standardization programme . 15
6.1 Mapping AI act to ETSI . 15
6.2 Mapping ETSI TC SAI and ISG AI output to AI act . 20
Annex A: Summary of ETSI's involvement in the DIGITALEUROPE - ESOs collaboration
on education material for standards . 23
Annex B: Catalogue of ETSI TC SAI and ISG SAI deliverables . 24
Annex C: Bibliography . 29
History . 30

Intellectual Property Rights
Essential patents
IPRs essential or potentially essential to normative deliverables may have been declared to ETSI. The declarations
pertaining to these essential IPRs, if any, are publicly available for ETSI members and non-members, and can be
found in ETSI SR 000 314: "Intellectual Property Rights (IPRs); Essential, or potentially Essential, IPRs notified to
ETSI in respect of ETSI standards", which is available from the ETSI Secretariat. Latest updates are available on the
ETSI IPR online database.
Pursuant to the ETSI Directives including the ETSI IPR Policy, no investigation regarding the essentiality of IPRs,
including IPR searches, has been carried out by ETSI. No guarantee can be given as to the existence of other IPRs not
referenced in ETSI SR 000 314 (or the updates on the ETSI Web server) which are, or may be, or may become,
essential to the present document.
Trademarks
The present document may include trademarks and/or tradenames which are asserted and/or registered by their owners.
ETSI claims no ownership of these except for any which are indicated as being the property of ETSI, and conveys no
right to use or reproduce any trademark and/or tradename. Mention of those trademarks in the present document does
not constitute an endorsement by ETSI of products, services or organizations associated with those trademarks.
DECT™, PLUGTESTS™, UMTS™ and the ETSI logo are trademarks of ETSI registered for the benefit of its
Members. 3GPP™, LTE™ and 5G™ logo are trademarks of ETSI registered for the benefit of its Members and of the
3GPP Organizational Partners. oneM2M™ logo is a trademark of ETSI registered for the benefit of its Members and of
the oneM2M Partners. GSM® and the GSM logo are trademarks registered and owned by the GSM Association.
Foreword
This Technical Report (TR) has been produced by ETSI Technical Committee Securing Artificial Intelligence (SAI).
Modal verbs terminology
In the present document "should", "should not", "may", "need not", "will", "will not", "can" and "cannot" are to be
interpreted as described in clause 3.2 of the ETSI Drafting Rules (Verbal forms for the expression of provisions).
"must" and "must not" are NOT allowed in ETSI deliverables except when used in direct citation.
Introduction
The present document has been developed to give a summary of the role of ETSI's standards, and those in associated
SDOs, that support the mandates and recommendations found in the EU's AI Act [i.1]. The content is indicative and as
such there may be standards that have been overlooked and therefore not considered in the mapping.

1 Scope
The present document provides an analysis of the standardization requirements of the AI Act [i.1] against the workplan
of ETSI (across all TBs) in order to identify gaps and the means to fill them.
NOTE: The present document is a Technical Report and contains no requirements; however, the text does contain
quotes from the AI Act [i.1] where mandates are stated, but where quoted these have no normative effect.
2 References
2.1 Normative references
Normative references are not applicable in the present document.
2.2 Informative references
References are either specific (identified by date of publication and/or edition number or version number) or
non-specific. For specific references, only the cited version applies. For non-specific references, the latest version of the
referenced document (including any amendments) applies.
NOTE: While any hyperlinks included in this clause were valid at the time of publication, ETSI cannot guarantee
their long-term validity.
The following referenced documents may be useful in implementing an ETSI deliverable or add to the reader's
understanding, but are not required for conformance to the present document.
[i.1] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying
down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008,
(EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and
Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act).
[i.2] ETSI TR 104 029: "Securing Artificial Intelligence (SAI); Global Ecosystem".
[i.3] ETSI White Paper 52: "ETSI Activities in the field of Artificial Intelligence Preparing the
implementation of the European AI Act".
[i.4] ETSI White Paper 61: "ETSI Technology Radar".
[i.5] Lisbon Treaty: "Consolidated Versions Of The Treaty On European Union And The Treaty On
The Functioning Of The European Union".
[i.6] European Union: "Charter of Fundamental Rights of the European Union".
[i.7] ETSI Directives.
[i.8] ETSI FORGE.
[i.9] ETSI SAREF source files.
[i.10] ETSI GR SAI 001 (V1.1.1): "Securing Artificial Intelligence (SAI); AI Threat Ontology".
NOTE: Updated by ETSI TS 104 050 [i.20].
[i.11] ETSI GR SAI 006 (V1.1.1): "Securing Artificial Intelligence (SAI); The role of hardware in
security of AI".
[i.12] ETSI GR SAI 009 (V1.1.1): "Securing Artificial Intelligence (SAI); Artificial Intelligence
Computing Platform Security Framework".
[i.13] ETSI GR SAI 002 (V1.1.1): "Securing Artificial Intelligence (SAI); Data Supply Chain Security".
NOTE: Updated by ETSI TR 104 048 [i.25].
[i.14] ETSI GR SAI 007 (V1.1.1): "Securing Artificial Intelligence (SAI); Explicability and transparency
of AI processing".
NOTE: Updated by ETSI TS 104 224 [i.23].
[i.15] ETSI GR SAI 013 (V1.1.1): "Securing Artificial Intelligence (SAI); Proofs of Concepts
Framework".
NOTE: Updated by ETSI TR 104 067 [i.19].
[i.16] ETSI GR SAI 005 (V1.1.1): "Securing Artificial Intelligence (SAI); Mitigation Strategy Report".
NOTE: Updated by ETSI TR 104 222 [i.28].
[i.17] ETSI GR SAI 004 (V1.1.1): "Securing Artificial Intelligence (SAI); Problem Statement".
NOTE: Updated by ETSI TR 104 221 [i.27].
[i.18] ETSI GR SAI 011 (V1.1.1): "Securing Artificial Intelligence (SAI); Automated Manipulation of
Multimedia Identity Representations".
NOTE: Updated by ETSI TR 104 062 [i.29].
[i.19] ETSI TR 104 067 (V1.1.1): "Securing Artificial Intelligence (SAI); Proofs of Concepts
Framework".
[i.20] ETSI TS 104 050 (V1.1.1): "Securing Artificial Intelligence (SAI); AI Threat Ontology and
definitions".
[i.21] ETSI TR 104 225 (V1.1.1): "Securing Artificial Intelligence TC (SAI); Privacy aspects of AI/ML
systems".
[i.22] ETSI TR 104 031 (V1.1.1): "Securing Artificial Intelligence (SAI); Collaborative Artificial
Intelligence".
[i.23] ETSI TS 104 224 (V1.1.1): "Securing Artificial Intelligence (SAI); Explicability and transparency
of AI processing".
[i.24] ETSI TR 104 032 (V1.1.1): "Securing Artificial Intelligence (SAI); Traceability of AI Models".
[i.25] ETSI TR 104 048 (V1.1.1): "Securing Artificial Intelligence (SAI); Data Supply Chain Security".
[i.26] ETSI TR 104 030 (V1.1.1): "Securing Artificial Intelligence (SAI); Critical Security Controls for
Effective Cyber Defence; Artificial Intelligence Sector".
[i.27] ETSI TR 104 221 (V1.1.1): "Securing Artificial Intelligence (SAI); Problem Statement".
[i.28] ETSI TR 104 222 (V1.2.1): "Securing Artificial Intelligence; Mitigation Strategy Report".
[i.29] ETSI TR 104 062 (V1.2.1): "Securing Artificial Intelligence; Automated Manipulation of
Multimedia Identity Representations".
[i.30] ETSI TR 104 066 (V1.1.1): "Securing Artificial Intelligence (SAI); Security Testing of AI".
[i.31] ETSI Member Portal.
[i.32] ETSI standards search and download.
[i.33] ETSI Writing world class standards guidance.
[i.34] ETSI TR 103 935: "Cyber Security (CYBER); Assessment of cyber risk based on products'
properties to support market placement".
[i.35] ETSI TS 104 223: "Securing Artificial Intelligence (SAI); Baseline Cyber Security Requirements
for AI Models and Systems".
[i.36] ETSI TR 103 305 (all parts): "Cyber Security (CYBER); Critical Security Controls for Effective
Cyber Defence".
[i.37] ETSI TS 102 165-1: "Cyber Security (CYBER); Methods and protocols; Part 1: Method and pro
forma for Threat, Vulnerability, Risk Analysis (TVRA)".
[i.38] Regulation (EU) No 1025/2012 of the European Parliament and of the Council of 25 October 2012
on European standardisation, amending Council Directives 89/686/EEC and 93/15/EEC and
Directives 94/9/EC, 94/25/EC, 95/16/EC, 97/23/EC, 98/34/EC, 2004/22/EC, 2007/23/EC,
2009/23/EC and 2009/105/EC of the European Parliament and of the Council and repealing
Council Decision 87/95/EEC and Decision No 1673/2006/EC of the European Parliament and of
the Council.
3 Definition of terms, symbols and abbreviations
3.1 Terms
For the purposes of the present document, the terms given in the AI Act [i.1] and the following apply:
AI system: machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness
after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs
such as predictions, content, recommendations, or decisions that can influence physical or virtual environments
dactyloscopic: identification by comparison of fingerprints
deep fake: AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places
or other entities or events and would falsely appear to a person to be authentic or truthful
general-purpose AI model: AI model, including where such an AI model is trained with a large amount of data using
self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of
distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of
downstream systems or applications, except AI models that are used for research, development or prototyping activities
before they are released on the market
informed consent: subject's freely given, specific, unambiguous and voluntary expression of his or her willingness to
participate in a particular testing in real-world conditions, after having been informed of all aspects of the testing that
are relevant to the subject's decision to participate
input data: data provided to or directly acquired by an AI system on the basis of which the system produces an output
intended purpose: use for which an AI system is intended by the provider, including the specific context and
conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or
sales materials and statements, as well as in the technical documentation
risk: combination of the probability of an occurrence of harm and the severity of that harm
substantial modification: change to an AI system after its placing on the market or putting into service which is not
foreseen or planned in the initial conformity assessment carried out by the provider
testing data: data used for providing an independent evaluation of the AI system in order to confirm the expected
performance of that system before its placing on the market or putting into service
training data: data used for training an AI system through fitting its learnable parameters
validation data: data used for providing an evaluation of the trained AI system and for tuning its non-learnable
parameters and its learning process in order, inter alia, to prevent underfitting or overfitting
validation data set: separate data set or part of the training data set, either as a fixed or variable split
3.2 Symbols
Void.
3.3 Abbreviations
For the purposes of the present document, the following abbreviations apply:
AI Artificial Intelligence
AICIE AI Common Incident Expression
EN European Norm
hEN Harmonised European Norm
OJEU Official Journal of the European Union
PoC Proof of Concept
SDO Standards Development Organisation
TB Technical Body
UCYBEX Universal Cybersecurity Information Exchange Framework
4 Summary of provisions of the AI Act
4.1 Purpose of the act
The purpose of the AI Act [i.1] is identified in recital 1 of the agreed text as follows:
QUOTE: "To improve the functioning of the internal market by laying down a uniform legal framework in
particular for the development, the placing on the market, the putting into service and the use of
artificial intelligence systems (AI systems) in the Union, in accordance with Union values, to
promote the uptake of human centric and trustworthy artificial intelligence (AI) while ensuring a
high level of protection of health, safety, fundamental rights as enshrined in the Charter of
fundamental rights of the European Union (the 'Charter'), including democracy, the rule of law
and environmental protection, against the harmful effects of AI systems in the Union, and to
support innovation. This Regulation ensures the free movement, cross-border, of AI-based goods
and services, thus preventing Member States from imposing restrictions on the development,
marketing and use of AI systems, unless explicitly authorised by this Regulation."
The definition of what the AI Act [i.1] covers is addressed in some detail in Articles 1 (Subject Matter), 2 (Scope) and
3 (Definitions) of the act. For the purposes of the present document some of the definitions given in Article 3 have been
transposed to an ETSI style in clause 3.1 above and presented as terms.
NOTE: Definitions given in Article 3 of the AI Act [i.1] are usually considered as normative whereas the terms
given in the present document are not.
4.2 References to the role of standards in the act
An arbitrary search of the text of the act for the word "standard" finds 109 occurrences, many of which can be summarily
dismissed with respect to the activity of ETSI or any other SDO. Recital 117 does introduce the role of Harmonised
Standards and the role of the AI Office in their management. The purpose of conformance to Harmonised Standards,
giving a presumption of conformity to the provisions of the act, is therefore consistent with other EU regulation. This
does suggest that the SDOs should work to develop such hENs, but also requires awareness that an SDO cannot simply
choose to publish an hEN; rather, an hEN has to be written against specific actions of the EU.
NOTE 1: An hEN is the product of a specific standardization request and has a citation in the Official Journal of the
European Union (OJEU).
In Recital 121 of the Act [i.1] the means of elaboration of a technical standard are addressed and cites the importance of
wide stakeholder involvement. It can be fairly stated that ETSI's representation satisfies the conditions underpinning the
text of Recital 121.
NOTE 2: There are support schemes available to bolster SME attendance to ETSI that further enhance ETSI's
ability to satisfy the stakeholder diversity criteria (noting too that ETSI's global reach enhances its
cultural diversity).
NOTE 3: The voting rules for the different forms of ETSI deliverable are defined in the ETSI Directives [i.7].
However, with the exception of ENs, where the National Voting process is invoked, the general model in
ETSI Technical Bodies (TBs) is to work to achieve consensus and not to determine acceptability of the
content by a member-weighted vote.
NOTE 4: Recent updates to the ETSI Directives [i.7] address voting and participation exemptions for activity in
ETSI taken in response to an EU Standardisation Request.
NOTE 5: When registering to meetings of ETSI's TBs delegates register as representatives of their member
organizations and not as either individuals or national representatives (unless their member organization
is explicitly the national representative).
There is some mention of energy efficiency of AI (see Article 40(2), Article 59(1)(a)(iii) and Article 95(2)(b); energy
efficiency is also a requirement of the technical documentation addressed in Article 53(1)(a) of [i.1]). In ETSI the TB
Environmental Engineering (EE) does address some of the issues arising. However, at a slightly deeper level the silicon
architectures also contribute here: without particular input from SDOs, most of the major manufacturers of silicon
processors are addressing power efficiency, and specialized processor models, that will continuously improve the
environmental footprint of AI. This is deeply rooted and extends to concerns over how data is stored and accessed, the
transfer of data, and so forth.
RECOMMENDATION:
TC EE and similar bodies in ETSI (and partner SDOs) should take note of the energy efficiency
requirements from [i.1] and give advice to ensure that stakeholders can comply with the intent of
the regulation.
Article 31(12) of [i.1] is itself partly addressed by the widespread availability of the present document during its
development and post publication. To quote: "Notified bodies shall participate in coordination activities as referred to
in Article 38. They shall also take part directly, or be represented in, European standardisation organisations, or
ensure that they are aware and up to date in respect of relevant standards." There is no restriction of membership of
ETSI that would prevent the cited notified bodies from engaging in the ETSI standardization process. Furthermore, as
the notified bodies are explicitly mandated to participate in the standardization process, ETSI cannot be seen to inhibit
them.
NOTE 6: The calendar of meetings of ETSI's TBs is visible to all through the ETSI Portal [i.31] and ETSI's
members and counsellors have access to all meetings and documents. In addition all of ETSI's published
output is available without charge from ETSI's website [i.32].
In Article 40(2) there is a general statement regarding standards quality (to quote "When issuing a standardisation
request to European standardisation organisations, the Commission shall specify that standards have to be clear,
consistent, including with the standards developed in the various sectors for products covered by the existing Union
harmonisation legislation listed in Annex I, and aiming to ensure that AI systems or AI models placed on the market or
put into service in the Union meet the relevant requirements laid down in this Regulation") that is fully met by ETSI's
quality control processes, by the ETSI Writing World Class Standards guidance [i.33], and by ETSI Directives [i.7].
It is further indicated in Article 40(3) that, to quote (emphasis added for the purposes of the present document), "The
participants in the standardisation process shall seek to promote investment and innovation in AI, including through
increasing legal certainty, as well as the competitiveness and growth of the Union market, and shall contribute to
strengthening global cooperation on standardisation and taking into account existing international standards in the
field of AI that are consistent with Union values, fundamental rights and interests, and shall enhance multi-stakeholder
governance ensuring a balanced representation of interests and the effective participation of all relevant stakeholders".
The reference here to existing international standards could be taken, for the present document, to mean that the output
from TC SAI and prior output from ISG SAI, as well as output from other ETSI TBs and SDOs (e.g. ITU-T, IETF,
IEEE), can be used in support of the Act [i.1]. However, the definition of international standards that has been adopted
in Europe is quite restrictive, limiting international standards to those adopted by the recognized International
Standardization Organizations ISO, IEC and ITU. For the purposes of the present document however the wider aim is to
have European standards adopted globally, thus reinforcing the implicit model of ETSI to bring global stakeholders
together to develop standards in ETSI that are then adopted globally.
Reflecting on the requirement in Article 40(3) that the standards developed, and the development process, are "consistent
with Union values, fundamental rights and interests", it is recognized that the EU's values are laid out in Article 2 of the
Lisbon Treaty [i.5] and the EU Charter of Fundamental Rights [i.6] and can be summarized under the following headings:
• Human dignity
• Freedom
• Democracy
• Equality
• Rule of law
• Human rights
ETSI addresses these in the Code of Conduct for ETSI Members found in the ETSI Directives [i.7] and in the ETSI
Values statements also in the ETSI Directives [i.7].
4.3 Mandates for standards/standards content in the act
There is a significant number of mandates in the act, as evidenced by a simple search for the term "shall", which yields
888 instances. Few of those are directed to technical standardization; most are directed to actions of member states and
the offices of the EU. Therefore, based on a crude filtering, the mandates that require action by SDOs reduce to the
smaller set considered below.
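The crude filtering referred to above starts from a plain whole-word count of terms such as "shall" or "standard" in the text of the act. As a minimal illustrative sketch (not part of the AI Act or any ETSI deliverable; the sample string below is invented for illustration), such a count could be performed as:

```python
import re

def count_term(text: str, term: str) -> int:
    """Count whole-word, case-insensitive occurrences of a term."""
    return len(re.findall(rf"\b{re.escape(term)}\b", text, flags=re.IGNORECASE))

# Illustrative fragment only; a real analysis would read the full text of the act.
sample = "The provider shall comply. Harmonised standards shall be clear."
print(count_term(sample, "shall"))  # prints 2
```

The word-boundary match avoids counting, for example, "standards" when searching for "standard"; the subsequent filtering of which mandates concern SDOs remains a manual, editorial judgement.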
Article 4 (AI Literacy) may be met in part by the general purpose reports prepared by ETSI TC SAI. These include the
technical reports that address the AI Problem Statement [i.27], the AI Data Supply Chain [i.25], the AI Threat
Ontology [i.20], and others. Whilst the article mandates improving AI literacy and places responsibility on providers and
deployers, the general analysis provided by ETSI can be seen as making provision for building that literacy.
NOTE: As a technology becomes more pervasive there is often a movement to address its role across all levels of
education. This has been observed across all of human history and it is reasonably expected that
knowledge and expertise in AI will become an endemic element of education in coming years.
ETSI has expanded on its activity in education in partnership with other SDOs in the form of a project under the Digital
Europe banner addressing ESOs collaboration on education material for standards. A summary of the project is given in
Annex A of the present document.
In Article 5 (prohibited AI practices) the mandate is framed as what cannot be done, whereas for standardization it has
to be framed somewhat differently: what measures can be provided that, when followed, prevent the prohibited
practices from being placed on the market. This is explored in more detail later in the present document.
Some of the mandates allow techniques to be applied only in very particular contexts. This may not be a standards issue
unless the AI facility is sufficiently autonomous to be able to select its functionality based on context.
In Article 12 (Record-Keeping), the mandate is to automate the recording of events over the lifetime of the system. The
technical specifications for the AI computing platform security framework prepared by ETSI TC SAI can support the
record-keeping mechanism by protecting the integrity of the collected logs, thereby underpinning the procedure for
transparency and provision of information to deployers described in Article 13; nevertheless, there are additional
safeguards that may need to be applied. This is addressed in part in ETSI TS 104 224 [i.23], where clause 6.4.1 states:
"An AI/ML system can make decisions at a rate that, if a detailed evidential record was to be created, and retained
securely, has potential to overload the system. Rather than take a detailed evidential record for every decision the goal
of explicability and transparency is to ensure that the rationale for a decision is clear". Thus [i.23] advises care in
recording events in order to ensure the system functionality is balanced with accountability.
Article 15 (Accuracy, Robustness and Cybersecurity) may be met by the reports and specifications prepared by ETSI
TC SAI. These include the technical reports that introduce the mitigation strategy for AI security issues (see [i.28]) and
the role of hardware in AI security (see [i.11]), and the technical specifications on security for the underlying AI
computing platform of AI systems, together with the relevant cyber security requirements for such systems.
4.4 Relationship to open source projects
As much of the AI Act [i.1] is about ensuring Union values, there is a useful diversion in Recital 89 of the act
that appears to distinguish liability and the role of open source. To quote: "Third parties making accessible to the public
tools, services, processes, or AI components other than general-purpose AI models, shall not be mandated to comply
with requirements targeting the responsibilities along the AI value chain, in particular towards the provider that has
used or integrated them, when those tools, services, processes, or AI components are made accessible under a free and
open licence". Whilst SDOs do not often make tools, services or components, they often do make the standards that
underpin them accessible under free and open licences, and most SDOs have an open IPR policy that grants access
under FRAND terms for any protected IPR contained in, and essential to the application of, the standard. In other
words, a body such as ETSI does not control how a standard is applied, nor does it place any conditions on open source,
only that IPR is offered on FRAND terms (which may include an open source licence model).
It is also useful to note that many standards from multiple SDOs are executable in one form or another. Many of these
code elements from ETSI are provided as an element of the standard and often are made public with an open source
form of licence. It is also noted that most SDOs are attempting to move to models that allow more open access to
standards both during development and once published, which may blur some of the points outlined in Recital 89.
EXAMPLE 1: ETSI's FORGE [i.8] resource contains libraries of data definitions (e.g. ASN.1 source files), and
for linked data (e.g. for NGSI-LD).
EXAMPLE 2: ETSI makes available semantic data in the form of ontologies (e.g. SAREF [i.9]).
Whilst not made clear in the quote from Recital 89 of [i.1], and further in Recitals 102, 103 and 104, the main issue of
the use of open source is addressed in Article 2(12) which states "This Regulation does not apply to AI systems released
under free and open-source licences, unless they are placed on the market or put into service as high-risk AI systems or
as an AI system that falls under Article 5 or 50". The definition of AI System given in clause 3.1 of the present
document, and the one given in Article 3 of [i.1] from which it is quoted, make reference to "a machine-based system
designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for
explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content,
recommendations, or decisions that can influence physical or virtual environments". Many machine-based systems will,
over time, include AI elements, some of which will have roots in open source; thus the classification under Articles 5 or
50 should take precedence, as stated in Article 2(12). However, the requirements and observations identified in ETSI
TR 104 048 [i.25] relating to security of the supply chain are applicable irrespective of the nature of the supply chain
components.
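The scope rule of Article 2(12) can be read as a simple decision over a small number of conditions. The sketch below is an illustrative encoding of that reading only: the boolean parameters are invented for this example, and the actual legal test is considerably richer than a set of booleans.

```python
# Illustrative sketch of the Article 2(12) carve-out: AI systems released
# under free and open-source licences are out of scope of the Regulation
# unless placed on the market or put into service as high-risk AI systems,
# or unless they fall under Article 5 or Article 50.
# The parameter names are invented for this sketch.

def ai_act_applies(foss_licensed: bool,
                   on_market_or_in_service: bool,
                   high_risk: bool,
                   article_5_or_50: bool) -> bool:
    """Return True when the Regulation applies to the system."""
    if not foss_licensed:
        return True  # the exemption only concerns FOSS releases
    # The FOSS exemption is lost for high-risk systems placed on the
    # market or put into service, and for Article 5/50 systems.
    return (on_market_or_in_service and high_risk) or article_5_or_50

# A FOSS system on the market that is neither high-risk nor an
# Article 5/50 system remains exempt:
print(ai_act_applies(True, True, False, False))  # -> False
```

The sketch makes visible why classification under Article 5 or Article 50 takes precedence: that condition overrides the open-source exemption regardless of market placement.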
In the ETSI domain there is considerable support for open source. The wording of the cited recitals and of
Article 2(12) may therefore have an impact, in particular, on ETSI's family of Open Source TBs (i.e. OCF, OSL, OSM,
TFS), where AI models may form part of their output.
5 ETSI's AI standards
5.1 Overview of ETSI's AI standards
ETSI has a long history of involvement in AI, even if the term AI has not always been explicit in work items and
deliverables. The Experiential Networked Intelligence (ENI) group uses data from multiple sources to learn; the
Zero-touch network and Service Management (ZSM) group similarly gathers data to make autonomous network
provisioning and management decisions. The F5G group explores the role of Digital Twins as AI-driven entities, and in
Intelligent Transport Systems (ITS) there is an implicit acceptance of AI/ML processing of data in vehicle and traffic
management.
In groups where AI is explicit there is again a great deal of activity across ETSI. This includes TC SAI; OCG AI, a
coordination body pulling together expertise from almost all of ETSI's technical bodies; and TC MTS AI, which looks
at the testing of AI (and the role of AI in testing). In the security domain, in addition to TC SAI, there is activity in
TC CYBER to ensure that core processes and methods for risk analysis, countermeasures and evaluation all take AI
and ML into account, either as an attack vector or as an analysis accelerator.
The recent white paper from ETSI gives a good summary of ETSI's historic and planned AI activity [i.3].
Additionally, ETSI's Technology Radar white paper from late 2023 [i.4] identifies 16 ETSI TBs active in AI across
12 dimensions, as documented in Table 1 of that white paper and reproduced below as Figure 1.
Figure 1: ETSI Activity in AI
The 12 dimensions listed in Figure 1 are all present in some form in the text of the AI Act [i.1]. It is therefore clear
from this level of analysis that, even in mid-2023, ETSI could claim strong involvement in 10 of the 12 dimensions,
the exceptions being "Trustworthiness and Explainability" and "KPIs and conformance". The situation has changed
since those white papers were prepared: ISG SAI published ETSI GR SAI 007 [i.14] on Explicability and
Transparency, which has since been developed into a full Technical Specification (TS), ETSI TS 104 224 [i.23]
(see clause 5.3 below), and ETSI's TC MTS is moving ahead in addressing testing (particularly conformance testing)
in addition to the activity in TC SAI (see clause 5.3 below). Since the picture given in Figure 1 was prepared it has
also become clear that ISG Electronic Signature and Trust Infrastructures (ESI) is extremely active in the "KPIs and
conformance" dimension, and that TC SAI and TC MTS together address the "Trustworthiness and Explainability"
dimension. It is therefore asserted that ETSI has technical activity across all 12 of the dimensions. Further, as
suggested in clause 4.3, as AI becomes increasingly endemic, SDOs and the population as a whole will be engaged
across all of the AI dimensions indicated in Figure 1.
In addition, as AI becomes increasingly pervasive it can reasonably be anticipated that TBs across ETSI will, at the
very least, ask questions regarding the role of AI in the context of their work. As ETSI standards and deliverables are
driven by contributions from members, who will inevitably question whether there is a role for AI in their domain, the
list of ETSI TBs active in AI is more likely to grow than shrink.
5.2 Relationship to other SDO activity
ETSI is not the only body active in AI standardization. In addressing this role, ETSI TC SAI maintains a "roadmap" of
activity that identifies key concerns and maps from those concerns to standards activity. In addition, the work item for
ETSI TR 104 029 [i.2] identifies the global ecosystem in some detail.
NOTE 1: The text of ETSI TR 104 029 [i.2] is made public as a rolling update after every meeting of TC SAI and
linked from the front matter of SAI's entry on the ETSI Portal (see marked up screenshot in Figure 2
below).
Figure 2: Persistent availability of SAI Ecosystem document from ETSI Portal (SAI)
NOTE 2: As stated in clause 4.2 the definition of standards body, and international standards body, addressed by
ETSI, and seen in the context of ETSI TR 104 029 [i.2], is much wider than the definition of each that is
used in the AI Act [i.1].
5.3 Catalogue of ETSI TC SAI work items
With the exception of the present document, ETSI TC SAI has not raised work items specifically against the text of the
AI Act [i.1]; in particular, it has not raised or developed any work items specifically to address the text of the
Standardisation Request (SR) that was issued to CEN/CENELEC and that is being directly addressed by JTC21 (where
security aspects are handled in WG5). However, ETSI is clearly active in topics that intersect with the AI Act [i.1] and
with the associated SR, and has a mode-4 agreement in place that allows experts from ETSI to be nominated to attend
related meetings of JTC21, and vice versa. The cooperation between ETSI and JTC21, and the public availability of
ETSI's deliverables, particularly any TSs, should not inhibit their citation in deliverables of JTC21. In this regard the
activity of ETSI is maintained to be consistent with the aims of the AI Act [i.1], the SR, and the work programme of
JTC21.
Once published, as with all ETSI deliverables, the PDF format of the document is made available for download without
charge (the hyperlinks to the published version point to the PDF file). The editable Word file of the published
deliverable is also available to all ETSI members.
The full catalogue of ETSI TC SAI work items is given in Annex B of the present document, and mapped to the
AI Act [i.1] in detail in clause 6.2.
5.4 Framing the content of standards
As noted in clause 4.3 above, Article 5 of the AI Act [i.1] (prohibited AI practices) is framed in terms of what cannot
be done, whereas for standardization it has to be framed somewhat differently: what measures can be provided that,
when followed, prevent the placing on the market of AI solutions that enable the prohibited practices?
The concern here is that some of the prohibited practices build on allowed practices. The primary answer offered to
date is to adopt principles of transparency and explicability (the term explainability is used by some other SDOs), as
required by Article 13. In determining the application of AI, and the risk associated with that application, the general
guidance on risk awareness has been addressed in ETSI across a number of TBs. It is noted that Article 9 requires that
"a risk management system shall be established, implemented, documented and maintained in relation to high-risk AI
systems", which presupposes that a preliminary assessment has been made to properly identify the system as a
high-risk AI system.
Whilst the general definition of a high-risk AI system given in Article 6 is quite broad, the approach in ETSI is to
perform a quantitative risk analysis, using the method of ETSI TS 102 165-1 [i.37], that identifies the overall risk for
any attack surface. In addition, ETSI publishes a set of Cyber Security Controls and has applied those to AI in ETSI
TR 104 030 [i.26]. Where AI is involved there is a broad understanding that few systems will be AI for AI's sake;
rather, AI is used in support of another process, such as facial recognition. The primary purpose is then facial
recognition, not AI. The text of Article 5 of the AI Act [i.1] is therefore not interpreted as preventing the development
of AI for biometric recognition, but rather as restricting its use in certain circumstances. The standards developed in
ETSI to give assurance of the security of AI thus ensure that the data, its processing, and its interpretation are clear. In
doing so, the intent is that any deployment of the system can clearly demonstrate whether the deployment is allowed
under the restrictions set by Article 5.
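The quantitative analysis described above derives a risk value for each attack surface from the attacker's required effort and the impact of a successful attack. The sketch below is in the spirit of the TVRA method of ETSI TS 102 165-1: the factor names follow the method, but the numeric values, thresholds and classification bands are simplified placeholders for illustration, not the normative tables of the TS.

```python
# Simplified sketch of a quantitative risk calculation in the spirit of
# the TVRA method (ETSI TS 102 165-1). All numeric thresholds here are
# illustrative placeholders, not the normative values of the TS.

def attack_potential(time, expertise, knowledge, opportunity, equipment):
    """Sum the attack-potential factors for one attack on one asset."""
    return time + expertise + knowledge + opportunity + equipment

def likelihood(potential):
    """Map attack potential to a likelihood class: the more effort an
    attacker needs, the less likely the attack."""
    if potential >= 20:
        return 1  # unlikely
    if potential >= 10:
        return 2  # possible
    return 3      # likely

def risk(potential, impact):
    """Classify risk as the product of likelihood and impact (1..3)."""
    value = likelihood(potential) * impact
    if value <= 2:
        return "minor"
    if value <= 5:
        return "major"
    return "critical"

# Example: a low-effort attack (potential 4) on a high-impact asset (3)
print(risk(4, 3))  # -> critical
```

The design point the method captures is that risk falls either by raising the effort an attacker must expend (reducing likelihood) or by applying controls that reduce impact.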
The rules and requirements for identifying the purpose of AI in a system are defined in ETSI TS 104 224 [i.23]. In
particular, the intent (as per Article 13 of the AI Act [i.1]) is to ensure that the statement of purpose of a system allows
a layperson to clearly understand the purpose of the system and to explicitly identify the role of AI in achieving that
purpose. The particular way in which this is framed in ETSI TS 104 224 [i.23] is copied below (see Table 1).
Table 1:
...
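As a purely hypothetical illustration of the kind of information such a statement of purpose conveys, a minimal record might hold a plain-language purpose and an explicit description of the AI role. The field names below are invented for this sketch and are not the normative format of Table 1 of ETSI TS 104 224.

```python
# Hypothetical sketch only: a minimal record for a statement of purpose,
# capturing a layperson-readable purpose and the explicit role of AI.
# Field names are invented; they are NOT the format of ETSI TS 104 224.
from dataclasses import dataclass

@dataclass
class StatementOfPurpose:
    system_name: str
    plain_language_purpose: str  # understandable by a layperson
    role_of_ai: str              # what the AI element contributes
    ai_is_essential: bool        # would the purpose survive without AI?

sop = StatementOfPurpose(
    system_name="Building access control",
    plain_language_purpose="Decide whether a person may open a door",
    role_of_ai="Matches a face image against enrolled staff photos",
    ai_is_essential=True,
)
print(sop.plain_language_purpose)
```

Separating the purpose from the AI role mirrors the point made in clause 5.4: the primary purpose (here, access control) is stated first, and the AI contribution is identified explicitly against it.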
