Securing Artificial Intelligence (SAI); Guide to Cyber Security for AI Models and Systems

DTR/SAI-0012

General Information

Status
Not Published
Current Stage
12 - Citation in the OJ (auto-insert)
Due Date
28-May-2025
Completion Date
19-May-2025
Ref Project
Standard
ETSI TR 104 128 V1.1.1 (2025-05) - Securing Artificial Intelligence (SAI); Guide to Cyber Security for AI Models and Systems
English language
74 pages
sale 15% off
Preview
sale 15% off
Preview

Standards Content (Sample)


TECHNICAL REPORT
Securing Artificial Intelligence (SAI);
Guide to Cyber Security for AI Models and Systems

2 ETSI TR 104 128 V1.1.1 (2025-05)

Reference
DTR/SAI-0012
Keywords
artificial intelligence, security
ETSI
650 Route des Lucioles
F-06921 Sophia Antipolis Cedex - FRANCE

Tel.: +33 4 92 94 42 00  Fax: +33 4 93 65 47 16

Siret N° 348 623 562 00017 - APE 7112B
Association à but non lucratif enregistrée à la
Sous-Préfecture de Grasse (06) N° w061004871

Important notice
The present document can be downloaded from the
ETSI Search & Browse Standards application.
The present document may be made available in electronic versions and/or in print. The content of any electronic and/or
print versions of the present document shall not be modified without the prior written authorization of ETSI. In case of any
existing or perceived difference in contents between such versions and/or in print, the prevailing version of an ETSI
deliverable is the one made publicly available in PDF format on ETSI deliver repository.
Users should be aware that the present document may be revised or have its status changed,
this information is available in the Milestones listing.
If you find errors in the present document, please send your comments to
the relevant service listed under Committee Support Staff.
If you find a security vulnerability in the present document, please report it through our
Coordinated Vulnerability Disclosure (CVD) program.
Notice of disclaimer & limitation of liability
The information provided in the present deliverable is directed solely to professionals who have the appropriate degree of
experience to understand and interpret its content in accordance with generally accepted engineering or
other professional standard and applicable regulations.
No recommendation as to products and services or vendors is made or should be implied.
No representation or warranty is made that this deliverable is technically accurate or sufficient or conforms to any law
and/or governmental rule and/or regulation and further, no representation or warranty is made of merchantability or fitness
for any particular purpose or against infringement of intellectual property rights.
In no event shall ETSI be held liable for loss of profits or any other incidental or consequential damages.

Any software contained in this deliverable is provided "AS IS" with no warranties, express or implied, including but not
limited to, the warranties of merchantability, fitness for a particular purpose and non-infringement of intellectual property
rights and ETSI shall not be held liable in any event for any damages whatsoever (including, without limitation, damages
for loss of profits, business interruption, loss of information, or any other pecuniary loss) arising out of or related to the use
of or inability to use the software.
Copyright Notification
No part may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying and
microfilm except as authorized by written permission of ETSI.
The content of the PDF version shall not be modified without the written authorization of ETSI.
The copyright and the foregoing restriction extend to reproduction in all media.

© ETSI 2025.
All rights reserved.
ETSI
3 ETSI TR 104 128 V1.1.1 (2025-05)
Contents
Intellectual Property Rights . 5
Foreword . 5
Modal verbs terminology . 5
Introduction . 5
1 Scope . 6
2 References . 6
2.1 Normative references . 6
2.2 Informative references . 6
3 Definition of terms, symbols and abbreviations . 11
3.1 Terms . 11
3.2 Symbols . 13
3.3 Abbreviations . 13
4 How to use the present document . 14
4.1 Purpose . 14
4.2 Relationship to ETSI TS 104 223 . 15
5 Guidance on implementation . 15
6 Examples to meet AI Security Provisions . 17
6.1 Principle 1: Raise awareness of AI security threats and risks . 17
6.1.1 Provision 5.1.1-1 . 17
6.1.2 Provision 5.1.1-1.1 . 17
6.1.3 Provision 5.1.1-2 . 19
6.1.4 Provision 5.1.1-2.1 . 19
6.1.5 Provision 5.1.1-2.2 . 20
6.2 Principle 2: Design the AI System for Security as well as Functionality and Performance . 21
6.2.1 Provision 5.1.2-1 . 21
6.2.2 Provision 5.1.2-1.1 . 23
6.2.3 Provision 5.1.2-2 . 23
6.2.4 Provision 5.1.2-3 . 24
6.2.5 Provision 5.1.2-4 . 25
6.2.6 Provision 5.1.2-5 . 26
6.2.7 Provision 5.1.2-5.1 . 26
6.2.8 Provision 5.1.2-6 . 27
6.2.9 Provision 5.1.2-7 . 28
6.3 Principle 3: Evaluate the threats and manage the risks to the AI system. 28
6.3.1 Provision 5.1.3-1 . 28
6.3.2 Provision 5.1.3-1.1 . 29
6.3.3 Provision 5.1.3-1.2 . 30
6.3.4 Provision 5.1.3-1.3 . 31
6.3.5 Provision 5.1.3-2 . 32
6.3.6 Provision 5.1.3-3 . 32
6.3.7 Provision 5.1.3-4 . 33
6.4 Principle 4: Enable human responsibility for AI systems . 33
6.4.1 Provision 5.1.4-1 . 33
6.4.2 Provision 5.1.4-2 . 35
6.4.3 Provision 5.1.4-3 . 35
6.4.4 Provision 5.1.4-4 . 36
6.4.5 Provision 5.1.4-5 . 36
6.5 Principle 5: Identify, track, and protect assets . 37
6.5.1 Provision 5.2.1-1 . 37
6.5.2 Provision 5.2.1-2 . 38
6.5.3 Provision 5.2.1-3 . 39
6.5.4 Provision 5.2.1-3.1 . 39
ETSI
4 ETSI TR 104 128 V1.1.1 (2025-05)
6.5.5 Provision 5.2.1-4 . 40
6.5.6 Provision 5.2.1-4.1 . 41
6.5.7 Provision 5.2.1-4.2 . 42
6.6 Principle 6: Secure the infrastructure . 42
6.6.1 Provision 5.2.2-1 . 42
6.6.2 Provision 5.2.2-2 . 43
6.6.3 Provision 5.2.2-3 . 44
6.6.4 Provision 5.2.2-4 . 45
6.6.5 Provision 5.2.2-5 . 45
6.6.6 Provision 5.2.2-6 . 46
6.7 Principle 7: Secure the supply chain. 47
6.7.1 Provision 5.2.3-1 . 47
6.7.2 Provision 5.2.3-2 . 48
6.7.3 Provision 5.2.3-2.1 . 49
6.7.4 Provision 5.2.3-2.2 . 49
6.7.5 Provision 5.2.3-3 . 50
6.7.6 Provision 5.2.3-4 . 51
6.8 Principle 8: Document Data, Models, and Prompts . 51
6.8.1 Provision 5.2.4-1 . 51
6.8.2 Provision 5.2.4-1.1 . 52
6.8.3 Provision 5.2.4-1.2 . 52
6.8.4 Provision 5.2.4-2 . 53
6.8.5 Provision 5.2.4-2.1 . 54
6.8.6 Provision 5.2.4-3 . 54
6.9 Principle 9: Conduct appropriate testing and evaluation . 55
6.9.1 Provision 5.2.5-1 . 55
6.9.2 Provision 5.2.5-2 . 56
6.9.3 Provision 5.2.5-2.1 . 57
6.9.4 Provision 5.2.5-3 . 57
6.9.5 Provision 5.2.5-4 . 58
6.9.6 Provision 5.2.5-4.1 . 59
6.10 Principle 10: Communication and processes associated with End-users and Affected Entities . 59
6.10.1 Provision 5.3.1-1 . 59
6.10.2 Provision 5.3.1-2 . 60
6.10.3 Provision 5.3.1-2.1 . 61
6.10.4 Provision 5.3.1-2.2 . 61
6.10.5 Provision 5.3.1-3 . 62
6.11 Principle 11: Maintain Regular Security Updates, Patches, and Mitigations . 63
6.11.1 Provision 5.4.1-1 . 63
6.11.2 Provision 5.4.1-1.1 . 64
6.11.3 Provision 5.4.1-2 . 64
6.11.4 Provision 5.4.1-3 . 65
6.12 Principle 12: Monitor the system's behaviour . 65
6.12.1 Provision 5.4.2-1 . 65
6.12.2 Provision 5.4.2-2 . 67
6.12.3 Provision 5.4.2-3 . 67
6.12.4 Provision 5.4.2-4 . 68
6.13 Principle 13: Ensure proper data and model disposal . 69
6.13.1 Provision 5.5.1-1 . 69
6.13.2 Provision 5.5.1-2 . 70
Annex A: Mapping from design and organization principles to SAI . 71
Annex B: Bibliography . 73
History . 74

ETSI
5 ETSI TR 104 128 V1.1.1 (2025-05)
Intellectual Property Rights
Essential patents
IPRs essential or potentially essential to normative deliverables may have been declared to ETSI. The declarations
pertaining to these essential IPRs, if any, are publicly available for ETSI members and non-members, and can be
found in ETSI SR 000 314: "Intellectual Property Rights (IPRs); Essential, or potentially Essential, IPRs notified to
ETSI in respect of ETSI standards", which is available from the ETSI Secretariat. Latest updates are available on the
ETSI IPR online database.
Pursuant to the ETSI Directives including the ETSI IPR Policy, no investigation regarding the essentiality of IPRs,
including IPR searches, has been carried out by ETSI. No guarantee can be given as to the existence of other IPRs not
referenced in ETSI SR 000 314 (or the updates on the ETSI Web server) which are, or may be, or may become,
essential to the present document.
Trademarks
The present document may include trademarks and/or tradenames which are asserted and/or registered by their owners.
ETSI claims no ownership of these except for any which are indicated as being the property of ETSI, and conveys no
right to use or reproduce any trademark and/or tradename. Mention of those trademarks in the present document does
not constitute an endorsement by ETSI of products, services or organizations associated with those trademarks.
DECT™, PLUGTESTS™, UMTS™ and the ETSI logo are trademarks of ETSI registered for the benefit of its
Members. 3GPP™, LTE™ and 5G™ logo are trademarks of ETSI registered for the benefit of its Members and of the
3GPP Organizational Partners. oneM2M™ logo is a trademark of ETSI registered for the benefit of its Members and of ®
the oneM2M Partners. GSM and the GSM logo are trademarks registered and owned by the GSM Association.
Foreword
This Technical Report (TR) has been produced by ETSI Technical Committee Securing Artificial Intelligence (SAI).
Modal verbs terminology
In the present document "should", "should not", "may", "need not", "will", "will not", "can" and "cannot" are to be
interpreted as described in clause 3.2 of the ETSI Drafting Rules (Verbal forms for the expression of provisions).
"must" and "must not" are NOT allowed in ETSI deliverables except when used in direct citation.
Introduction
The growing deployment and technological advancements of Artificial Intelligence (AI) has further reiterated the need
for tailored security requirements for AI systems. The present document will guide stakeholders across the AI supply
chain on implementations of ETSI TS 104 223 [i.1] by providing non-exhaustive scenarios as well as examples of
practical solutions to meet these provisions.

ETSI
6 ETSI TR 104 128 V1.1.1 (2025-05)
1 Scope
The present document gives guidance to help stakeholders in the AI supply chain in meeting the cyber security
provisions defined for AI models and systems in ETSI TS 104 223 [i.1]. These stakeholders could include a diverse
range of entities, including large enterprises and government departments, independent developers, Small and Medium
Enterprises (SMEs), charities, local authorities and other non-profit organizations. The present document will also be
useful for stakeholders planning to purchase AI services. Additionally, the present document has been designed to
support the future development of AI cyber security standards, including specifications that could inform future
assurance and certification programmes. Where relevant, the present document signposts supporting specifications and
international frameworks.
2 References
2.1 Normative references
Normative references are not applicable in the present document.
2.2 Informative references
References are either specific (identified by date of publication and/or edition number or version number) or
non-specific. For specific references, only the cited version applies. For non-specific references, the latest version of the
referenced document (including any amendments) applies.
NOTE: While any hyperlinks included in this clause were valid at the time of publication, ETSI cannot guarantee
their long-term validity.
The following referenced documents may be useful in implementing an ETSI deliverable or add to the reader's
understanding, but are not required for conformance to the present document.
[i.1] ETSI TS 104 223: "Securing Artificial Intelligence (SAI); Baseline Cyber Security Requirements
for AI Models and Systems".
[i.2] ETSI TR 104 222: "Securing Artificial Intelligence; Mitigation Strategy Report".
[i.3] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying
down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008,
(EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and
Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act).
[i.4] ISO/IEC 22989:2022: "Information technology — Artificial intelligence — Artificial intelligence
concepts and terminology".
[i.5] NCSC: "Machine learning principles".
[i.6] NCSC: "Guidelines for Secure AI system development".
[i.7] CISA: "Joint Cybersecurity Information - Deploying AI Systems Securely".
[i.8] MITRE: "ATLAS Framework".
[i.9] NIST AI 100-2 E2023: "Adversarial Machine Learning: A Taxonomy and Terminology of Attacks
and Mitigations". ®
[i.10] OWASP : "AI Exchange". ®
[i.11] OWASP : "Top 10 for LLM Applications".
[i.12] ETSI TR 104 032: "Securing Artificial Intelligence (SAI); Traceability of AI Models".
[i.13] ETSI TR 104 225: "Securing Artificial Intelligence TC (SAI); Privacy aspects of AI/ML systems".
ETSI
7 ETSI TR 104 128 V1.1.1 (2025-05)
[i.14] ETSI TR 104 066: "Securing Artificial Intelligence (SAI); Security Testing of AI".
[i.15] ISO/IEC 42001:2023: "Information technology — Artificial intelligence — Management system".
[i.16] ISO/IEC 25059:2023: "Software engineering — Systems and software Quality Requirements and
Evaluation (SQuaRE) — Quality model for AI system)".
[i.17] NIST: "AI Risk Management Framework".
[i.18] International Scientific Report on the Safety of Advanced AI: "Interim Report".
[i.19] ICO: "Data Protection Audit Framework".
[i.20] ICO: "Generative AI: eight questions that developers and users need to ask".
[i.21] OWASP: "LLM Applications Cybersecurity and Governance Checklist".
[i.22] ICO: "How should we assess security and data minimisation in AI?".
[i.23] AI Village Defcon 2024 Report: "Generative AI Red Teaming Challenge: Transparency Report". ®
[i.24] OWASP : "Threat Modeling Cheat Sheet".
[i.25] ICO: "Newsletters".
[i.26] NCSC: "News on AI".
[i.27] MITRE: "Slack Channel". ®
[i.28] OWASP : "Slack Invite".
[i.29] NIST: "NIST Secure Software Development Framework for Generative AI and for Dual Use
Foundation Models Virtual Workshop". ®
[i.30] OWASP : "OWASP Secure Coding Practices-Quick Reference Guide".
[i.31] NCSC: "Secure Development and Deployment Guidance".
[i.32] ICO: "Do we need to consult the ICO?".
[i.33] NCSC: "Risk management".
[i.34] ICO: "What is the impact of Article 22 of the UK GDPR on fairness?".
[i.35] ICO: "AI and data protection risk toolkit". ®
[i.36] PCI DSS.
[i.37] FCA Standards.
[i.38] EU Artificial Intelligence Act: "The AI Act Explorer".
[i.39] NCSC: "Risk management - Threat Modelling".
[i.40] NIST: "NIST AI RMF Playbook".
[i.41] NIST: "NIST AI RMF Crosswalk Documents".
[i.42] ICO: "What are the accountability and governance implications of AI?".
[i.43] European AI Alliance: "Implementing AI Governance: from Framework to Practice". ®
[i.44] OWASP : "New OWASP AI Security Center of Excellence (CoE) Guide". ®
[i.45] MITRE ATT&CK .
[i.46] CSA: "Guidelines and Companion Guide on Securing AI Systems".
[i.47] NCSC: "Secure Design Principles".
ETSI
8 ETSI TR 104 128 V1.1.1 (2025-05)
[i.48] MLOps: "MLOps Principles".
[i.49] NIST: "Cybersecurity Supply Chain Risk Management".
[i.50] NIST SP 800-161 Rev. 1: "Cybersecurity Supply Chain Risk Management Practices for Systems
and Organizations".
[i.51] NCSC: "Supply Chain Security Guidance".
[i.52] ICO: "UK GDPR Guidance and Resources".
[i.53] NCSC: "Protecting bulk personal data".
[i.54] European Commission: "GDPR Compliance Guidelines by EU Commission".
[i.55] ISO/IEC 27001:2022: "Information security, cybersecurity and privacy protection — Information
security management systems — Requirements".
[i.56] ICO: "Reporting Processes".
[i.57] ICO: "Breach identification, assessment and logging".
[i.58] NCSC: "Cyber Security Toolkit for Boards: Developing a positive cyber security culture".
[i.59] NCSC: "Responding to a cyber incident - a guide for CEOs".
[i.60] NCSC: "Cloud Security Guidance - Using a cloud platform securely - Apply access controls".
[i.61] NCSC: "Zero trust architecture design principles". ®
[i.62] OWASP : "Threat Modeling Process".
[i.63] NIST SP 800-154: "Guide to Data-Centric System Threat Modeling".
[i.64] NCSC: "Introduction to Logging for Security Purposes". ®
[i.65] OWASP : "Top 10 for APIs - API4:2019 Lack of Resources & Rate Limiting". ®
[i.66] OWASP : "Threat modeling in practice". ®
[i.67] OWASP : "AI Top 10 API Security Risks - 2023".
[i.68] Yi Dong, Ronghui Mu, et al.: "Building Guardrails for Large Language Models". ®
[i.69] OWASP : "Threat Modeling Playbook".
[i.70] NCSC: "Risk Management - Cybersecurity Risk Management Framework".
[i.71] ISO 9001: "What does it mean in the supply chain?".
[i.72] CISA: "Software Bill of Materials (SBOM)".
[i.73] World Economic Forum: "Adopting AI Responsibly: Guidelines for Procurement of AI Solutions
by the Private Sector: Insight Report".
[i.74] MITRE: "AI Risk Database".
[i.75] NIST SP 800-137: "Information Security Continuous Monitoring (ISCM) for Federal Information
Systems and Organizations".
[i.76] NCSC: "Early Warning".
[i.77] NCSC: "Building a Security Operations Centre (SOC) - Threat Intelligence".
[i.78] ICO: "Audit Framework toolkit on AI - Human review".
[i.79] NISTIR 8312: "Four Principles of Explainable Artificial Intelligence".
[i.80] ICO: "Explaining Decisions made with AI".
ETSI
9 ETSI TR 104 128 V1.1.1 (2025-05)
[i.81] G. Detommaso, M. Bertran, R. Fogliato, A. Roth: "Multicalibration for Confidence Scoring in
LLMs".
[i.82] H. Luo, L. Specia: "From Understanding to Utilization: A Survey on Explainability for Large
Language Models".
[i.83] ICO: "Audits".
[i.84] ICO: "A Guide to ICO Audit - Artificial Intelligence (AI) Audits". ®
[i.85] OWASP : "LLM and Generative AI Security Solutions Landscape".
[i.86] CISA: "CISA, JCDC, Government and Industry Partners Conduct AI Tabletop Exercise". ®
[i.87] OWASP : "Guide for Preparing and Responding to Deepfake Events".
[i.88] ICO: "A guide to data security". ®
[i.89] OWASP : "OWASP Application Security Verification Standard (ASVS)".
[i.90] Technical Disclosure Commons: "Training Dataset Validation to Protect Machine Learning
Models from Data Poisoning".
[i.91] NCSC: "Device Security Guidance - Logging and Protective Monitoring".
[i.92] Towards Data Science: "LLM Monitoring and Observability - A Summary of Techniques and
Approaches for Responsible AI". ®
[i.93] "API Management Overview and links" from Wikipedia .
[i.94] The Alan Turing Institute: "What is synthetic data and how can it advance research and
development?".
[i.95] DSTL: "Machine learning with limited data".
[i.96] NCSC: "Secure development and deployment guidance".
[i.97] NIST SP 800-218: "Secure Software Development Framework (SSDF) Version 1.1:
Recommendations for Mitigating the Risk of Software Vulnerabilities".
[i.98] NCSC: "Vulnerability Disclosure Toolkit".
[i.99] OWASP: "Cryptographic Storage Cheat Sheet".
[i.100] ISO/IEC 29147:2018: "Information technology — Security techniques — Vulnerability
disclosure".
[i.101] CSA: "Incident Response Checklist".
[i.102] NCSC: "Incident management".
[i.103] CSA: "AI Organizational Responsibilities - Governance, Risk Management, Compliance and
Cultural Aspects".
[i.104] ICO: "Security requirements".
[i.105] MITRE: "System of Trust Framework".
[i.106] OWASP: "Machine Learning Bill of Materials (ML-BOM)".
[i.107] SLSA: "Supply-chain Levels for Software Artifacts (SLSA) specification".
[i.108] NCSC: "Vulnerability Management". ®
[i.109] OWASP : "OWASP Vulnerability Management Guide".
[i.110] ISO 2800:2022: "Security and resilience — Security management systems — Requirements".
ETSI
10 ETSI TR 104 128 V1.1.1 (2025-05)
[i.111] GOV.UK: "Guidance and tools for digital accessibility".
[i.112] GitHub: "AI-secure/DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT
Models".
[i.113] AI Safety Institute: "Inspect - An open-source framework for large language model evaluations".
[i.114] NIST: "AI Test, Evaluation, Validation, and Verification (TEVV)".
[i.115] NIST: "Dioptra Test Platform".
[i.116] ICO: "Retention and destruction of information".
[i.117] NIST: "Cryptographic Standards and Guidelines". ®
[i.118] OWASP : "OWASP Top 10 - 2021".
[i.119] "Generative AI Red Teaming Challenge: Transparency Report", AI Village Defcon 2024".
[i.120] NCSC: "Penetration testing".
[i.121] Cornell University: "Red-Teaming for Generative AI: Silver Bullet or Security Theater?".
[i.122] NCSC: "CHECK penetration testing".
[i.123] ISO/IEC/IEEE 29119-1:2022: "Software and systems engineering — Software testing — Part 1:
General concepts". ®
[i.124] Linux Foundation AI & Data Foundation: "Adversarial Robustness Toolbox". ®
NOTE: Linux is the registered trademark of Linus Torvalds in the U.S. and other countries.
[i.125] Github: "TextAttack: Generating adversarial examples for NLP models".
[i.126] ICO: "Generative AI second call for evidence: Purpose limitation in the generative AI lifecycle".
[i.127] ICO: "What do we need to know about accuracy and statistical accuracy?".
[i.128] ICO: "Disposal and deletion".
[i.129] ICO: "Guidance on AI and data protection".
[i.130] NIST SP 800-88 Rev. 1: "Guidelines for Media Sanitization".
[i.131] NCSC: "Secure sanitisation of storage media".
[i.132] ETSI TR 104 221: "Securing Artificial Intelligence (SAI); Problem Statement".
[i.133] ETSI TR 104 062 (V1.2.1) (2024-07); "Securing Artificial Intelligence; Automated Manipulation
of Multimedia Identity Representations".
[i.134] ETSI TR 104 029: "Securing Artificial Intelligence (SAI); Global Ecosystem".
[i.135] ETSI TS 104 050: "Securing Artificial Intelligence (SAI); AI Threat Ontology and definitions".
[i.136] ETSI TR 104 030: "Securing Artificial Intelligence (SAI); Critical Security Controls for Effective
Cyber Defence; Artificial Intelligence Sector".
[i.137] ETSI TS 102 165-1: "Cyber Security (CYBER); Methods and protocols; Part 1: Method and pro
forma for Threat, Vulnerability, Risk Analysis (TVRA)".
[i.138] ETSI TS 103 485: "CYBER; Mechanisms for privacy assurance and verification".
[i.139] ETSI TS 104 224: "Securing Artificial Intelligence (SAI); Explicability and transparency of AI
processing".
[i.140] ETSI TR 103 305-1: "Cyber Security (CYBER); Critical Security Controls for Effective Cyber
Defence; Part 1: The Critical Security Controls".
ETSI
11 ETSI TR 104 128 V1.1.1 (2025-05)
[i.141] ETSI TR 103 305-2: "CYBER; Critical Security Controls for Effective Cyber Defence; Part 2:
Measurement and auditing".
[i.142] ETSI TR 103 305-3: "CYBER; Critical Security Controls for Effective Cyber Defence; Part 3:
Service Sector Implementations".
[i.143] ETSI TR 103 305-4: "Cyber Security (CYBER); Critical Security Controls for Effective Cyber
Defence; Part 4: Facilitation Mechanisms".
[i.144] ETSI TR 103 305-5: "Cyber Security (CYBER); Critical Security Controls for Effective Cyber
Defence; Part 5: Privacy and personal data protection enhancement".
[i.145] ETSI TR 104 048: "Securing Artificial Intelligence (SAI); Data Supply Chain Security".
[i.146] JTC21024: "Risk management".
[i.147] Directive 2006/42/EC of the European Parliament and of the Council of 17 May 2006 on
machinery, and amending Directive 95/16/EC.
[i.148] Joint programme with Digital Europe.
[i.149] Directive 2002/58/EC of the European Parliament and of the Council of 12 July 2002 concerning
the processing of personal data and the protection of privacy in the electronic communications
sector (Directive on privacy and electronic communications).
[i.150] ETSI White Paper No. #34: "Artificial Intelligence and future directions for ETSI".
[i.151] ETSI White Paper No. #52: "ETSI Activities in the field of Artificial Intelligence Preparing the
implementation of the European AI Act".
3 Definition of terms, symbols and abbreviations
3.1 Terms
For the purposes of the present document, the following terms apply:
administrator: user who has the highest-privilege level possible for a user of the device
NOTE: This can mean they are able to change any configuration related to the intended functionality.
adversarial AI: techniques and methods that exploit vulnerabilities in the way AI systems work
EXAMPLE: By introducing malicious inputs to exploit their machine learning aspect and deceive the system
into producing incorrect or unintended results. These techniques are commonly used in adversarial
attacks but are not a distinct type of AI system.
adversarial attack: attempt to manipulate an AI model by introducing specially crafted inputs to cause the model to
produce errors or unintended outcomes
agentic systems: AI systems capable of initiating and executing actions autonomously, often interacting with other
systems or environments to achieve their goals
Application Programming Interface (API): set of tools and protocols that allow different software systems to
communicate and interact
Bill of Materials (BOM): comprehensive inventory of all components used in a system, such as software
dependencies, configurations, and hardware
data custodian: See definition in ETSI TS 104 223 [i.1].
data poisoning: type of adversarial attack where malicious data is introduced into training datasets to compromise the
AI system's performance or behaviour
ETSI
12 ETSI TR 104 128 V1.1.1 (2025-05)
Data Protection Impact Assessment (DPIA): tool used in UK GDPR to assess and mitigate privacy risks associated
with processing personal data in AI systems
embeddings: vector representations of data (e.g. text, images) that capture their semantic meaning in a mathematical
space, commonly used to improve the efficiency of search, clustering and similarity comparisons
evasion attack: type of adversarial attack where an adversary manipulates input data to cause the AI system to produce
incorrect or unexpected outputs without altering the underlying model
excessive agency: situation where an AI system has the capability to make decisions or take actions beyond its intended
scope, potentially leading to unintended consequences or misuse
explainability: ability of an AI system to provide human-understandable insights into its decision-making process
feature selection: process of selecting a subset of relevant features (variables) for use in model training to improve
performance, reduce complexity and prevent overfitting
generative AI: AI models that generate new content, such as text, images or audio, based on training data
EXAMPLE: Image synthesis models and large language models like chatbots.
governance framework: policies and procedures established to oversee the ethical, secure and compliant use of AI
systems
guardrails: predefined constraints or rules implemented to control and limit an AI system's outputs and behaviours,
ensuring safety, reliability, and alignment with ethical or operational guidelines
hallucination (in AI): AI-generated content that appears factual but is incorrect or misleading
NOTE: This is prevalent in LLMs, which can produce plausible sounding but inaccurate responses.
harm: injury or damage to the health, or damage to property or the environment, or interference with the fundamental
rights of the person
hazard: potential source of harm
hazardous situation: circumstance in which people, property or the environment is/are exposed to one or more hazards
inference attack: privacy attack where an adversary retrieves sensitive information about the training data, or users, by
analysing the outputs of an AI model
Large Language Model (LLM): type of AI model trained on vast amounts of text data to understand and generate
human-like language
EXAMPLE: Chatbots and content generation tools.
Machine Learning (ML): subset of AI where systems improve their performance on a task over time by learning from
data rather than following explicit instructions
Machine Learning Bill of Materials (ML BOM): specialized BOM for AI systems that catalogues models, datasets,
parameters and training configurations used in the development and deployment of machine learning solutions
Machine Learning Operations (ML Ops): set of practices and tools that streamline and standardize the deployment,
monitoring and maintenance of machine learning models in production environments
model extraction: attack where an adversary recreates or approximates a proprietary AI model by querying it and
analysing its outputs, potentially exposing trade secrets or intellectual property
model inversion: privacy attack where an adversary infers sensitive information about the training data by analysing
the AI model's outputs
multimodal models: AI models that process and integrate multiple types of data (e.g. text, images, audio) to perform
tasks
Natural Language Processing (NLP): type of machine learning that understands, interprets, and generates human
language in a way that is meaningful and useful
ETSI
13 ETSI TR 104 128 V1.1.1 (2025-05)
predictive (or discriminative) AI: type of machine learning designed to classify inputs or make predictions based on
existing data
NOTE: These models focus on identifying patterns and drawing distinctions, such as fraud detection or customer
segmentation.
prompt: input provided to an AI model, often in the form of text, that directs or guides its response
NOTE: Prompts can include questions, instructions, or context for the desired output.
prompt injection: attacker exploits a vulnerability in AI models by using prompts that produce unintended or harmful
outputs
Reinforcement Learning (RL): machine learning approach where an agent learns by interacting with its environment
and receiving feedback in the form of rewards or penalties
Retrieval-Augmented Generation (RAG): AI approach that combines external knowledge retrieval (e.g. documents
or databases) with prompts to language model generation to provide accurate and up-to-date responses
risk assessment: process of identifying, analysing and mitigating potential threats to the security or functionality of an
AI system
sanitisation: process of cleaning and validating data or inputs to remove errors, inconsistencies and malicious content,
ensuring data integrity and security
Software Bill of Materials (SBOM): detailed list of all software components in a system, including open-source
libraries, versions and licences to ensure transparency and security
system prompt: predefined input or set of instructions provided to guide the behaviour of an AI model, often used to
define its tone, rules, or operational context
threat modelling: process to identify and address potential security threats to a system during its design and
development phases
training: process of teaching an AI model to recognize patterns, make decisions, or generate outputs by exposing it to
labelled data and adjusting its parameters to minimize errors
web cont
...

Questions, Comments and Discussion

Ask us and Technical Secretary will try to provide an answer. You can facilitate discussion about the standard in here.

Loading comments...