ETSI TS 104 223 V1.1.1 (2025-04)
Securing Artificial Intelligence (SAI); Baseline Cyber Security Requirements for AI Models and Systems
DTS/SAI-0014
TECHNICAL SPECIFICATION
Securing Artificial Intelligence (SAI);
Baseline Cyber Security Requirements for
AI Models and Systems
Reference
DTS/SAI-0014
Keywords
artificial intelligence, security
ETSI
650 Route des Lucioles
F-06921 Sophia Antipolis Cedex - FRANCE
Tel.: +33 4 92 94 42 00 Fax: +33 4 93 65 47 16
Siret N° 348 623 562 00017 - APE 7112B
Association à but non lucratif enregistrée à la
Sous-Préfecture de Grasse (06) N° w061004871
Important notice
The present document can be downloaded from the
ETSI Search & Browse Standards application.
The present document may be made available in electronic versions and/or in print. The content of any electronic and/or
print versions of the present document shall not be modified without the prior written authorization of ETSI. In case of any
existing or perceived difference in contents between such versions and/or in print, the prevailing version of an ETSI
deliverable is the one made publicly available in PDF format on the ETSI deliver repository.
Users should be aware that the present document may be revised or have its status changed;
this information is available in the Milestones listing.
If you find errors in the present document, please send your comments to
the relevant service listed under Committee Support Staff.
If you find a security vulnerability in the present document, please report it through our
Coordinated Vulnerability Disclosure (CVD) program.
Notice of disclaimer & limitation of liability
The information provided in the present deliverable is directed solely to professionals who have the appropriate degree of
experience to understand and interpret its content in accordance with generally accepted engineering or
other professional standard and applicable regulations.
No recommendation as to products and services or vendors is made or should be implied.
No representation or warranty is made that this deliverable is technically accurate or sufficient or conforms to any law
and/or governmental rule and/or regulation and further, no representation or warranty is made of merchantability or fitness
for any particular purpose or against infringement of intellectual property rights.
In no event shall ETSI be held liable for loss of profits or any other incidental or consequential damages.
Any software contained in this deliverable is provided "AS IS" with no warranties, express or implied, including but not
limited to, the warranties of merchantability, fitness for a particular purpose and non-infringement of intellectual property
rights and ETSI shall not be held liable in any event for any damages whatsoever (including, without limitation, damages
for loss of profits, business interruption, loss of information, or any other pecuniary loss) arising out of or related to the use
of or inability to use the software.
Copyright Notification
No part may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying and
microfilm except as authorized by written permission of ETSI.
The content of the PDF version shall not be modified without the written authorization of ETSI.
The copyright and the foregoing restriction extend to reproduction in all media.
© ETSI 2025.
All rights reserved.
Contents
Intellectual Property Rights
Foreword
Modal verbs terminology
Introduction
1 Scope
2 References
2.1 Normative references
2.2 Informative references
3 Definition of terms, symbols and abbreviations
3.1 Terms
3.2 Symbols
3.3 Abbreviations
4 Audience
5 AI Security Principles and Provisions
5.1 Secure Design
5.1.1 Principle 1: Raise awareness of AI security threats and risks
5.1.2 Principle 2: Design the AI system for security as well as functionality and performance
5.1.3 Principle 3: Evaluate the threats and manage the risks to the AI system
5.1.4 Principle 4: Enable human responsibility for AI systems
5.2 Secure Development
5.2.1 Principle 5: Identify, track and protect the assets
5.2.2 Principle 6: Secure the infrastructure
5.2.3 Principle 7: Secure the supply chain
5.2.4 Principle 8: Document data, models and prompts
5.2.5 Principle 9: Conduct appropriate testing and evaluation
5.3 Secure Deployment
5.3.1 Principle 10: Communication and processes associated with End-users and Affected Entities
5.4 Secure Maintenance
5.4.1 Principle 11: Maintain regular security updates, patches and mitigations
5.4.2 Principle 12: Monitor the system's behaviour
5.5 Secure End of Life
5.5.1 Principle 13: Ensure proper data and model disposal
History
Intellectual Property Rights
Essential patents
IPRs essential or potentially essential to normative deliverables may have been declared to ETSI. The declarations
pertaining to these essential IPRs, if any, are publicly available for ETSI members and non-members, and can be
found in ETSI SR 000 314: "Intellectual Property Rights (IPRs); Essential, or potentially Essential, IPRs notified to
ETSI in respect of ETSI standards", which is available from the ETSI Secretariat. Latest updates are available on the
ETSI IPR online database.
Pursuant to the ETSI Directives including the ETSI IPR Policy, no investigation regarding the essentiality of IPRs,
including IPR searches, has been carried out by ETSI. No guarantee can be given as to the existence of other IPRs not
referenced in ETSI SR 000 314 (or the updates on the ETSI Web server) which are, or may be, or may become,
essential to the present document.
Trademarks
The present document may include trademarks and/or tradenames which are asserted and/or registered by their owners.
ETSI claims no ownership of these except for any which are indicated as being the property of ETSI, and conveys no
right to use or reproduce any trademark and/or tradename. Mention of those trademarks in the present document does
not constitute an endorsement by ETSI of products, services or organizations associated with those trademarks.
DECT™, PLUGTESTS™, UMTS™ and the ETSI logo are trademarks of ETSI registered for the benefit of its
Members. 3GPP™, LTE™ and 5G™ logo are trademarks of ETSI registered for the benefit of its Members and of the
3GPP Organizational Partners. oneM2M™ logo is a trademark of ETSI registered for the benefit of its Members and of
the oneM2M Partners. GSM® and the GSM logo are trademarks registered and owned by the GSM Association.
Foreword
This Technical Specification (TS) has been produced by ETSI Technical Committee Securing Artificial Intelligence
(SAI).
Modal verbs terminology
In the present document "shall", "shall not", "should", "should not", "may", "need not", "will", "will not", "can" and
"cannot" are to be interpreted as described in clause 3.2 of the ETSI Drafting Rules (Verbal forms for the expression of
provisions).
"must" and "must not" are NOT allowed in ETSI deliverables except when used in direct citation.
Introduction
Artificial Intelligence (AI) is transforming our daily lives. As the technology continues to evolve and becomes embedded in
people's lives, it is crucial that efforts are taken to protect AI systems from growing cyber security threats. A special focus
on the cyber security of AI is important because of its distinct differences compared to traditional software. These
characteristics include security risks such as data poisoning, model obfuscation, indirect prompt injection and operational
differences associated with data management.
The present document utilizes existing good practice in the AI and cyber security landscape alongside novel measures
to provide a set of targeted high-level principles and provisions at each stage of the AI lifecycle. The objective of the
present document is to provide stakeholders in the AI supply chain with clear baseline security requirements to help
protect AI systems.
Information on the AI lifecycle and implementation examples are given in ETSI TR 104 128 [i.1].
1 Scope
The present document defines baseline security requirements for AI models and systems. This includes systems that
incorporate deep neural networks, such as generative AI. For consistency, the term "AI systems" is used throughout the
present document when framing the scope of provisions, and "AI security" is considered a subset of cyber security. The
present document is not designed for academics who are creating and testing AI systems only for research purposes
(AI systems which are not going to be deployed).
The present document separates principles and requirements into five phases. These are secure design, secure
development, secure deployment, secure maintenance and secure end of life. Relevant standards and publications are
signposted at the start of each principle to highlight links between the various documents and the present document.
This is not an exhaustive list.
NOTE: The principles can also be mapped to the life cycle stages described in ISO/IEC 22989 [i.3]. The secure
design and development principles can be applied during the Design and development life cycle stage.
Similarly, the secure deployment principles can be applied during the Deployment stage, secure
maintenance to the Operations and monitoring stage and secure end of life during the Retirement stage.
2 References
2.1 Normative references
References are either specific (identified by date of publication and/or edition number or version number) or
non-specific. For specific references, only the cited version applies. For non-specific references, the latest version of the
referenced document (including any amendments) applies.
Referenced documents which are not found to be publicly available in the expected location might be found in the
ETSI docbox.
NOTE: While any hyperlinks included in this clause were valid at the time of publication, ETSI cannot guarantee
their long term validity.
The following referenced documents are necessary for the application of the present document.
[1] ISO/IEC 27001:2022: "Information security, cybersecurity and privacy protection — Information
security management systems — Requirements".
[2] CISA: "Software Bill of Materials (SBOM)".
[3] NIST: "AI Risk Management Framework: Second Draft", 2022.
[4] NIST AI 100-1: "AI Risk Management Framework 1.0", 2023.
[5] Australian Signals Directorate: "An introduction to Artificial Intelligence", 2023.
[6] World Economic Forum, IBM: "Presidio AI Framework: Towards Safe Generative AI Models",
2024.
[7] OWASP: "OWASP AI Exchange".
[8] MITRE ATLAS™: "Mitigations".
[9] Google®: "Secure AI Approach Framework: A quick guide to implementing the Secure AI
Framework (SAIF)", 2023.
[10] ELSA: "ELSA - European Lighthouse on Secure and Safe AI", 2023.
[11] Cisco: "The Cisco Responsible AI Framework", 2024.
[12] Amazon: "AWS Cloud Adoption Framework for Artificial Intelligence, Machine Learning, and
Generative AI", Amazon White Paper, 2024.
[13] NIST AI 100-2 E2023: "Adversarial Machine Learning Taxonomy: A Taxonomy and
Terminology of Attacks and Mitigations".
[14] ENISA: "Multilayer Framework for Good Cybersecurity Practices for AI", 2023.
[15] NCSC: "Guidelines for secure AI system development", 2023.
[16] Federal Office for Information Security: "AI Security Concerns in a Nutshell", 2023.
[17] G7 Hiroshima Summit: "Hiroshima Process International Code of Conduct for Organizations
Developing Advanced AI Systems", 2023.
[18] United States Department of Health and Human Services: "Trustworthy AI (TAI) Playbook:
Executive Summary", 2021.
[19] OpenAI: "Preparedness Framework (Beta)", 2023.
[20] Information Commissioner's Office (ICO): "Guidance on the AI Auditing Framework", 2020.
[21] Nvidia: "NeMo-Guardrails", 2023.
2.2 Informative references
References are either specific (identified by date of publication and/or edition number or version number) or
non-specific. For specific references, only the cited version applies. For non-specific references, the latest version of the
referenced document (including any amendments) applies.
NOTE: While any hyperlinks included in this clause were valid at the time of publication, ETSI cannot guarantee
their long term validity.
The following referenced documents are not necessary for the application of the present document but they assist the
user with regard to a particular subject area.
[i.1] ETSI TR 104 128: "Securing Artificial Intelligence (SAI); Guide to Cyber Security for AI Models
and Systems".
[i.2] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying
down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008,
(EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and
Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act).
[i.3] ISO/IEC 22989:2022: "Information technology — Artificial intelligence — Artificial intelligence
concepts and terminology".
3 Definition of terms, symbols and abbreviations
3.1 Terms
For the purposes of the present document, the following terms apply:
adversarial attack: attempt to manipulate an AI model by introducing specially crafted inputs to cause the model to
produce errors or unintended outcomes
AI system: engineered system that generates outputs such as content, forecasts, recommendations or decisions for a
given set of human-defined objectives
Application Programming Interface (API): set of tools and protocols that allow different software systems to
communicate and interact
data poisoning: type of adversarial attack where malicious data is introduced into training datasets to compromise the
AI system's performance or behaviour
guardrails: predefined constraints or rules implemented to control and limit an AI system's outputs and behaviours,
ensuring safety, reliability, and alignment with ethical or operational guidelines
inference: reasoning by which conclusions are derived from known premises
NOTE 1: In AI, a premise is either a fact, a rule, a model, a feature or raw data.
NOTE 2: The term "inference" refers both to the process and its result.
input data: data for which an AI system calculates a predicted output or inference
machine learning algorithm: algorithm to determine parameters of a machine learning model from data according to
given criteria
machine learning model: mathematical construct that generates an inference or prediction based on input data or
information
model inversion: privacy attack where an adversary infers sensitive information about the training data by analysing
the AI model's outputs
model training: process to determine or to improve the parameters of a machine learning model, based on a machine
learning algorithm, by using training data
prediction: primary output of an AI system when provided with input data or information
NOTE 1: Predictions can be followed by additional outputs, such as recommendations, decisions and actions.
NOTE 2: Prediction does not necessarily refer to predicting something in the future.
NOTE 3: Predictions can refer to various kinds of data analysis or production applied to new data or historical data
(including translating text, creating synthetic images or diagnosing a previous power failure).
prompt: input provided to an AI model, often in the form of text, that directs or guides its response
NOTE: Prompts can include questions, instructions, or context for the desired output.
risk assessment: process of identifying, analysing and mitigating potential threats to the security or functionality of an
AI system
sanitisation: process of cleaning and validating data or inputs to remove errors, inconsistencies and malicious content,
ensuring data integrity and security
system prompt: predefined input or set of instructions provided to guide the behaviour of an AI model, often used to
define its tone, rules, or operational context
threat modelling: process to identify and address potential security threats to a system during its design and
development phases
training: process of teaching an AI model to recognize patterns, make decisions, or generate outputs by exposing it to
labelled data and adjusting its parameters to minimize errors
training data: data used to train a machine learning model
3.2 Symbols
Void.
3.3 Abbreviations
For the purposes of the present document, the following abbreviations apply:
AI Artificial Intelligence
API Application Programming Interface
4 Audience
This clause defines the stakeholder groups that form the AI supply chain. For each principle, an indication is given of
which stakeholders are primarily responsible for its implementation. Importantly, a single entity can hold multiple
stakeholder roles in the present document, as well as responsibilities from different regulatory regimes.
NOTE: For example, under data protection law, organizations processing personal data can have the role of
controller and/or joint controller and/or processor, depending on their role in creating and setting up
AI systems.
All stakeholders included in Table 4-1 should note that they can have data protection obligations. Additionally, senior
leaders in an organization also have responsibilities to help protect their staff and infrastructure. Some provisions for
Developers in the present document are less applicable to AI systems involving open-source models. Further
information and guidance about different types of AI systems can be found in ETSI TR 104 128 [i.1].
Table 4-1: Stakeholder definition
Stakeholder Definitions
Dev
...