Securing Artificial Intelligence (SAI) - Baseline Cyber Security Requirements for AI Models and Systems

The present document defines baseline security requirements for AI models and systems. The present document includes in its scope systems that incorporate deep neural networks, such as generative AI. For consistency, the term "AI systems" is used throughout the present document when framing the scope of provisions and the term "AI security", which is considered a subset of cybersecurity, is used when addressing any cybersecurity issues in the scope of the provisions. The present document is not designed for academics who are creating and testing AI systems only for research purposes (AI systems which are not going to be deployed).

Zavarovanje umetne inteligence (SAI) - Osnovne zahteve za kibernetsko varnost za modele in sisteme umetne inteligence

General Information

Status: Not Published
Public Enquiry End Date: 30-Nov-2025
Current Stage: 6060 - National Implementation/Publication (Adopted Project)
Start Date: 15-Dec-2025
Due Date: 19-Feb-2026
Standard
ETSI EN 304 223 V2.0.0 (2025-09) - Securing Artificial Intelligence (SAI); Baseline Cyber Security Requirements for AI Models and Systems
English language
16 pages
Standard
ETSI EN 304 223 V2.1.1 (2025-12) - Securing Artificial Intelligence (SAI); Baseline Cyber Security Requirements for AI Models and Systems
English language
16 pages
Draft
oSIST prEN 304 223 V2.0.0:2025
English language
16 pages

Standards Content (Sample)


Draft ETSI EN 304 223 V2.0.0 (2025-09)

EUROPEAN STANDARD
Securing Artificial Intelligence (SAI);
Baseline Cyber Security Requirements for
AI Models and Systems

Reference
REN/SAI-0022
Keywords
artificial intelligence, security
ETSI
650 Route des Lucioles
F-06921 Sophia Antipolis Cedex - FRANCE

Tel.: +33 4 92 94 42 00  Fax: +33 4 93 65 47 16

Siret N° 348 623 562 00017 - APE 7112B
Association à but non lucratif enregistrée à la
Sous-Préfecture de Grasse (06) N° w061004871

Important notice
The present document can be downloaded from the
ETSI Search & Browse Standards application.
The present document may be made available in electronic versions and/or in print. The content of any electronic and/or
print versions of the present document shall not be modified without the prior written authorization of ETSI. In case of any
existing or perceived difference in contents between such versions and/or in print, the prevailing version of an ETSI
deliverable is the one made publicly available in PDF format on ETSI deliver repository.
Users should be aware that the present document may be revised or have its status changed,
this information is available in the Milestones listing.
If you find errors in the present document, please send your comments to
the relevant service listed under Committee Support Staff.
If you find a security vulnerability in the present document, please report it through our
Coordinated Vulnerability Disclosure (CVD) program.
Notice of disclaimer & limitation of liability
The information provided in the present deliverable is directed solely to professionals who have the appropriate degree of
experience to understand and interpret its content in accordance with generally accepted engineering or
other professional standard and applicable regulations.
No recommendation as to products and services or vendors is made or should be implied.
In no event shall ETSI be held liable for loss of profits or any other incidental or consequential damages.

Any software contained in this deliverable is provided "AS IS" with no warranties, express or implied, including but not
limited to, the warranties of merchantability, fitness for a particular purpose and non-infringement of intellectual property
rights and ETSI shall not be held liable in any event for any damages whatsoever (including, without limitation, damages
for loss of profits, business interruption, loss of information, or any other pecuniary loss) arising out of or related to the use
of or inability to use the software.
Copyright Notification
No part may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying and
microfilm except as authorized by written permission of ETSI.
The content of the PDF version shall not be modified without the written authorization of ETSI.
The copyright and the foregoing restriction extend to reproduction in all media.

© ETSI 2025.
All rights reserved.
Contents
Intellectual Property Rights
Foreword
Modal verbs terminology
Introduction
1 Scope
2 References
2.1 Normative references
2.2 Informative references
3 Definition of terms, symbols and abbreviations
3.1 Terms
3.2 Symbols
3.3 Abbreviations
4 Stakeholders
5 AI Security Principles and Provisions
5.1 Secure Design
5.1.1 Principle 1: Raise awareness of AI security threats and risks
5.1.2 Principle 2: Design the AI system for security as well as functionality and performance
5.1.3 Principle 3: Evaluate the threats and manage the risks to the AI system
5.1.4 Principle 4: Enable human responsibility for AI systems
5.2 Secure Development
5.2.1 Principle 5: Identify, track and protect the assets
5.2.2 Principle 6: Secure the infrastructure
5.2.3 Principle 7: Secure the supply chain
5.2.4 Principle 8: Document data, models and prompts
5.2.5 Principle 9: Conduct appropriate testing and evaluation
5.3 Secure Deployment
5.3.1 Principle 10: Communication and processes associated with End-users and Affected Entities
5.4 Secure Maintenance
5.4.1 Principle 11: Maintain regular security updates, patches and mitigations
5.4.2 Principle 12: Monitor the system's behaviour
5.5 Secure End of Life
5.5.1 Principle 13: Ensure proper data and model disposal
History

Intellectual Property Rights
Essential patents
IPRs essential or potentially essential to normative deliverables may have been declared to ETSI. The declarations
pertaining to these essential IPRs, if any, are publicly available for ETSI members and non-members, and can be
found in ETSI SR 000 314: "Intellectual Property Rights (IPRs); Essential, or potentially Essential, IPRs notified to
ETSI in respect of ETSI standards", which is available from the ETSI Secretariat. Latest updates are available on the
ETSI IPR online database.
Pursuant to the ETSI Directives including the ETSI IPR Policy, no investigation regarding the essentiality of IPRs,
including IPR searches, has been carried out by ETSI. No guarantee can be given as to the existence of other IPRs not
referenced in ETSI SR 000 314 (or the updates on the ETSI Web server) which are, or may be, or may become,
essential to the present document.
Trademarks
The present document may include trademarks and/or tradenames which are asserted and/or registered by their owners.
ETSI claims no ownership of these except for any which are indicated as being the property of ETSI, and conveys no
right to use or reproduce any trademark and/or tradename. Mention of those trademarks in the present document does
not constitute an endorsement by ETSI of products, services or organizations associated with those trademarks.
DECT™, PLUGTESTS™, UMTS™ and the ETSI logo are trademarks of ETSI registered for the benefit of its
Members. 3GPP™, LTE™ and 5G™ logo are trademarks of ETSI registered for the benefit of its Members and of the
3GPP Organizational Partners. oneM2M™ logo is a trademark of ETSI registered for the benefit of its Members and of
the oneM2M Partners. GSM® and the GSM logo are trademarks registered and owned by the GSM Association.
Foreword
This draft European Standard (EN) has been produced by ETSI Technical Committee Securing Artificial Intelligence
(SAI), and is now submitted for the combined Public Enquiry and Vote phase of the ETSI EN Approval Procedure
(ENAP).
Proposed national transposition dates
Date of latest announcement of this EN (doa): 3 months after ETSI publication
Date of latest publication of new National Standard
or endorsement of this EN (dop/e): 6 months after doa
Date of withdrawal of any conflicting National Standard (dow): 6 months after doa

Modal verbs terminology
In the present document "shall", "shall not", "should", "should not", "may", "need not", "will", "will not", "can" and
"cannot" are to be interpreted as described in clause 3.2 of the ETSI Drafting Rules (Verbal forms for the expression of
provisions).
"must" and "must not" are NOT allowed in ETSI deliverables except when used in direct citation.
Introduction
Artificial Intelligence (AI) is transforming our daily lives. As the technology continues to evolve and be embedded in
people's lives, it is crucial that efforts are taken to protect AI systems from growing cyber security threats. Special focus
on the cybersecurity of Artificial Intelligence (AI) is important due to its distinct differences compared to traditional
software. These characteristics include security risks such as data poisoning, model obfuscation, indirect prompt
injection and operational differences associated with data management.
The present document utilizes existing good practice in the AI and cyber security landscape alongside novel measures
to provide a set of targeted high-level principles and provisions at each stage of the AI lifecycle. The objective of the
present document is to provide stakeholders in the AI supply chain with clear baseline security requirements to help
protect AI systems.
The present document separates principles and requirements into five phases. These are secure design, secure
development, secure deployment, secure maintenance and secure end of life. Relevant standards and publications are
signposted at the start of each principle to highlight links between the various documents and the present document.
This is not an exhaustive list.
NOTE: The principles described in the present document can also be mapped to the life cycle stages described in
ISO/IEC 22989 [i.3] as follows:
- the secure design and secure development principles can be applied during the Design and development life cycle stage;
- the secure deployment principles can be applied during the Deployment stage;
- the secure maintenance principles can be applied during the Operations and monitoring stage; and
- the secure end of life principle can be applied during the Retirement stage.
Information on the AI lifecycle and implementation examples are given in ETSI TR 104 128 [i.1].
ETSI TS 104 216 [i.4] provides guidance on how to evaluate and validate AI systems in accordance with the provisions
of the present document.
1 Scope
The present document defines baseline security requirements for AI models and systems. The present document
includes in its scope systems that incorporate deep neural networks, such as generative AI. For consistency, the term
"AI systems" is used throughout the present document when framing the scope of provisions and the term "AI security",
which is considered a subset of cybersecurity, is used when addressing any cybersecurity issues in the scope of the
provisions. The present document is not designed for academics who are creating and testing AI systems only for
research purposes (AI systems which are not going to be deployed).
2 References
2.1 Normative references
References are either specific (identified by date of publication and/or edition number or version number) or
non-specific. For specific references, only the cited version applies. For non-specific references, the latest version of the
referenced document (including any amendments) applies.
Referenced documents which are not found to be publicly available in the expected location might be found in the
ETSI docbox.
NOTE: While any hyperlinks included in this clause were valid at the time of publication, ETSI cannot guarantee
their long-term validity.
The following referenced documents are necessary for the application of the present document.
Not applicable.
2.2 Informative references
References are either specific (identified by date of publication and/or edition number or version number) or
non-specific. For specific references, only the cited version applies. For non-specific references, the latest version of the
referenced document (including any amendments) applies.
NOTE: While any hyperlinks included in this clause were valid at the time of publication, ETSI cannot guarantee
their long-term validity.
The following referenced documents may be useful in implementing an ETSI deliverable or add to the reader's
understanding, but are not required for conformance to the present document.
[i.1] ETSI TR 104 128: "Securing Artificial Intelligence (SAI); Guide to Cyber Security for AI Models
and Systems".
[i.2] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying
down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008,
(EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and
Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act).
[i.3] ISO/IEC 22989:2022: "Information technology — Artificial intelligence — Artificial intelligence
concepts and terminology".
[i.4] ETSI TS 104 216: "Securing Artificial Intelligence TC (SAI); Conformance assessment for AI
(EN 304 223)".
[i.5] ISO/IEC 27001:2022: "Information security, cybersecurity and privacy protection — Information
security management systems — Requirements".
[i.6] CISA: "Software Bill of Materials (SBOM)".
[i.7] NIST: "AI Risk Management Framework: Second Draft", 2022.
[i.8] NIST AI 100-1: "AI Risk Management Framework (AI RMF 1.0)", 2023.
[i.9] Australian Signals Directorate: "An introduction to Artificial Intelligence", 2023.
[i.10] World Economic Forum, IBM: "Presidio AI Framework: Towards Safe Generative AI Models",
2024.
[i.11] OWASP: "OWASP AI Exchange".
[i.12] MITRE ATLAS™: "Mitigations".
[i.13] Google®: "Secure AI Approach Framework: A quick guide to implementing the Secure AI
Framework (SAIF)", 2023.
[i.14] ELSA: "ELSA - European Lighthouse on Secure and Safe AI", 2023.
[i.15] Cisco: "The Cisco Responsible AI Framework", 2024.
[i.16] Amazon: "AWS Cloud Adoption Framework for Artificial Intelligence, Machine Learning, and
Generative AI", Amazon White Paper, 2024.
[i.17] NIST AI 100-2 E2023: "Adversarial Machine Learning: A Taxonomy and Terminology of Attacks
and Mitigations".
[i.18] ENISA: "Multilayer Framework for Good Cybersecurity Practices for AI", 2023.
[i.19] NCSC: "Guidelines for secure AI system development", 2023.
[i.20] Federal Office for Information Security: "AI Security Concerns in a Nutshell", 2023.
[i.21] G7 Hiroshima Summit: "Hiroshima Process International Code of Conduct for Organizations
Developing Advanced AI Systems", 2023.
[i.22] United States Department of Health and Human Services: "Trustworthy AI (TAI) Playbook:
Executive Summary", 2021.
[i.23] OpenAI: "Preparedness Framework (Beta)", 2023.
[i.24] Information Commissioner's Office (ICO): "Guidance on the AI Auditing Framework", 2020.
[i.25] Nvidia: "NeMo-Guardrails", 2023.
3 Definition of terms, symbols and abbreviations
3.1 Terms
For the purposes of the present document, the following terms apply:
adversarial attack: attempt to manipulate an AI model by introducing specially crafted inputs to cause the model to
produce errors or unintended outcomes
AI system: engineered system that generates outputs such as content, forecasts, recommendations or decisions for a
given set of human-defined objectives
Application Programming Interface (API): set of tools and protocols that allow different software systems to
communicate and interact
data poisoning: type of adversarial attack where malicious data is introduced into training datasets to compromise the
AI system's performance or behaviour
guardrails: predefined constraints or rules implemented to control and limit an AI system's outputs and behaviours,
ensuring safety, reliability, and alignment with ethical or operational guidelines
inference: reasoning by which conclusions are derived from known premises
NOTE 1: In AI, a premise is either a fact, a rule, a model, a feature or raw data.
NOTE 2: The term "inference" refers both to the process and its result.
input data: data for which an AI system calculates a predicted output or inference
machine learning algorithm: algorithm to determine parameters of a machine learning model from data according to
given criteria
machine learning model: mathematical construct that generates an inference or prediction based on input data or
information
model inversion: privacy attack where an adversary infers sensitive information about the training data by analysing
the AI model's outputs
model training: process to determine or to improve the parameters of a machine learning model, based on a machine
learning algorithm, by using training data
prediction: primary output of an AI system when provided with input data or information
NOTE 1: Predictions can be followed by additional outputs, such as recommendations, decisions and actions.
NOTE 2: Prediction does not necessarily refer to predicting something in the future.
NOTE 3: Predictions can refer to various kinds of data analysis or production applied to new data or historical data
(including translating text, creating synthetic images or diagnosing a previous power failure).
prompt: input provided to an AI model, often in the form of text, that directs or guides its response
NOTE: Prompts can include questions, instructions, or context for the desired output.
risk assessment: process of identifying, analysing and mitigating potential threats to the security or functionality of an
AI system
sanitisation: process of cleaning and validating data or inputs to remove errors, inconsistencies and malicious content,
ensuring data integrity and security
system prompt: predefined input or set of instructions provided to guide the behaviour of an AI model, often used to
define its tone, rules, or operational context
threat modelling: process to identify and address potential security threats to a system during its design and
development phases
training: process of teaching an AI model to recognize patterns, make decisions, or generate outputs by exposing it to
labelled data and adjusting its parameters to minimize errors
training data: data used to train a machine learning model
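
Of the terms above, sanitisation, guardrails and system prompt are the most directly implementable. The following Python sketch is purely illustrative and not part of the standard: the function names (sanitise_prompt, guarded_completion), the INJECTION_PATTERNS blocklist, the MAX_PROMPT_CHARS limit and the system-prompt marker are all hypothetical examples of how a deployer might pair input sanitisation with an output-side guardrail around a model call.

import re

# Hypothetical illustration, not a provision of EN 304 223. A real deployment
# would use maintained detection rules or a guardrail framework such as
# NeMo-Guardrails [i.25] rather than this toy blocklist.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"reveal (the )?system prompt", re.IGNORECASE),
]
MAX_PROMPT_CHARS = 4000  # assumed operational limit, not mandated by the standard

def sanitise_prompt(prompt: str) -> str:
    # "sanitisation" per clause 3.1: clean and validate an input, rejecting
    # malformed or malicious content before it reaches the model
    cleaned = prompt.strip()
    if not cleaned or len(cleaned) > MAX_PROMPT_CHARS:
        raise ValueError("prompt is empty or exceeds the configured length limit")
    for pattern in INJECTION_PATTERNS:
        if pattern.search(cleaned):
            raise ValueError("prompt matches a known injection pattern")
    return cleaned

def guarded_completion(prompt: str, model) -> str:
    # "guardrails" per clause 3.1: predefined constraints that limit the
    # system's outputs; `model` is any callable mapping str to str
    output = model(sanitise_prompt(prompt))
    if "BEGIN SYSTEM PROMPT" in output:  # hypothetical system-prompt marker
        return "[response withheld by guardrail]"
    return output

if __name__ == "__main__":
    echo_model = lambda p: "model saw: " + p  # stand-in for a real model call
    print(guarded_completion("Summarise clause 4.", echo_model))

Checks of this kind mitigate prompt injection only partially; the principles in clause 5 treat them as one layer alongside infrastructure, supply chain and monitoring controls.
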
3.2 Symbols
Void.
3.3 Abbreviations
For the purposes of the present document, the following abbreviations apply:
AI Artificial Intelligence
API Application Programming Interface
4 Stakeholders
The present clause defines the stakeholder groups that form the AI supply chain. An indication is given for each
principle on which stakeholders are primarily responsible for its implementation. A single entity can hold multiple
stakeholder roles, as defined in the present document, in addition to any responsibilities arising from different
regulatory regimes.
EXAMPLE: Under data protection law, when processing personal data, organizations can have a role of
controller and/or joint controller and/or processor, depending on their role in creating and setting
up AI systems.
All stakeholders included in Table 4-1 should note that they can have data protection obligations. Additionally, senior
leaders in an organization also have responsibilities to help protect their staff and infrastructure. Some provisions for
Developers in the present document are less applicable to AI systems involving open-source models.
...


ETSI EN 304 223 V2.1.1 (2025-12)

EUROPEAN STANDARD
Securing Artificial Intelligence (SAI);
Baseline Cyber Security Requirements for
AI Models and Systems

Foreword
This European Standard (EN) has been produced by ETSI Technical Committee Securing Artificial Intelligence (SAI).

National transposition dates
Date of adoption of this EN: 8 December 2025
Date of latest announcement of this EN (doa): 31 March 2026
Date of latest publication of new National Standard
or endorsement of this EN (dop/e): 30 September 2026
Date of withdrawal of any conflicting National Standard (dow): 30 September 2026



SLOVENIAN STANDARD
oSIST prEN 304 223 V2.0.0:2025
1 November 2025
Zavarovanje umetne inteligence (SAI) - Osnovne zahteve za kibernetsko varnost za modele in sisteme umetne inteligence
Securing Artificial Intelligence (SAI) - Baseline Cyber Security Requirements for AI Models and Systems
This Slovenian standard is identical to: ETSI EN 304 223 V2.0.0 (2025-09)
ICS: 35.030 IT Security
oSIST prEN 304 223 V2.0.0:2025 en
2003-01. Slovenski inštitut za standardizacijo. Reproduction of this standard, in whole or in part, is not permitted.
