ISO/IEC 25059:2023
Software engineering - Systems and software Quality Requirements and Evaluation (SQuaRE) - Quality model for AI systems
This document outlines a quality model for AI systems and is an application-specific extension to the standards on SQuaRE. The characteristics and sub-characteristics detailed in the model provide consistent terminology for specifying, measuring and evaluating AI system quality. The characteristics and sub-characteristics detailed in the model also provide a set of quality characteristics against which stated quality requirements can be compared for completeness.
Ingénierie du logiciel — Exigences de qualité et évaluation des systèmes et du logiciel (SQuaRE) — Modèle de qualité pour les systèmes d'IA
General Information
- Status
- Published
- Publication Date
- 27-Jun-2023
- Technical Committee
- ISO/IEC JTC 1/SC 42 - Artificial intelligence
- Drafting Committee
- ISO/IEC JTC 1/SC 42/WG 3 - Trustworthiness
- Current Stage
- 9092 - International Standard to be revised
- Start Date
- 31-Oct-2023
- Completion Date
- 30-Oct-2025
Overview - ISO/IEC 25059:2023, Quality model for AI systems
ISO/IEC 25059:2023 is an application-specific extension to the SQuaRE family that defines a quality model for AI systems. Developed by ISO/IEC JTC 1/SC 42, the standard provides consistent terminology and a structured set of characteristics and sub‑characteristics to help specify, measure and evaluate AI system quality. It complements ISO/IEC 25010 (SQuaRE) by addressing AI‑specific properties such as probabilistic behaviour, data dependence, continuous learning and human‑in‑the‑loop needs.
Key topics and technical requirements
- Purpose: Provide a vocabulary and model for stating and comparing quality requirements for AI systems and for assessing completeness of requirements.
- Product quality model (Clause 5): Extends ISO/IEC 25010 with AI‑specific characteristics and sub‑characteristics, including:
- User controllability - ability of users to intervene in an AI system in a timely manner.
- Functional adaptability - system’s capacity to acquire new information (including continuous learning) and use it for future predictions.
- Functional correctness - degree to which results meet required precision, noting that AI systems may not guarantee correctness in all circumstances.
- Robustness - ability to maintain functional correctness under varied conditions.
- Transparency - degree to which appropriate information (features, design choices, assumptions) is communicated to stakeholders.
- Intervenability - capability for operators to intervene to prevent harm or hazards.
- Quality in use model (Clause 6):
- Societal and ethical risk mitigation - measures addressing accountability, fairness, privacy, human control and other societal impacts.
- Transparency in use - user‑facing and societal transparency requirements.
- Measurement and evaluation: The model supports specifying measures and indicators (base and derived) for evaluation, without prescribing particular metrics; a rough sketch of how these concepts fit together follows this list.
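A rough, non-normative sketch of how base measures, derived measures and indicators relate, assuming an accuracy-style measure and an invented 0.95 target (ISO/IEC 25059 itself does not prescribe metrics):

```python
# Hypothetical sketch of SQuaRE-style measurement concepts (base measure,
# derived measure, indicator). Names, values and the 0.95 target are
# illustrative only; ISO/IEC 25059 does not prescribe particular metrics.
from dataclasses import dataclass


@dataclass
class BaseMeasure:
    name: str
    value: float  # obtained directly from measurement


def derived_accuracy(correct: BaseMeasure, total: BaseMeasure) -> float:
    """Derived measure: a function of two or more base measures."""
    return correct.value / total.value if total.value else 0.0


def correctness_indicator(accuracy: float, target: float = 0.95) -> bool:
    """Indicator: interprets a derived measure against a stated quality requirement."""
    return accuracy >= target


correct_outputs = BaseMeasure("correct outputs on evaluation set", 942)
total_outputs = BaseMeasure("total outputs on evaluation set", 1000)

acc = derived_accuracy(correct_outputs, total_outputs)
print(f"accuracy={acc:.3f}, meets requirement: {correctness_indicator(acc)}")
```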
Practical applications and target users
ISO/IEC 25059:2023 is intended for organizations and practitioners who design, develop, deploy or evaluate AI systems:
- AI developers and architects - to define quality requirements and design choices (robustness, adaptability, transparency).
- Quality assurance and test teams - to derive evaluation criteria and measurement plans aligned with SQuaRE concepts.
- Risk managers and compliance officers - to map societal and ethical risk mitigation into measurable quality goals.
- Procurement and auditors - to compare vendor claims against a standardized quality model.
- Regulators and policy makers - to reference consistent terminology for AI system assessment.
Related standards
- ISO/IEC 25010:2011 (SQuaRE system and software quality models)
- ISO/IEC 22989:2022 (AI concepts and terminology)
- ISO/IEC 23053:2022 (Framework for AI systems using ML)
- ISO/IEC TR 24028 (trustworthiness of AI systems) and ISO/IEC TR 29119‑11 (testing of AI-based systems)
- ISO/IEC 25012:2008 and the emerging ISO/IEC 5259 series (data quality for AI)
Using ISO/IEC 25059:2023 helps organizations create clearer, measurable and ethically aware quality requirements for AI systems, improving evaluation, procurement and oversight.
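As a hedged illustration of the completeness comparison described above, the sketch below checks a hypothetical requirements specification against a partial set of 25059 product-quality characteristics; the requirement texts and thresholds are invented for the example:

```python
# Hypothetical completeness check: compare stated quality requirements against
# a (partial) set of ISO/IEC 25059 product-quality characteristics. The
# requirement texts and the exact characteristic set are illustrative only.
AI_PRODUCT_CHARACTERISTICS = {
    "user controllability",
    "functional adaptability",
    "functional correctness",
    "robustness",
    "transparency",
    "intervenability",
}

stated_requirements = {
    "functional correctness": "top-1 accuracy >= 0.95 on the held-out test set",
    "robustness": "accuracy drop <= 2 points under defined input perturbations",
    "transparency": "model card available to all relevant stakeholders at release",
}

missing = AI_PRODUCT_CHARACTERISTICS - stated_requirements.keys()
for characteristic in sorted(missing):
    print(f"no requirement stated for: {characteristic}")
```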
Frequently Asked Questions
ISO/IEC 25059:2023 is a standard published jointly by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). Its full title is "Software engineering - Systems and software Quality Requirements and Evaluation (SQuaRE) - Quality model for AI systems". The standard outlines a quality model for AI systems and is an application-specific extension to the standards on SQuaRE. The characteristics and sub-characteristics detailed in the model provide consistent terminology for specifying, measuring and evaluating AI system quality, and a set of quality characteristics against which stated quality requirements can be compared for completeness.
ISO/IEC 25059:2023 is classified under the following ICS (International Classification for Standards) categories: 35.080 - Software. The ICS classification helps identify the subject area and facilitates finding related standards.
ISO/IEC 25059:2023 is available in PDF format for immediate download after purchase. The document can be added to your cart and obtained through the secure checkout process. Digital delivery ensures instant access to the complete standard document.
Standards Content (Sample)
INTERNATIONAL STANDARD ISO/IEC 25059
First edition
2023-06
Software engineering — Systems and
software Quality Requirements and
Evaluation (SQuaRE) — Quality model
for AI systems
Reference number: ISO/IEC 25059:2023(E)
© ISO/IEC 2023
All rights reserved. Unless otherwise specified, or required in the context of its implementation, no part of this publication may
be reproduced or utilized otherwise in any form or by any means, electronic or mechanical, including photocopying, or posting on
the internet or an intranet, without prior written permission. Permission can be requested from either ISO at the address below
or ISO’s member body in the country of the requester.
ISO copyright office
CP 401 • Ch. de Blandonnet 8
CH-1214 Vernier, Geneva
Phone: +41 22 749 01 11
Email: copyright@iso.org
Website: www.iso.org
Published in Switzerland
Contents

Foreword ... iv
Introduction ... v
1 Scope ... 1
2 Normative references ... 1
3 Terms and definitions ... 1
3.1 General ... 1
3.2 Product quality ... 2
3.3 Quality in use ... 3
4 Abbreviated terms ... 3
5 Product quality model ... 3
5.1 General ... 3
5.2 User controllability ... 4
5.3 Functional adaptability ... 4
5.4 Functional correctness ... 4
5.5 Robustness ... 4
5.6 Transparency ... 5
5.7 Intervenability ... 5
6 Quality in use model ... 6
6.1 General ... 6
6.2 Societal and ethical risk mitigation ... 6
6.3 Transparency ... 7
Annex A (informative) SQuaRE ... 8
Annex B (informative) How a risk-based approach relates to a quality-based approach and quality models ... 10
Annex C (informative) Performance ... 13
Bibliography ... 14
Foreword
ISO (the International Organization for Standardization) and IEC (the International Electrotechnical
Commission) form the specialized system for worldwide standardization. National bodies that are
members of ISO or IEC participate in the development of International Standards through technical
committees established by the respective organization to deal with particular fields of technical
activity. ISO and IEC technical committees collaborate in fields of mutual interest. Other international
organizations, governmental and non-governmental, in liaison with ISO and IEC, also take part in the
work.
The procedures used to develop this document and those intended for its further maintenance
are described in the ISO/IEC Directives, Part 1. In particular, the different approval criteria
needed for the different types of document should be noted. This document was drafted in
accordance with the editorial rules of the ISO/IEC Directives, Part 2 (see www.iso.org/directives or
www.iec.ch/members_experts/refdocs).
ISO and IEC draw attention to the possibility that the implementation of this document may involve the
use of (a) patent(s). ISO and IEC take no position concerning the evidence, validity or applicability of
any claimed patent rights in respect thereof. As of the date of publication of this document, ISO and IEC
had not received notice of (a) patent(s) which may be required to implement this document. However,
implementers are cautioned that this may not represent the latest information, which may be obtained
from the patent database available at www.iso.org/patents and https://patents.iec.ch. ISO and IEC shall
not be held responsible for identifying any or all such patent rights.
Any trade name used in this document is information given for the convenience of users and does not
constitute an endorsement.
For an explanation of the voluntary nature of standards, the meaning of ISO specific terms and
expressions related to conformity assessment, as well as information about ISO's adherence to
the World Trade Organization (WTO) principles in the Technical Barriers to Trade (TBT) see
www.iso.org/iso/foreword.html. In the IEC, see www.iec.ch/understanding-standards.
This document was prepared by Joint Technical Committee ISO/IEC JTC 1, Information technology,
Subcommittee SC 42, Artificial intelligence.
Any feedback or questions on this document should be directed to the user’s national standards
body. A complete listing of these bodies can be found at www.iso.org/members.html and
www.iec.ch/national-committees.
Introduction
High-quality software products and computer systems are crucial to stakeholders. Quality models,
quality requirements, quality measurement, and quality evaluation are standardized within the
International Standards on SQuaRE, see Annex A for further information.
AI systems require additional properties and characteristics of systems to be considered, and
stakeholders have varied needs. AI systems have different properties and characteristics. For example,
AI systems can:
— replace human decision-making;
— be based on noisy, or incomplete data;
— be probabilistic;
— adapt during operation.
According to ISO/IEC TR 24028,[2] trustworthiness has been understood and treated as both an ongoing organizational process as well as a non-functional requirement specifying emergent properties of a system — that is, a set of inherent characteristics with their attributes — within the context of quality of use as indicated in ISO/IEC 25010.
ISO/IEC TR 24028 discusses the applicability to AI systems of approaches that have been developed for conventional software. According to ISO/IEC TR 24028, existing work does not sufficiently address the data-driven, unpredictable nature of AI systems. While considering the existing body of work, ISO/IEC TR 24028 identifies the need for developing new International Standards for AI systems that can go beyond the characteristics and requirements of conventional software development.
ISO/IEC TR 24028 contains a related discussion on different approaches to testing and evaluation of AI systems. It states that for testing of an AI system, modified versions of existing software and hardware verification and validation techniques are needed. It identifies several conceptual differences between many AI systems and conventional systems and concludes that “the ability of the [AI] system to achieve the planned and desired result … may not always be measurable by conventional approaches to software testing”. Testing of AI systems is addressed in ISO/IEC TR 29119-11:2020.[3]
This document outlines an application-specific AI system extension to the SQuaRE quality model
specified in ISO/IEC 25010.
AI systems perform tasks. One or more tasks can be defined for an AI system. Quality requirements can
be specified for the evaluation of task fulfilment.
The quality model is considered from two perspectives, product quality as described in Clause 5 and quality in use in Clause 6. The relevance of these terms is explained, and links to other standardization deliverables (e.g. the ISO/IEC 24029 series[4][5]) are highlighted.
ISO/IEC 25012:2008[6] contains a model for data quality that is complementary to the model defined in this document. ISO/IEC 25012:2008 is being extended for AI systems by the ISO/IEC 5259 series.[7]
INTERNATIONAL STANDARD ISO/IEC 25059:2023(E)
Software engineering — Systems and software Quality
Requirements and Evaluation (SQuaRE) — Quality model
for AI systems
1 Scope
This document outlines a quality model for AI systems and is an application-specific extension to
the standards on SQuaRE. The characteristics and sub-characteristics detailed in the model provide
consistent terminology for specifying, measuring and evaluating AI system quality. The characteristics
and sub-characteristics detailed in the model also provide a set of quality characteristics against which
stated quality requirements can be compared for completeness.
2 Normative references
The following documents are referred to in the text in such a way that some or all of their content
constitutes requirements of this document. For dated references, only the edition cited applies. For
undated references, the latest edition of the referenced document (including any amendments) applies.
ISO/IEC 25010:2011, Systems and software engineering — Systems and software Quality Requirements
and Evaluation (SQuaRE) — System and software quality models
ISO/IEC 22989:2022, Information technology — Artificial intelligence — Artificial intelligence concepts
and terminology
ISO/IEC 23053:2022, Framework for Artificial Intelligence (AI) Systems Using Machine Learning (ML)
3 Terms and definitions
For the purposes of this document, the terms and definitions given in ISO/IEC 22989:2022,
ISO/IEC 23053:2022 and the following apply.
ISO and IEC maintain terminology databases for use in standardization at the following addresses:
— ISO Online browsing platform: available at https://www.iso.org/obp
— IEC Electropedia: available at https://www.electropedia.org/
3.1 General
3.1.1
measure, noun
variable to which a value is assigned as the result of measurement
Note 1 to entry: The term “measures” is used to refer collectively to base measures, derived measures, and
indicators.
[SOURCE: ISO/IEC/IEEE 15939:2017, 3.15]
3.1.2
measure, verb
make a measurement
[SOURCE: ISO/IEC 25010:2011, 4.4.6]
3.1.3
software quality measure
measure of internal software quality, external software quality or software quality in use
Note 1 to entry: Internal measure of software quality, external measure of software quality or software quality
in use measure are described in the quality model in ISO/IEC 25010.
[SOURCE: ISO/IEC 25040:2011, 4.61]
3.1.4
risk treatment measure
protective measure
action or means to eliminate hazards or reduce risks
[SOURCE: ISO/IEC Guide 51:2014, 3.13, modified — change reduction to treatment.]
3.1.5
transparency
degree to which appropriate information about the AI system is communicated to relevant stakeholders
Note 1 to entry: Appropriate information for AI system transparency can include aspects such as features,
components, procedures, measures, design goals, design choices and assumptions.
3.2 Product quality
3.2.1
user controllability
degree to which a user can appropriately intervene in an AI system’s functioning in a timely manner
3.2.2
functional adaptability
degree to which an AI system can accurately acquire information from data, or the result of previous
actions, and use that information in future predictions
3.2.3
functional correctness
degree to which a product or system provides the correct results with the needed degree of precision
Note 1 to entry: AI systems, and particularly those using machine learning models, do not usually provide
functional correctness in all observed circumstances.
[SOURCE: ISO/IEC 25010:2011, 4.2.1.2, modified — Note to entry added.]
3.2.4
intervenability
degree to which an operator can intervene in an AI system’s functioning in a timely manner to prevent
harm or hazard
3.2.5
robustness
degree to which an AI system can maintain its level of functional correctness under any circumstances
3.3 Quality in use
3.3.1
societal and ethical risk mitigation
degree to which an AI system mitigates potential risk to society
Note 1 to entry: Societal and ethical risk mitigation includes accountability, fairness, transparency and
explainability, professional responsibility, promotion of human value, privacy, human control of technology,
community involvement and development, respect for the rule of law, respect for international norms of
behaviour and labour practices.
4 Abbreviated terms
AI artificial intelligence
ML machine learning
5 Product quality model
5.1 General
An AI system product quality model is detailed in Figure 1. The model is based on a modified version of
a general system model provided in ISO/IEC 25010. New and modified sub-characteristics are identified
using a lettered footnote. Some of the sub-characteristics have different meanings or contexts as
compared to the ISO/IEC 25010 model. The modifications, additions and differences are described in
this clause. The unmodified original characteristics are part of the AI system product model and shall
be interpreted in accordance with ISO/IEC 25010.
[Figure 1 — AI system product quality model. Key: a = new sub-characteristic; m = modified sub-characteristic.]
Each of these modified or new sub-characteristics are listed in the remainder of this clause.
5.2 User controllability
User controllability is a new sub-characteristic of usability. User controllability is a property of an AI
system such that a human or another external agent can intervene in its functioning in a timely manner.
Enhanced controllability is helpful if unexpected behaviour cannot be completely avoided and that can
lead to negative consequences.
User controllability is related to controllability, which is described in ISO/IEC 22989:2022, 5.12.
5.3 Functional adaptability
Functional adaptability is a new sub-characteristic of functional suitability. Functional adaptability of
an AI system is the ability of the system to adapt itself to a changing dynamic environment it is deployed
in. AI systems can learn from new training data, production data and the results of previous actions
taken by the system. The concept of functional adaptability subsumes that of continuous learning, as
defined in ISO/IEC 22989:2022, 5.11.9.2.
Continuous learning is not a mandatory requirement for functional adaptability. For example, a
system that switches classification models based on events in its environment can also be considered
functionally adaptive.
Functional adaptability in AI systems is unlike other quality characteristics as there are system specific
consequences that cannot be interpreted using a straight-line linear scale (e.g. bad to good). Generally,
higher functional adaptability can result in improvements for the outcomes enacted by AI systems.
For some systems, high functional adaptability can cause additional unhelpful outcomes to become
more likely based on the system’s previous choices. Weightings of a decision path with relatively
high uncertainty, reinforced based on previous AI system decisions, can result in higher likelihood of
unintended negative outcomes. In this fashion, functional adaptability can reinforce negative human
cognitive biases.
While conventional algorithms usually produce the same result for the same set of inputs, AI systems,
due to continuous learning, can exhibit different behaviour and therefore can produce different results.
5.4 Functional correctness
Functional correctness exists in ISO/IEC 25010. The AI system product quality model amends the
description since AI systems, and particularly probabilistic ML methods, do not usually provide
functional correctness because a certain error rate is expected in their outputs. Therefore, it is
necessary to measure correctness and incorrectness carefully. Numerous measurements exist for these
purposes in the context of ML methods and examples of these as applicable to a classification model can be found in ISO/IEC TS 4213.[11]
Additionally, there can be a trade-off between characteristics such as performance efficiency,[12] robustness[13] and functional correctness.
Annex C provides further information about why functional correctness is preferred to other terms
such as the more general performance to describe the correctness of the model.
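As a non-normative illustration of such classification measures, accuracy, precision and recall can be derived from confusion-matrix counts; the counts below are invented:

```python
# Illustrative confusion-matrix measures for a binary classifier (accuracy,
# precision, recall). The counts are invented for the example; ISO/IEC TS 4213
# describes such measures and their reporting in detail.
tp, fp, fn, tn = 88, 7, 12, 893  # true positives, false positives, false negatives, true negatives

accuracy = (tp + tn) / (tp + fp + fn + tn)
precision = tp / (tp + fp)
recall = tp / (tp + fn)

print(f"accuracy={accuracy:.3f}, precision={precision:.3f}, recall={recall:.3f}")
```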
5.5 Robustness
Robustness is a new sub-characteristic of reliability. It is used to describe the ability of a system to
maintain its level of functional correctness (see Annex C for discussion on the term performance) under
any circumstances including:
— the presence of unseen, biased, adversarial or invalid data inputs;
— external i
...




Questions, Comments and Discussion
Ask a question and the Technical Secretary will try to provide an answer. You can also discuss the standard here.