Information technology - Artificial intelligence - Guidance for AI applications

This document provides guidance for identifying the context, opportunities and processes for developing and applying AI applications. The guidance provides a macro-level view of the AI application context, the stakeholders and their roles, relationship to the life cycle of the system, and common AI application characteristics and considerations.

Technologies de l'information — Intelligence artificielle — Recommandations relatives aux applications de l'IA

General Information

Status: Published
Publication Date: 14-Jan-2024
Current Stage: 6060 - International Standard published
Start Date: 15-Jan-2024
Due Date: 19-Aug-2023
Completion Date: 15-Jan-2024

Overview

ISO/IEC 5339:2024 - Information technology - Artificial intelligence - Guidance for AI applications provides macro‑level guidance to identify the context, opportunities and processes for developing and applying AI applications. The standard scopes the AI application context, maps stakeholders and their roles to the AI system life cycle, and describes common functional and non‑functional characteristics (for example, trustworthiness, risk management and ethics). ISO/IEC 5339:2024 complements technical AI standards by offering a high‑level framework to support multi‑stakeholder communication and acceptance.

Key topics and technical focus

  • AI application context: Defines approaches for describing Who, What, When, Where, Why and How across life‑cycle stages, including methods, data sources and deployment models.
  • Stakeholder mapping: Identifies AI stakeholders (AI producer, AI developer, data provider, AI customer/user, AI subject, community, regulators) and their responsibilities across life‑cycle stages.
  • Functional characteristics: Describes common capabilities and decision‑support roles of AI applications (e.g., augmentation vs. autonomy).
  • Non‑functional considerations: Emphasizes trustworthiness, risk and risk management, ethics and societal concerns, and other quality attributes relevant to AI deployment.
  • Framework and perspectives: Presents a common AI application framework organized around make, use and impact perspectives to guide design, use and governance.
  • Processes and lifecycle alignment: Maps processes to ISO/IEC 22989 life‑cycle stages to ensure coherent stakeholder engagement and traceability.
  • Support materials: Includes informative annexes with use cases to illustrate practical application contexts.

Note: ISO/IEC 5339:2024 is guidance‑oriented rather than prescriptive; it references terminology and life‑cycle definitions in ISO/IEC 22989:2022.

Applications and who should use it

This guidance is practical for anyone involved in planning, specifying, governing or procuring AI applications:

  • Standards developers and technical committees seeking a common macro‑level framing for AI application standards.
  • AI system architects, developers and integrators who need to map stakeholders, processes and non‑functional requirements.
  • AI producers, data providers and cloud service customers to align roles, data practices and deployment choices.
  • AI customers, users, auditors and regulators to assess trustworthiness, risk controls and societal impacts.
  • Policy makers and organizational leaders planning governance, ethics and risk management frameworks for AI adoption.

Related standards

  • ISO/IEC 22989:2022 - Artificial intelligence concepts and terminology (normative reference).
  • ISO/IEC 5338 (referenced for life‑cycle alignment) and other ISO/IEC AI standards addressing technical and conformity aspects.

ISO/IEC 5339:2024 is targeted at improving cross‑stakeholder understanding of AI applications, supporting safer, more transparent and better‑governed AI deployment.

Standard

ISO/IEC 5339:2024 — Information technology — Artificial intelligence — Guidance for AI applications. Released: 2024-01-15

English language
27 pages

Frequently Asked Questions

ISO/IEC 5339:2024 is a standard published jointly by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). Its full title is "Information technology — Artificial intelligence — Guidance for AI applications". The standard provides guidance for identifying the context, opportunities and processes for developing and applying AI applications, with a macro-level view of the AI application context, the stakeholders and their roles, the relationship to the life cycle of the system, and common AI application characteristics and considerations.

ISO/IEC 5339:2024 is classified under the following ICS (International Classification for Standards) categories: 35.020 - Information technology (IT) in general. The ICS classification helps identify the subject area and facilitates finding related standards.

You can purchase ISO/IEC 5339:2024 directly from iTeh Standards. The document is available in PDF format and is delivered instantly after payment. Add the standard to your cart and complete the secure checkout process. iTeh Standards is an authorized distributor of ISO standards.

Standards Content (Sample)


International Standard
ISO/IEC 5339
First edition
2024-01
Information technology — Artificial intelligence — Guidance for AI applications
Technologies de l'information — Intelligence artificielle — Recommandations relatives aux applications de l'IA
Reference number
© ISO/IEC 2024
All rights reserved. Unless otherwise specified, or required in the context of its implementation, no part of this publication may
be reproduced or utilized otherwise in any form or by any means, electronic or mechanical, including photocopying, or posting on
the internet or an intranet, without prior written permission. Permission can be requested from either ISO at the address below
or ISO’s member body in the country of the requester.
ISO copyright office
CP 401 • Ch. de Blandonnet 8
CH-1214 Vernier, Geneva
Phone: +41 22 749 01 11
Email: copyright@iso.org
Website: www.iso.org
Published in Switzerland
Contents

Foreword
Introduction
1 Scope
2 Normative references
3 Terms and definitions
4 Motivations and objectives
5 AI application context and characteristics
5.1 Establishing approach for AI application context
5.2 AI application context
5.3 Stakeholders and processes
5.3.1 General
5.3.2 AI stakeholders
5.3.3 Other stakeholders
5.3.4 Processes
5.4 AI application functional characteristics
5.5 AI application non-functional characteristics and considerations
5.5.1 General
5.5.2 Trustworthiness
5.5.3 Risks and risk management
5.5.4 Ethics and societal concerns
6 Stakeholders' perspectives and AI application framework
6.1 General
6.2 Stakeholders' perspectives
6.3 AI application framework
7 Guidance for AI applications
7.1 General
7.1.1 General
7.1.2 AI producer perspective
7.1.3 Data provider perspective
7.1.4 AI developer perspective
7.1.5 AI application provider perspective
7.2 Use perspective
7.2.1 General
7.2.2 AI customer and AI user perspective
7.3 Impact perspective
7.3.1 General
7.3.2 Community perspective
7.3.3 Regulator and policy maker perspective
Annex A (informative) Use cases
Bibliography

Foreword
ISO (the International Organization for Standardization) and IEC (the International Electrotechnical
Commission) form the specialized system for worldwide standardization. National bodies that are
members of ISO or IEC participate in the development of International Standards through technical
committees established by the respective organization to deal with particular fields of technical activity.
ISO and IEC technical committees collaborate in fields of mutual interest. Other international organizations,
governmental and non-governmental, in liaison with ISO and IEC, also take part in the work.
The procedures used to develop this document and those intended for its further maintenance are described
in the ISO/IEC Directives, Part 1. In particular, the different approval criteria needed for the different types
of document should be noted. This document was drafted in accordance with the editorial rules of the ISO/
IEC Directives, Part 2 (see www.iso.org/directives or www.iec.ch/members_experts/refdocs).
ISO and IEC draw attention to the possibility that the implementation of this document may involve the
use of (a) patent(s). ISO and IEC take no position concerning the evidence, validity or applicability of any
claimed patent rights in respect thereof. As of the date of publication of this document, ISO and IEC had not
received notice of (a) patent(s) which may be required to implement this document. However, implementers
are cautioned that this may not represent the latest information, which may be obtained from the patent
database available at www.iso.org/patents and https://patents.iec.ch. ISO and IEC shall not be held
responsible for identifying any or all such patent rights.
Any trade name used in this document is information given for the convenience of users and does not
constitute an endorsement.
For an explanation of the voluntary nature of standards, the meaning of ISO specific terms and expressions
related to conformity assessment, as well as information about ISO's adherence to the World Trade
Organization (WTO) principles in the Technical Barriers to Trade (TBT) see www.iso.org/iso/foreword.html.
In the IEC, see www.iec.ch/understanding-standards.
This document was prepared by Joint Technical Committee ISO/IEC JTC 1, Information technology,
Subcommittee SC 42, Artificial intelligence.
Any feedback or questions on this document should be directed to the user’s national standards
body. A complete listing of these bodies can be found at www.iso.org/members.html and
www.iec.ch/national-committees.

Introduction
Artificial intelligence (AI) systems have the potential to create incremental changes and achieve new levels
of performance and capability in domains such as agriculture, transportation, fintech, education, energy,
healthcare and manufacturing. However, the potential risks related to lack of trustworthiness can impact AI
implementations and their acceptance. AI applications can involve and impact many stakeholders, including
individuals, organizations and society as a whole. The impact of AI applications can evolve over time, in
some cases due to the nature of the underlying data or legal environment. The stakeholders should be made
aware of their roles and responsibilities in their engagement. While detailed AI-related standards can serve
the interest of technical experts involved in engineering and development, this document provides a macro-
level context of the AI application life cycle, to facilitate multi-stakeholder communication, engagement and
acceptance.
This document contains guidance for AI applications based on a common framework, to provide multiple
macro-level perspectives. The framework incorporates “make”, “use” and “impact” perspectives. It also
incorporates AI characteristics and non-functional characteristics such as trustworthiness and risk
management. The guidance can be used by standards developers, application developers and other
interested parties to provide answers to the question: “What are the characteristics and considerations of
an AI application?”. The stakeholders are mapped to various stages of the AI system life cycle, highlighting
their roles and responsibilities and making them aware of the processes to follow to enable a coherent
stakeholder engagement for the AI application. These stakeholders can have various levels of AI expertise
and knowledge. Since AI applications can differ from non-AI software applications due to their continuously
evolving nature and aspects of trustworthiness, all stakeholders should be made aware of AI-specific
characteristics.
This document provides:
— this document’s motivation and objectives (Clause 4);
— an approach to identifying an AI application’s stakeholders, context, functional characteristics and non-
functional characteristics (Clause 5);
— an AI application framework that can be used to answer the question: “What are the characteristics and
considerations of an AI application?” (Clause 6);
— guidance for AI applications based on the make, use and impact perspectives (Clause 7).

International Standard ISO/IEC 5339:2024(en)
Information technology — Artificial intelligence — Guidance
for AI applications
1 Scope
This document provides guidance for identifying the context, opportunities and processes for developing
and applying AI applications. The guidance provides a macro-level view of the AI application context,
the stakeholders and their roles, relationship to the life cycle of the system, and common AI application
characteristics and considerations.
2 Normative references
The following documents are referred to in the text in such a way that some or all of their content constitutes
requirements of this document. For dated references, only the edition cited applies. For undated references,
the latest edition of the referenced document (including any amendments) applies.
ISO/IEC 22989:2022, Information technology — Artificial intelligence — Artificial intelligence concepts and
terminology
3 Terms and definitions
For the purposes of this document, the terms and definitions given in ISO/IEC 22989:2022 and the following
apply.
ISO and IEC maintain terminology databases for use in standardization at the following addresses:
— ISO Online browsing platform: available at https://www.iso.org/obp
— IEC Electropedia: available at https://www.electropedia.org/
3.1
AI application
use of AI with functional characteristics that operates in stakeholder contexts to deliver an intended result
3.2
cloud service
one or more capabilities offered via cloud computing (3.6) invoked using a defined interface
[SOURCE: ISO/IEC 22123-1:2023, 3.1.2]
3.3
private cloud
cloud deployment model (3.5) where cloud services (3.2) are used exclusively by a single cloud service customer
(3.4) and resources are controlled by that cloud service customer
[SOURCE: ISO/IEC 22123-1:2023, 3.2.4]
3.4
cloud service customer
party that is in a business relationship for the purpose of using cloud services (3.2)
Note 1 to entry: A business relationship does not necessarily imply financial agreements.

[SOURCE: ISO/IEC 22123-1:2023, 3.3.2, modified — "acting in a cloud service customer role" changed to "in a
business relationship for the purpose of using cloud services", Note 1 to entry added]
3.5
cloud deployment model
way in which cloud computing (3.6) can be organized based on the control and sharing of physical or virtual
resources
Note 1 to entry: The cloud deployment models include community cloud, hybrid cloud, private cloud and public cloud.
[SOURCE: ISO/IEC 22123-1:2023, 3.2.1]
3.6
cloud computing
paradigm for enabling network access to a scalable and elastic pool of shareable physical or virtual resources
with self-service provisioning and administration on-demand
Note 1 to entry: Examples of resources include servers, operating systems, networks, software, applications, and
storage equipment.
[SOURCE: ISO/IEC 22123-1:2023, 3.1.1, modified — Note 2 to entry deleted]
4 Motivations and objectives
This document establishes guidance based on the question: “What are the characteristics and considerations
of an AI application?” It provides a basis for a common understanding among stakeholders to promote
communication, engagement and acceptance of an AI application.
The formulation of this document is as follows:
— the context of an AI application described with respect to Who (stakeholders), What, When, Where, Why
and How at various stages of an AI system life cycle;
— the stakeholders – AI stakeholder roles such as AI provider, AI producer, AI customer, AI partner, AI
subject, consumers, community and relevant authorities;
— common AI application functional and non-functional characteristics and considerations.
5 AI application context and characteristics
5.1 Establishing approach for AI application context
This clause describes the approach for establishing the AI application context. This document uses the AI
system life cycle stages in accordance with ISO/IEC 22989:2022, Clause 6 and ISO/IEC 5338.[1] For each of
the stages, various stakeholders, processes and relationships are defined and mapped thus:
— Who: The stakeholders (e.g. entities, persons or groups) associated with the context whose interests and
values can be served, and whose concerns can be addressed.
— What: Activities associated with the context, such as
— AI system and application capabilities
— types of decisions being supported by the AI application.
— How: Specific methods associated with the context, such as
— degree of human involvement in decision-making (e.g. autonomous or semi-autonomous)
— AI system in an augmentation role (e.g. decision support, human-system collaboration)

— algorithmic processes
— sources, collection and provision of data
— deployment as a product or service
— When: Associated with the temporal context, i.e. a process in a particular stage of the AI system life
cycle, or a temporal activation of a process, such as the frequency of an application. This depends on the
context established in “What”.
— Where: Location associated with the context, i.e. where the AI application is used; internal to the
organization (e.g. for operations) or external to the organization (e.g. with customers); the deployment
mode of the application (e.g. on-premise, as a cloud service or through third parties).
— Why: The external causal and explanatory structures associated with the context, i.e. part of the value
proposition to the “Who” such as the customers, users and community, shows the application’s rationale,
objectives, benefits, considerations and impacts including economic, social, societal, etc.
5.2 AI application context
Figure 1 shows a typical AI application context with its stakeholders, processes and relationships together
with the different stages of an AI system life cycle.

Key
AI stakeholder
other stakeholder
process
AI system life cycle stage
stage transition
communication
organizational boundary
AI application characteristics (see 5.4)
Figure 1 — Typical AI application context
Other stakeholders are those in the community who are not involved with the development or use of the
AI application but are still impacted, as well as regulators and policy makers who have an impact on the
deployment of the application. The relationships between stakeholders include communication and exchanges. The
organizational boundary is used to delineate what is inside or outside of the producer’s organization
(e.g. pre-deployment vs. post-deployment). In certain cases, the AI application provider can be part of the
producer’s organization but have an external role. The three AI application characteristics (see 5.4) are also
reflected in Figure 1.
5.3 Stakeholders and processes
5.3.1 General
Figure 1 shows the relationships among the stakeholders (Who) and their roles (What), and where (Where),
when (When) and how (How) the processes (What) are employed.
Figure 1 also shows that the producer, customer, regulators and community (Who) also have value
considerations (Why) at stake in this context.
5.3.2 AI stakeholders
5.3.2.1 General
The AI stakeholders described here play one or more different roles and sub-roles in various stages of the
AI system life cycle. The name of the stakeholder is also indicative of its role or sub-role as described in
ISO/IEC 22989:2022, 5.19.
5.3.2.2 AI producer
An AI producer (Who) is an organization or entity that designs, develops, tests and deploys products or
services that use one or more AI systems. The AI producer takes on these roles as part of its organization’s
objective (Why, e.g. profit as well as value creation for its customers). These roles span the whole AI system
life cycle (When) and include management decisions about the inception and termination or retirement of
the AI system.
5.3.2.3 AI developer
An AI developer (Who) is an organization or entity that is concerned with the development of AI products
and services for the producer. The roles can include model and system design, development, implementation,
verification and validation (What) in the pre-deployment stages of the AI system life cycle (When). An
individual AI developer can be a member of the producer’s organization or a contractor or partner.
5.3.2.4 AI customer
An AI customer (Who) is an organization or entity that uses an AI product or service either directly or by its
provision to AI users. There is a business relationship between an AI application provider (see 5.3.2.6) and
an AI customer, e.g. engagement, product purchase or service subscription. The customers’ role spans the AI
system life cycle (When) since they create the demand, realize the value and sustain the viability of the AI
product (Why). They are often consulted by the AI producer during the inception to determine requirements
and participate in the verification and validation, deployment, operation and monitoring, retirement stages
of the AI system life cycle.
An AI customer or AI user (see 5.3.2.5) can be part of the AI application provider’s organization (internal, e.g.
a business function department) or have an arm's-length relationship (external, e.g. the application provider
is a third-party service provider) (Where).
5.3.2.5 AI user
An AI user (Who) is an organization or entity that uses AI products or services. An AI user can be an
individual from the community (Who) or a member of the customer organization or entity. A customer can
also be a user. An AI user does not have to be an AI customer [i.e. has a business relationship with the AI
application provider (see 5.3.2.6)]. An AI user’s role is usually centred around the operation and monitoring
stage of the AI system life cycle (When) to realize value from use of the AI product or service (Why).

5.3.2.6 AI application provider
In general, an AI application provider is an organization or entity that provides products or services that uses
one or more AI systems. In the AI application context, an AI application provider (Who) is an organization or
entity that provides the capabilities from an AI system (such as reasoning and decision-making) in the form
of an AI application (What) as a product or service (How) to internal or external customers as described in
ISO/IEC 22989:2022.
NOTE An AI application provider in this document is analogous to an AI product or service provider in
ISO/IEC 22989:2022.
An AI application provider can be internal (part of the AI producer’s organization) or external (such as
a third-party product or service provider). An AI application provider’s role is usually centred around
the deployment stage in the AI system life cycle (When). They can also participate in earlier stages by
contributing about potential application domains, locations, customers and users, decision types and the
particularities of the deployment environment.
5.3.2.7 AI partner
An AI partner is an organization or entity that provides services to the AI producer and AI application
provider as part of a business relationship.
5.3.2.8 Data provider
A data provider (Who) is an organization or entity that is concerned with providing data used by AI products
or services. A data provider either collects or prepares data (What), or both for use by the AI producer’s AI
model. The data provider can be a partner of the AI producer.
The role of a data provider is usually centred around pre-deployment stages (When). In certain
circumstances, such as where the AI system employs machine learning models, the data provider can also be
involved in the post-deployment stages to collect and prepare data for continuous validation (When).
5.3.3 Other stakeholders
5.3.3.1 General
Other stakeholders include those in the community that are not involved in the production or use of the AI
application but are still impacted, e.g. consumers. Regulators and policy makers whose mandate can have an
impact in the AI application context are also in this category.
5.3.3.2 Community
The use of AI technology can have impacts beyond the individual customer and user and affect other
community members (Who) (e.g. consumers, family, neighbours, work colleagues, social circle, affiliates).
5.3.3.3 Regulators and policy makers
A regulator (Who) is an authority in the locality where the AI application is deployed and operated, and
which has jurisdiction governing the use of AI technology based on existing legal requirements. Even though
compliance to legal requirements is assessed by regulators in the deployment, operation and monitoring
stages, the AI provider and other early-stage stakeholders should identify applicable risks and regulations
and provide solutions to avoid barriers to achieving the original objectives.
AI applications can be deployed in jurisdictions that have different regulations related to the collection and
use of data, as well as their operations.
A policy maker (Who) is an authority in the locality where the AI application is deployed and operated that
sets the legal requirements governing the use of AI technology.

5.3.4 Processes
5.3.4.1 General
A process is a function or activity that transforms a specified input into a desired output. In the context of an
AI application, the processes described in this clause are related to input data (What) that are transformed
(How) by the AI system into its capabilities (What). These capabilities are to be deployed (How, Where) in an
AI application (What) to augment a user’s decision-making (What, How, When).
5.3.4.2 AI system
An AI system is an engineered system that is designed, developed, verified and validated by the AI producer
to perform certain functions such as reasoning and decision-making, as described in ISO/IEC 22989:2022.
These functions produce “What the AI system can do”, i.e. the AI system capabilities. How these capabilities
are produced depends on the configuration and construction of the AI model (see 5.3.4.3) and the field
involved, e.g. computer vision, image recognition, natural language processing, machine translation, speech
synthesis, data mining and planning.
5.3.4.3 AI model and development
An AI model is a mathematical representation of a process (What) that forms the core of an AI system (see
5.3.4.2). The AI model can be developed from different technologies, such as neural networks, decision
trees, Bayesian networks, logic sentences and ontologies. These models are used to make predictions or to
compute decisions to support the functions of the AI system as described in ISO/IEC 22989:2022, 8.3.
Information required for the model’s development can be derived from machine learning by processing
prepared data with an algorithm (How). The data are prepared from sources appropriate for the domain
and decision-making environment of the AI application (see 5.3.4.4). Alternatively, information can be
derived from human-engineered declarative or procedural knowledge, and human expertise can be used in
logic programming and rule-based systems for making inferences (How) (see ISO/IEC 23053:2022, 7.1[13]).
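The two development routes described above (deriving a model from prepared data, or encoding human-engineered knowledge as rules) can be sketched in a few lines. This is purely illustrative — the standard defines no API, and the data, thresholds and rule names here are hypothetical:

```python
# Route 1 (machine learning, heavily simplified): derive a decision threshold
# from prepared, labelled data. Real systems would train e.g. a neural network
# or decision tree; this midpoint rule only illustrates "learning from data".
samples = [(37.1, 0), (37.4, 0), (38.4, 1), (39.0, 1)]  # (temperature, fever label)
threshold = (max(t for t, y in samples if y == 0) +
             min(t for t, y in samples if y == 1)) / 2   # midpoint between classes

# Route 2 (human-engineered knowledge): declarative if-then rules for inference.
RULES = [
    (lambda r: r["temperature"] > 38.0, "fever"),
    (lambda r: r["heart_rate"] > 100, "tachycardia"),
]

def infer(record: dict) -> list[str]:
    """Apply each rule to an input record and collect the conclusions that fire."""
    return [conclusion for condition, conclusion in RULES if condition(record)]

print(infer({"temperature": 38.6, "heart_rate": 85}))  # → ['fever']
```

Either route yields information the AI system uses to make predictions or compute decisions; in practice the two are often combined.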
5.3.4.4 AI application
The AI system capabilities (see 5.3.4.2) are applied to a decision-making environment in a particular domain,
including agriculture, transportation, fintech, education, energy, healthcare, manufacturing and many
others. This “application” can include incorporating other non-AI capabilities, features and customizations to
meet system specifications as well as the needs of customers or users. The AI application provider packages
AI system capabilities into a deployable AI application (What), which in turn performs its own unique AI
application capabilities. An AI application can be deployed as a product or service by the AI application
provider. The level of automation of the AI application is discussed in 5.3.4.6. The level of automation can be
found in ISO/IEC 22989:2022, 5.13.
An example is an AI system that provides natural language understanding (NLU) capabilities that are
packaged with a conversation builder and sentiment analysis functions to create a chatbot deployed as a
cloud service for online interaction with users (e.g. the use case in Clause A.3). Another example is an AI
system that provides image processing capabilities with deep learning that is used in a medical diagnostic
environment as an anomaly detecting AI application for biomedical imaging. The AI application uses
visualization of the training and evaluation data built around the AI system capabilities and interfaces for
pathologists in evaluating images.[3]
5.3.4.5 AI service
An AI service is an activity performed for the AI customer or user (How) that is based on an AI application’s
capabilities. The deployment of the service can be on-premise (e.g. Clause A.2) or as a cloud service (Where)
(e.g. Clause A.3).
5.3.4.6 AI-augmented decision-making
In a typical decision-making scenario, the decision maker is faced with the following: a set of uncertain
events each with a probability of happening; a set of actions that can be taken in response to the events; a
set of outcomes based on actions taken and certain events actually happening. To reduce the uncertainty
of predicting events, the decision maker can seek to gain a more accurate estimate of the probability of
events happening. This can be done by collecting pertinent data about the decision environment and process
them within the context of the decision into information to form predictions. Given these predictions and
the decision criteria such as to maximize the expected value of the decision, the decision maker’s expertise
can then be used to determine which action to take based on the expected value of each outcome and take
the best course of action. In some cases, external intelligence is also consulted in making the decision.
In this context, the AI application is the tool that a decision maker uses to perform some of these tasks. As
shown in Figure 1, the collection and preparation of data to be processed by the AI model (see 5.3.4.3) are
done for the AI system (see 5.3.4.2). The AI model makes predictions that can contribute to decision-making
and courses of action.
The AI application is designed for a certain level of automation. On one end of the spectrum, the decision and
action are taken by the AI application without human intervention. On the other end, the recommendation
of the AI application is used to augment the knowledge and expertise of the decision maker who ultimately
makes the decision and takes the action. The decision-making can occur when scheduled, triggered by
sensor or event when needed (When).
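The expected-value reasoning described above can be made concrete with a short sketch. Nothing here is defined by the standard — the event probabilities (standing in for the AI model's predictions), the candidate actions and the outcome values are all hypothetical:

```python
# Illustrative sketch of expected-value decision-making as described in 5.3.4.6.
# Probabilities stand in for the AI model's predictions; values are made up.

def expected_value(action_outcomes: dict[str, float],
                   event_probs: dict[str, float]) -> float:
    """Expected value of one action: sum of (outcome value × event probability)."""
    return sum(event_probs[event] * value
               for event, value in action_outcomes.items())

def best_action(actions: dict[str, dict[str, float]],
                event_probs: dict[str, float]) -> str:
    """Choose the action that maximizes expected value (the stated criterion)."""
    return max(actions, key=lambda a: expected_value(actions[a], event_probs))

# Two uncertain events and two candidate actions with an outcome value per event.
event_probs = {"demand_high": 0.7, "demand_low": 0.3}   # e.g. from the AI model
actions = {
    "increase_stock": {"demand_high": 120.0, "demand_low": -40.0},
    "hold_stock":     {"demand_high": 50.0,  "demand_low": 10.0},
}
print(best_action(actions, event_probs))  # → increase_stock
```

In an augmentation role the AI application would surface the predicted probabilities and the ranked actions, leaving the final choice to the human decision maker; in a fully autonomous configuration it would take the top-ranked action itself.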
5.4 AI application functional characteristics
An AI application can be distinguished from a non-AI application by its possession of one or more of the
following functional characteristics:
a) An AI application is built on the capabilities of an AI system that implements a model to acquire and
process information, with or without human intervention, by algorithm or programming. The model
can be implemented with supervised, semi-supervised or unsupervised machine learning, or with
programmed rules. The acquisition of information to build the model can also include the processes
related to how the information is used.
b) An AI application applies optimizations or inferences made with the model to augment decisions,
predictions or recommendations in a timely manner to meet specific objectives. Other capabilities,
features and customizations are usually added to fit the needs of the specific domain and decision
environment, as well as those of the AI customers and AI users. The form of optimization and inference
output to be applied depends on the model being built. The output is used to augment the intelligence
of the user in making decisions, predictions and recommendations. The application can react and
respond in a dynamic environment, including in real time.
c) In some cases, an AI application is updated and the model, system or application is improved by
evaluating interaction outcomes. The outcomes of interactions with users are evaluated against the
performance metrics of the model and used for continuous learning and improvement.
The relationship between these three characteristics and the stakeholders within the context of an AI
application is reflected in Figure 1.
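Characteristic c) amounts to a feedback loop: interaction outcomes are scored against a performance metric and used to decide whether the model should be improved. The metric, threshold and retraining policy below are illustrative assumptions, not requirements of this document:

```python
# Illustrative continuous-improvement check for an AI application.
# Outcomes from user interactions are (prediction, actual) pairs.

def accuracy(outcomes):
    # Fraction of interactions where the model's prediction was correct.
    correct = sum(1 for pred, actual in outcomes if pred == actual)
    return correct / len(outcomes)

def needs_retraining(outcomes, threshold=0.9):
    # Hypothetical policy: flag the model for retraining when the
    # observed accuracy over recent interactions drops below a threshold.
    return accuracy(outcomes) < threshold

recent = [("approve", "approve"), ("deny", "approve"), ("approve", "approve")]
```

With these three sample outcomes the accuracy is 2/3, below the assumed 0.9 threshold, so the model would be flagged for retraining. In practice the metric and threshold are chosen per application.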
5.5 AI application non-functional characteristics and considerations
5.5.1 General
The AI application functional characteristics are described in 5.4. The non-functional characteristics of an AI
application should also be considered by the stakeholders when making decisions about the AI application.
This clause introduces AI application-specific non-functional requirements.

© ISO/IEC 2024 – All rights reserved
5.5.2 Trustworthiness
5.5.2.1 General
Trustworthiness is an essential non-functional characteristic of an AI system. It signifies that the system
meets the expectations of its stakeholders in a verifiable way, as described in ISO/IEC 22989:2022, 5.15, and
expresses the system's quality as being dependable and reliable [2].
The trustworthiness of an AI application is based on the trustworthiness of its AI system and any additional
incorporated capabilities, features and customization. The different trustworthiness perspectives of
stakeholders (make, use, impact) are discussed in Clause 7.
The elements of trustworthiness are briefly introduced in this clause (from ISO/IEC TR 24028:2020 [2]).
Further development of trustworthiness has been undertaken in other International Standards such as
ISO/IEC 25059 [4].
5.5.2.2 AI robustness
AI robustness is the ability of an AI system to maintain its level of performance, as intended by its developers
and required by its customers and users, under any circumstances (see ISO/IEC 22989:2022, 5.15.2, and
ISO/IEC TR 24029-1 [5] and ISO/IEC 24029-2 [6] for robustness of neural networks).
5.5.2.3 AI reliability
AI reliability is the ability of an AI system or any of its subcomponents to perform its required functions
under stated conditions for a specific period of time (see ISO/IEC 22989:2022, 5.15.3).
5.5.2.4 AI resilience
AI resilience is the ability of an AI system to recover operational condition quickly following a fault or
disruptive incident. Some fault tolerant systems can operate continuously after such an incident, albeit with
degraded capabilities (see ISO/IEC 22989:2022, 5.15.4).
5.5.2.5 AI controllability
AI controllability is the characteristic of an AI system that allows an external agent to intervene in its
functioning (see ISO/IEC 22989:2022, 5.15.5).
5.5.2.6 AI explainability
AI explainability is the characteristic of an AI system that can express important factors influencing a
decision, prediction or recommendation in a way that humans can understand (see ISO/IEC 22989:2022,
5.15.6).
5.5.2.7 AI predictability
AI predictability is the characteristic of an AI system that enables reliable assumptions by stakeholders
about its behaviour and output, as described in ISO/IEC 22989:2022, 5.15.7. ISO/IEC TR 24028:2020 [2]
discusses this from the perspective of unpredictability.
5.5.2.8 AI transparency
AI transparency enables stakeholders to be informed of the purpose of the AI system and how it was
developed and deployed (see ISO/IEC 22989:2022, 5.15.8). This involves communicating information such
as goals, limitations, definitions, assumptions, algorithms, data sources and collection, security, privacy and
confidentiality protection, and level of automation. A discussion of traceability, an element of transparency,
as a potential source of ethical concern can be found in ISO/IEC TR 24368:2022 [7].

5.5.2.9 AI verification and validation
AI verification is the confirmation that an AI system was built correctly and fulfils specified requirements.
AI validation is the confirmation, with objective evidence, that the requirements for a specific intended use
of the AI application have been fulfilled (see ISO/IEC 22989:2022, 5.16). ISO/IEC 25059 [4] describes
software engineering verification and validation methods that are applicable to an AI system.
5.5.2.10 AI bias and fairness
A biased AI system can behave unfairly towards humans or certain subgroups. Fairness is a human
perception based on personal and societal norms and beliefs. Unfair behaviour of AI systems can have
negative, even harmful and devastating, impacts on individuals or groups (see ISO/IEC 22989:2022, 5.15.9,
and the discussion in ISO/IEC TR 24027:2021 [8]).
5.5.3 Risks and risk management
5.5.3.1 General
Risks, like trustworthiness, are a non-functional property of AI systems. AI systems, like traditional
software systems, operate within a spectrum of risk [9], which is determined by the severity of the potential
impact of a failure or unexpected behaviour and by the individuals or societies affected [2]. Risks can be
mitigated by risk management practices. The extent of risk management undertaken by an organization
depends on its “risk appetite”. In some cases where the adversity level is high, a “concern” can become a
risk management “objective” at all stages of the AI system life cycle. In this clause, the elements of risk and
risk management are introduced (see ISO/IEC 23894 [9]).
5.5.3.2 Risk management framework and processing
A risk management framework “to assist the organization in integrating risk management into specific
activities and functions” that are specific to the development, provision or offering, or use of AI systems
is introduced in ISO/IEC 23894:2023, Clause 5 [9]. The associated processes, which include the systematic
application of policies, procedures and practices for assessing, treating, reporting and mitigating risks, are
detailed in ISO/IEC 23894:2023, Clause 6 [9].
5.5.4 Ethics and societal concerns
5.5.4.1 General
On the one hand, AI technology has the potential to provide huge benefits to societies, organizations and
individuals. On the other hand, the application of AI technology also gives rise to potential and wide-ranging
ethical and societal concerns. Common ethical concerns relate to the means of collecting, processing and
disclosing personal data, conceivably with biased opinions, that feed opaque machine learning decision-
making algorithms which are not explainable (see ISO/IEC TR 24368:2022, Clauses 6 and 8 [7]).
5.5.4.2 Ethical framework
An AI ethical framework can be built on existing ethical frameworks such as virtue ethics, utilitarianism,
deontology and others (see the considerations in ISO/IEC TR 24368:2022 [7]). This clause describes the
approach for establishing an AI application context as described in ISO/IEC TR 24368:2022, 7.2 [7].
Organizations contemplating the development and use of AI in responsible ways can consider adopting
various AI principles (ISO/IEC TR 24368:2022, 6.2 [7], further discussed in Clause 8). Key themes associated
with the AI principles include accountability, safety and security, transparency and explainability, fairness
and non-discrimination, human control of technology, professional responsibility, promotion of human
values, international human rights, respect for international norms of behaviour, community involvement
and development, respect for the rule of law, sustainable environment and labour practices (see also
examples of how to build socially acceptable AI in ISO/IEC TR 24368:2022, Clause 8 [7]).

5.5.4.3 Societal concerns
The use of AI technology has the potential to impact a wide range of societal stakeholders beyond the
customers and users. These stakeholders can be members of the community where the AI technology is
deployed, or even future generations who will live with the impact of the technology on the quality of life
in the physical and work environment. It is the responsibility of an organization contemplating the
development and use of AI technology to recognize its social responsibility and to undertake stakeholder
identification and engagement to address the impact of the technology (see ISO 26000:2010 [10]).
5.5.4.4 Legal requirements and issues
AI technology is new and the legal requirements associated with its development, deployment and use are
not yet widely defined. Some regions have instituted legal requirements governing certain aspects of AI
technology and applications (e.g. facial recognition for law enforcement), and a wide range of proposals has
been made and debated. Currently there are no coordinated and cohesive legal requirements at the domain,
regional, national or international levels concerning AI technology.
6 Stakeholders’ perspectives and AI application framework
6.1 General
The AI application context described in 5.2 forms the foundation of the AI application framework defined in
this clause. The AI application framework can be used to answer the question: “What are the characteristics
and considerations of an AI application?”
6.2 Stakeholders’ perspectives
The AI application framework described here incorporates the perspectives of different groups of
stakeholders. These groups have dissimilar perspectives on the AI applicat
...
