ISO/IEC TR 24027:2021
Information technology - Artificial intelligence (AI) - Bias in AI systems and AI aided decision making
This document addresses bias in relation to AI systems, especially with regards to AI-aided decision-making. Measurement techniques and methods for assessing bias are described, with the aim to address and treat bias-related vulnerabilities. All AI system lifecycle phases are in scope, including but not limited to data collection, training, continual learning, design, testing, evaluation and use.
Technologie de l'information — Intelligence artificielle (IA) — Biais dans les systèmes d’IA et dans la prise de décision assistée par IA
General Information
- Status
- Published
- Publication Date
- 04-Nov-2021
- Technical Committee
- ISO/IEC JTC 1/SC 42 - Artificial intelligence
- Drafting Committee
- ISO/IEC JTC 1/SC 42/WG 3 - Trustworthiness
- Current Stage
- 6060 - International Standard published
- Start Date
- 05-Nov-2021
- Completion Date
- 05-Nov-2021
Overview
ISO/IEC TR 24027:2021 is a Technical Report from ISO/IEC that addresses bias in AI systems with a particular focus on AI-aided decision-making. It describes measurement techniques and methods for assessing bias and provides guidance to identify, evaluate and treat bias-related vulnerabilities across the entire AI system lifecycle - from data collection and design through training, continual learning, testing, deployment and use. The report is intended to help practitioners understand sources of unwanted bias and apply systematic risk-reduction practices.
Key Topics and Technical Highlights
- Sources of bias: human cognitive biases (e.g., implicit, confirmation, group attribution), societal bias, data bias (sampling, labelling, processing), and engineering decisions (feature engineering, algorithm selection, hyperparameter tuning).
- Assessment methods: foundational tools such as the confusion matrix and the fairness metrics discussed in the report - including equalized odds, equality of opportunity, demographic parity and predictive equality - along with other applicable metrics (see the illustrative sketch after this list).
- Lifecycle treatment: practical guidance for handling bias during inception (requirements and stakeholder identification), design and development (data representation, training, adversarial mitigation), verification and validation (static data analysis, label checks, internal/external validity testing), and deployment (continuous monitoring and transparency tools).
- Supporting material: informative annexes offering examples of bias, related open-source tools for bias assessment/mitigation, and a mapping example to ISO 26000 (social responsibility).
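The following minimal sketch is not taken from the report; the function, data and group labels are invented for illustration. It shows how the group-wise rates behind these metrics can be computed from labels, predictions and a protected attribute: demographic parity compares selection rates across groups, equality of opportunity compares true-positive rates, predictive equality compares false-positive rates, and equalized odds requires both of the latter to match.

```python
# Illustrative sketch (not part of ISO/IEC TR 24027): group-wise rates behind
# the fairness metrics named in Clause 7. All names and data are invented.
from collections import defaultdict

def rates_by_group(y_true, y_pred, group):
    """Per-group selection rate, true-positive rate and false-positive rate."""
    counts = defaultdict(lambda: {"tp": 0, "fp": 0, "fn": 0, "tn": 0})
    for t, p, g in zip(y_true, y_pred, group):
        key = "tp" if t and p else "fn" if t else "fp" if p else "tn"
        counts[g][key] += 1
    rates = {}
    for g, c in counts.items():
        pos = c["tp"] + c["fn"]              # actual positives in group g
        neg = c["fp"] + c["tn"]              # actual negatives in group g
        rates[g] = {
            "selection_rate": (c["tp"] + c["fp"]) / (pos + neg),  # demographic parity
            "tpr": c["tp"] / pos if pos else 0.0,                 # equality of opportunity
            "fpr": c["fp"] / neg if neg else 0.0,                 # predictive equality
        }
    return rates

# Toy data with two groups; equalized odds asks that both TPR and FPR match.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]
group  = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(rates_by_group(y_true, y_pred, group))
```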
Practical Applications - Who Should Use It
ISO/IEC TR 24027:2021 is relevant to:
- Data scientists & ML engineers designing models and preparing datasets to reduce unwanted bias.
- Product managers & system architects setting requirements for fair AI-aided decision-making.
- AI auditors, compliance officers and risk managers evaluating fairness, transparency and governance controls.
- Regulators and procurement teams assessing vendor claims about bias mitigation and model validation.
- QA and validation teams implementing testing, monitoring and continuous validation pipelines.
Use cases include fairness testing in hiring systems, credit scoring, healthcare triage tools, automated decision support, and any domain where biased outcomes create ethical, legal or reputational risk.
Related Standards and Resources
- Produced by ISO/IEC JTC 1/SC 42 (Artificial Intelligence) - aligns with broader AI governance work.
- Annex C maps themes to ISO 26000 (social responsibility).
- Annex B lists open-source tools for measurement and mitigation of bias.
ISO/IEC TR 24027:2021 is a practical reference to help organizations operationalize bias assessment and mitigation across AI lifecycles, improve AI fairness, and build more transparent, accountable AI systems.
Frequently Asked Questions
ISO/IEC TR 24027:2021 is a technical report developed by ISO/IEC JTC 1/SC 42 and published jointly by ISO and IEC. Its full title is "Information technology - Artificial intelligence (AI) - Bias in AI systems and AI aided decision making". The report addresses bias in relation to AI systems, especially with regards to AI-aided decision-making. It describes measurement techniques and methods for assessing bias, with the aim of addressing and treating bias-related vulnerabilities. All AI system lifecycle phases are in scope, including but not limited to data collection, training, continual learning, design, testing, evaluation and use.
ISO/IEC TR 24027:2021 is classified under the following ICS (International Classification for Standards) categories: 35.020 - Information technology (IT) in general. The ICS classification helps identify the subject area and facilitates finding related standards.
You can purchase ISO/IEC TR 24027:2021 directly from iTeh Standards. The document is available in PDF format and is delivered instantly after payment. Add the standard to your cart and complete the secure checkout process. iTeh Standards is an authorized distributor of ISO standards.
Standards Content (Sample)
TECHNICAL REPORT
ISO/IEC TR 24027
First edition
2021-11
Information technology — Artificial
intelligence (AI) — Bias in AI systems
and AI aided decision making
Technologie de l'information — Intelligence artificielle (IA) — Biais dans les systèmes d'IA et dans la prise de décision assistée par l'IA
Reference number: ISO/IEC TR 24027:2021(E)
© ISO/IEC 2021
All rights reserved. Unless otherwise specified, or required in the context of its implementation, no part of this publication may
be reproduced or utilized otherwise in any form or by any means, electronic or mechanical, including photocopying, or posting on
the internet or an intranet, without prior written permission. Permission can be requested from either ISO at the address below
or ISO’s member body in the country of the requester.
ISO copyright office
CP 401 • Ch. de Blandonnet 8
CH-1214 Vernier, Geneva
Phone: +41 22 749 01 11
Email: copyright@iso.org
Website: www.iso.org
Published in Switzerland
Contents
Foreword
Introduction
1 Scope
2 Normative references
3 Terms and definitions
3.1 Artificial intelligence
3.2 Bias
4 Abbreviations
5 Overview of bias and fairness
5.1 General
5.2 Overview of bias
5.3 Overview of fairness
6 Sources of unwanted bias in AI systems
6.1 General
6.2 Human cognitive biases
6.2.1 General
6.2.2 Automation bias
6.2.3 Group attribution bias
6.2.4 Implicit bias
6.2.5 Confirmation bias
6.2.6 In-group bias
6.2.7 Out-group homogeneity bias
6.2.8 Societal bias
6.2.9 Rule-based system design
6.2.10 Requirements bias
6.3 Data bias
6.3.1 General
6.3.2 Statistical bias
6.3.3 Data labels and labelling process
6.3.4 Non-representative sampling
6.3.5 Missing features and labels
6.3.6 Data processing
6.3.7 Simpson's paradox
6.3.8 Data aggregation
6.3.9 Distributed training
6.3.10 Other sources of data bias
6.4 Bias introduced by engineering decisions
6.4.1 General
6.4.2 Feature engineering
6.4.3 Algorithm selection
6.4.4 Hyperparameter tuning
6.4.5 Informativeness
6.4.6 Model bias
6.4.7 Model interaction
7 Assessment of bias and fairness in AI systems
7.1 General
7.2 Confusion matrix
7.3 Equalized odds
7.4 Equality of opportunity
7.5 Demographic parity
7.6 Predictive equality
7.7 Other metrics
8 Treatment of unwanted bias throughout an AI system life cycle
8.1 General
8.2 Inception
8.2.1 General
8.2.2 External requirements
8.2.3 Internal requirements
8.2.4 Trans-disciplinary experts
8.2.5 Identification of stakeholders
8.2.6 Selection and documentation of data sources
8.2.7 External change
8.2.8 Acceptance criteria
8.3 Design and development
8.3.1 General
8.3.2 Data representation and labelling
8.3.3 Training and tuning
8.3.4 Adversarial methods to mitigate bias
8.3.5 Unwanted bias in rule-based systems
8.4 Verification and validation
8.4.1 General
8.4.2 Static analysis of training data and data preparation
8.4.3 Sample checks of labels
8.4.4 Internal validity testing
8.4.5 External validity testing
8.4.6 User testing
8.4.7 Exploratory testing
8.5 Deployment
8.5.1 General
8.5.2 Continuous monitoring and validation
8.5.3 Transparency tools
Annex A (informative) Examples of bias
Annex B (informative) Related open source tools
Annex C (informative) ISO 26000 – Mapping example
Bibliography
Foreword
ISO (the International Organization for Standardization) is a worldwide federation of national standards
bodies (ISO member bodies). The work of preparing International Standards is normally carried out
through ISO technical committees. Each member body interested in a subject for which a technical
committee has been established has the right to be represented on that committee. International
organizations, governmental and non-governmental, in liaison with ISO, also take part in the work.
ISO collaborates closely with the International Electrotechnical Commission (IEC) on all matters of
electrotechnical standardization.
The procedures used to develop this document and those intended for its further maintenance are
described in the ISO/IEC Directives, Part 1. In particular, the different approval criteria needed for the
different types of ISO documents should be noted. This document was drafted in accordance with the
editorial rules of the ISO/IEC Directives, Part 2 (see www.iso.org/directives).
Attention is drawn to the possibility that some of the elements of this document may be the subject of
patent rights. ISO shall not be held responsible for identifying any or all such patent rights. Details of
any patent rights identified during the development of the document will be in the Introduction and/or
on the ISO list of patent declarations received (see www.iso.org/patents).
Any trade name used in this document is information given for the convenience of users and does not
constitute an endorsement.
For an explanation of the voluntary nature of standards, the meaning of ISO specific terms and
expressions related to conformity assessment, as well as information about ISO's adherence to
the World Trade Organization (WTO) principles in the Technical Barriers to Trade (TBT), see
www.iso.org/iso/foreword.html.
This document was prepared by Technical Committee ISO/IEC JTC 1 Information technology,
Subcommittee SC 42, Artificial intelligence.
Any feedback or questions on this document should be directed to the user’s national standards body. A
complete listing of these bodies can be found at www.iso.org/members.html.
Introduction
Bias in artificial intelligence (AI) systems can manifest in different ways. AI systems that learn patterns
from data can potentially reflect existing societal bias against groups. While some bias is necessary
to address the AI system objectives (i.e. desired bias), there can be bias that is not intended in the
objectives and thus represents unwanted bias in the AI system.
Bias in AI systems can be introduced as a result of structural deficiencies in system design, arise from
human cognitive bias held by stakeholders or be inherent in the datasets used to train models. That
means that AI systems can perpetuate or augment existing bias or create new bias.
Developing AI systems with outcomes free of unwanted bias is a challenging goal. AI system function
behaviour is complex and can be difficult to understand, but the treatment of unwanted bias is
possible. Many activities in the development and deployment of AI systems present opportunities
for identification and treatment of unwanted bias to enable stakeholders to benefit from AI systems
according to their objectives.
Bias in AI systems is an active area of research. This document articulates current best practices to
detect and treat bias in AI systems or in AI-aided decision-making, regardless of source. The document
covers topics such as:
— an overview of bias (5.2) and fairness (5.3);
— potential sources of unwanted bias and terms to specify the nature of potential bias (Clause 6);
— assessing bias and fairness (Clause 7) through metrics;
— addressing unwanted bias through treatment strategies (Clause 8).
TECHNICAL REPORT ISO/IEC TR 24027:2021(E)
Information technology — Artificial intelligence (AI) —
Bias in AI systems and AI aided decision making
1 Scope
This document addresses bias in relation to AI systems, especially with regards to AI-aided decision-
making. Measurement techniques and methods for assessing bias are described, with the aim to
address and treat bias-related vulnerabilities. All AI system lifecycle phases are in scope, including but
not limited to data collection, training, continual learning, design, testing, evaluation and use.
2 Normative references
ISO/IEC 22989 1), Information technology — Artificial intelligence — Artificial intelligence concepts and terminology
ISO/IEC 23053 2), Framework for Artificial Intelligence (AI) Systems Using Machine Learning (ML)
3 Terms and definitions
For the purposes of this document, the terms and definitions given in ISO/IEC 22989 and ISO/IEC 23053 and the following apply.
ISO and IEC maintain terminological databases for use in standardization at the following addresses:
— ISO Online browsing platform: available at https://www.iso.org/obp
— IEC Electropedia: available at https://www.electropedia.org/
3.1 Artificial intelligence
3.1.1
maximum likelihood estimator
estimator assigning the value of the parameter where the likelihood function attains or approaches its
highest value
Note 1 to entry: Maximum likelihood estimation is a well-established approach for obtaining parameter
estimates where a distribution has been specified [for example, normal, gamma, Weibull and so forth]. These
estimators have desirable statistical properties (for example, invariance under monotone transformation) and in
many situations provide the estimation method of choice. In cases in which the maximum likelihood estimator is
biased, a simple bias correction sometimes takes place.
[SOURCE: ISO 3534-1:2006, 1.35]
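As an informal illustration that is not part of the ISO/IEC text, the definition above can be written as a formula, assuming n independent, identically distributed observations drawn from a specified density f:

```latex
% Illustrative only; not part of the ISO/IEC text.
\hat{\theta}_{\mathrm{ML}}
  = \operatorname*{arg\,max}_{\theta} L(\theta; x_1, \ldots, x_n)
  = \operatorname*{arg\,max}_{\theta} \prod_{i=1}^{n} f(x_i \mid \theta)
```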
3.1.2
rule-based systems
knowledge-based system that draws inferences by applying a set of if-then rules to a set of facts
following given procedures
[SOURCE: ISO/IEC 2382:2015, 2123875]
1) Under preparation. Stage at the time of publication: ISO/DIS 22989:2021.
2) Under preparation. Stage at the time of publication: ISO/DIS 23053:2021.
3.1.3
sample
subset of a population made up of one or more sampling units
Note 1 to entry: The sampling units could be items, numerical values or even abstract entities depending on the
population of interest.
Note 2 to entry: A sample from a normal, a gamma, an exponential, a Weibull, a lognormal or a type I extreme
value population will often be referred to as a normal, a gamma, an exponential, a Weibull, a lognormal or a type
I extreme value sample, respectively.
[SOURCE: ISO 16269-4:2010, 2.1, modified - added domain]
3.1.4
knowledge
information about objects, events, concepts or rules, their relationships and properties, organized for
goal-oriented systematic use
Note 1 to entry: Information can exist in numeric or symbolic form.
Note 2 to entry: Information is data that has been contextualized, so that it is interpretable. Data are created
through abstraction or measurement from the world.
3.1.5
user
individual or group that interacts with a system or benefits from a system during its utilization
[SOURCE: ISO/IEC/IEEE 15288:2015, 4.1.52]
3.2 Bias
3.2.1
automation bias
propensity for humans to favour suggestions from automated decision-making systems and to ignore
contradictory information made without automation, even if it is correct
3.2.2
bias
systematic difference in treatment of certain objects, people, or groups in comparison to others
Note 1 to entry: Treatment is any kind of action, including perception, observation, representation, prediction or
decision.
3.2.4
human cognitive bias
bias (3.2.2) that occurs when humans are processing and interpreting information
Note 1 to entry: Human cognitive bias influences judgement and decision-making.
3.2.5
confirmation bias
type of human cognitive bias (3.2.4) that favours predictions of AI systems that confirm pre-existing
beliefs or hypotheses
3.2.6
convenience sample
sample of data that is chosen because it is easy to obtain, rather than because it is representative
3.2.7
data bias
data properties that if unaddressed lead to AI systems that perform better or worse for different groups
(3.2.8)
3.2.8
group
subset of objects in a domain that are linked because they have shared characteristics
3.2.10
statistical bias
type of consistent numerical offset in an estimate relative to the true underlying value, inherent to most
estimates
[SOURCE: ISO 20501:2019, 3.3.9]
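As an informal illustration that is not part of the ISO/IEC text, the consistent numerical offset referred to in this definition is usually written as the difference between the expected value of an estimator and the true parameter value:

```latex
% Illustrative only; not part of the ISO/IEC text.
\operatorname{Bias}(\hat{\theta}) = \mathbb{E}[\hat{\theta}] - \theta
% The estimator \hat{\theta} is unbiased when this difference is zero.
```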
4 Abbreviations
AI artificial intelligence
ML machine learning
5 Overview of bias and fairness
5.1 General
In this document, the term bias is defined as a systematic difference in the treatment of certain objects,
people, or groups in comparison to others, in its generic meaning beyond the context of AI or ML. In
a social context, bias has a clear negative connotation as one of the main causes of discrimination
and injustice. Nevertheless, it is the systematic differences in human perception, observation and the
resultant representation of the environment and situations that make the operation of ML algorithms
possible.
This document uses the term bias to characterize the input and the building blocks of AI systems in
terms of their design, training and operation. AI systems of different types and purposes (such as for
labelling, clustering, making predictions or decisions) rely on those biases for their operation.
To characterize the AI system outcome or, more precisely, its possible impact on society, this document
uses the terms unfairness and fairness, instead. Fairness can be described as a treatment, a behaviour
or an outcome that respects established facts, beliefs and norms and is not determined by favouritism
or unjust discrimination.
While certain biases are essential for proper AI system operation, unwanted biases can be introduced
into an AI system unintentionally and can lead to unfair system results.
5.2 Overview of bias
AI systems are enabling new experiences and capabilities for people around the globe. AI systems can
be used for various tasks, such as recommending books and television shows, predicting the presence
and severity of a medical condition, matching people to jobs and partners or identifying if a person is
crossing the street. Such computerized assistive or decision-making systems have the potential to be fairer than, but also the risk of being less fair than, the existing systems or humans that they will be augmenting or replacing.
AI systems often learn from real-world data; hence an ML model can learn or even amplify problematic
pre-existing data bias. Such bias can potentially favour or disfavour certain groups of people, objects,
concepts or outcomes. Even given seemingly unbiased data, the most rigorous cross-functional training
and testing can still result in an ML model with unwanted bias. Furthermore, the removal or reduction
of one kind of bias (e.g. societal bias) can involve the introduction or increase of another kind of bias (e.g. statistical bias)[3]; see the positive impact described in this clause. Bias can have negative, positive or neutral impact.
Before discussing aspects of bias in AI systems, it is necessary to describe the operation of AI systems
and what unwanted bias means in this context. An AI system can be characterized as using knowledge
to process input data to make predictions or take actions. The knowledge within an AI system is often
built through a learning process from training data; it consists of statistical correlations observed in
the training dataset. It is essential for both the production data and the training data to relate to the
same area of interest.
The predictions made by AI systems can be highly varied, depending on the area of interest and the
type of the AI system. However, for classification systems, it is useful to think of the AI predictions as
processing the set of input data presented to it and predicting that the input belongs to a desired set
or not. A simple example is that of making a prediction relating to a loan application as to whether the
applicant represents an acceptable financial risk or not to the lending organization.
A desirable AI system would correctly predict whether the application represents an acceptable risk
without contributing to systemic exclusion of certain groups. This can mean in some circumstances
taking into account considerations of certain groups, such as ethnicity and gender. There can be an
effect of bias on the resulting environment where the prediction can change the results of subsequent
predictions. Examples of how to determine whether an algorithm has unwanted bias according to the
metrics defined in Clause 7 are given in Annex A.
Uncovering bias can involve defining appropriate criteria and analysing trade-offs associated with
these criteria. Given particular criteria, this document describes methodologies and mechanisms for
uncovering and treating bias in AI systems.
Classification (a type of supervised learning) and clustering (a type of unsupervised learning)
algorithms cannot function without bias. If all subgroups are to be treated equally, then these kinds of
algorithms would have to label all outputs the same (resulting in only one class or cluster). However,
investigation would be necessary to assess whether the impact of this bias is positive, neutral or
negative according to the system goals and objectives.
Examples of positive, neutral and negative effects of bias are as follows:
— Positive effect: AI developers can introduce bias to ensure a fair result. For example, an AI system
used for hiring a specific type of worker can introduce a bias towards one gender over another in
the decision phase to compensate for societal bias inherited from the data, which reflects their
historical underrepresentation in this profession.
— Neutral effect: The AI system for processing images for a self-driving car system can systematically
misclassify “mailboxes” as “fire hydrants”. However, this statistical bias will have neutral impact, as
long as the system has an equally strong preference for avoiding each type of obstacle.
— Negative effect: Examples of negative impacts include AI hiring systems favouring candidates
of one gender over another and voice-based digital assistants failing to recognize people with
speech impairments. Each of these instances can have unintended consequences of limiting the
opportunities of those affected. While such examples can be categorized as unethical, bias is a
wider concept that applies even in scenarios with no adverse effect on stakeholders, for example, in
the classification of galaxies by astrophysicists.
One challenge with determining the relevance of bias is that what constitutes negative effect can depend
on the specific use case or application domain. For example, age-based profiling can be considered
unacceptable in job application decisions. However, age can play a critical role in evaluation of medical
procedures and treatment. Appropriate customization specific to the use case or application domain
can be considered.
In ML systems, the outcome of any single operation is based upon correlations between features in
the input domain and previously observed outputs. Any incorrect outputs (including for example,
automated decisions, classifications and predicted continuous variables) are potentially due to poor
generalization, the outputs used to train the ML model and the hyperparameters used to calibrate it.
Statistical bias in the ML model can be introduced inadvertently or due to bias in the data collection
and modelling process. In symbolic AI systems, human cognitive bias can lead to specifying explicit
knowledge inaccurately, for example specifying rules that apply to oneself, but not the target user, due
to in-group bias.
Another concern about bias is the ease with which it can be propagated into a system, after which it can
be challenging to recognize and mitigate. An example of this is where data reflects a bias that exists
already in society and this bias becomes part of a new AI system that then propagates the original bias.
Organizations can consider the risk of unwanted bias in datasets and algorithms, including those that at first glance appear harmless and safe. In addition, once attempts at removing unwanted bias have been made, unintended categorization and unsophisticated algorithms have the potential to perpetuate or amplify existing bias. As a consequence, unwanted bias mitigation is not a "set-and-forget" process.
For example, a resume review algorithm that favours candidates with years of continuous service
would automatically disadvantage carers who are returning to the workforce after having taken time
off work for caring responsibilities. A similar algorithm can also downgrade casual workers whose
working history consists of many short contracts for a wide variety of employers: a characteristic that
can be misinterpreted as negative. Careful re-evaluation of the newly achieved outcomes can follow any
unwanted bias reduction and retraining of the algorithm.
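The following hypothetical sketch is not taken from the document; the scoring rule, field names and weights are invented. It only illustrates how a seemingly neutral engineering choice such as rewarding continuous service encodes the bias described above:

```python
# Hypothetical resume-scoring rule; all fields and weights are invented.
def score_resume(years_experience, longest_gap_months, num_employers):
    score = years_experience * 1.0
    score -= longest_gap_months * 0.5        # penalizes career breaks (e.g. carers)
    score -= max(0, num_employers - 3) * 2.0 # penalizes many short contracts
    return score

# Otherwise comparable candidates are ranked differently purely because of
# career breaks or casual-work history.
print(score_resume(10, 0, 2))    # continuous service
print(score_resume(10, 18, 2))   # returning carer
print(score_resume(10, 0, 8))    # casual worker with many employers
```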
The more automated the system and the less effective the human oversight, the greater the likelihood of unintended negative consequences. This situation is compounded when multiple AI applications contribute to the automation of a given task. In such multi-application AI systems, greater demand for transparency and explainability regarding the outcomes they produce can be anticipated by the organizations deploying them.
5.3 Overview of fairness
Fairness is a concept that is distinct from, but related to bias. Fairness can be characterized by the
effects of an AI system on individuals, groups of people, organizations and societies that the system
influences. However, it is not possible to guarantee universal fairness. Fairness as a concept is complex,
highly contextual and sometimes contested, varying across cultures, generations, geographies and
political opinions. What is considered fair can be inconsistent across these contexts. This document
thus does not define the term fairness because of its highly socially and ethically contextual nature.
Even within the context of AI, it is difficult to define fairness in a manner that will apply equally well
to all AI systems in all contexts. An AI system can potentially affect individuals, groups of people,
organizations and societies in many undesirable ways. Common categories of negative impacts that can
be perceived as “unfair” include:
— Unfair allocation: occurs when an AI system unfairly extends or withholds opportunities or
resources in ways that have negative effects on some parties as compared to others.
— Unfair quality of service: occurs when an AI system performs less well for some parties than for
others, even if no opportunities or resources are extended or withheld.
— Stereotyping: occurs when an AI system reinforces existing societal stereotypes.
— Denigration: occurs when an AI system behaves in ways that are derogatory or demeaning.
— “Over“ or “under“ representation and erasure: occurs when an AI system over-represents or under-
represents some parties as compared to others, or even fails to represent their existence.
Bias is just one of many elements that can influence fairness. It has been observed that biased inputs do
not always result in unfair predictions and actions, and unfair predictions and actions are not always
caused by bias.
An example of a biased decision system that can nonetheless be considered fair is a university hiring
policy that is biased in favour of people with relevant qualifications, in that it hires a far greater
proportion of holders of relevant qualifications than the proportion of relevant qualification holders
in the population. As long as the determination of relevant qualifications does not discriminate against
particular demographics, such a system can be considered fair.
An example of an unbiased system that can be considered unfair is a policy that indiscriminately rejects all candidates. Such a policy would indeed be unbiased, as it does not differentiate between any categories. But it would be perceived as unfair by people with relevant qualifications.
This document distinguishes between bias and fairness. Bias can be societal or statistical, can be
reflected in or arise from different system components (see Clause 6) and can be introduced or
propagated at different stages of the AI development and deployment life cycle (see Clause 8).
Achieving fairness in AI systems often means making trade-offs. In some cases, different stakeholders
can have legitimately conflicting priorities that cannot be reconciled by an alternative system design.
As an example, consider an AI system that decides the award of scholarships to some of the graduate
programme applicants in a university. The diversity stakeholder in the admissions office wants the AI
system to provide a fair distribution of such awards to applications from various geographic regions.
On the other hand, a professor, who is another stakeholder, wants a particular deserving student
interested in a particular research area to be awarded the scholarship. In such a case, there is a
possibility that the AI system denies a deserving candidate from a particular region in order to meet the
research objectives. Thus, meeting the fairness expectations of all stakeholders is not always possible.
It is therefore important to be explicit and transparent about those priorities and any underlying
assumptions, in order to correctly select the relevant metrics (see Clause 7).
6 Sources of unwanted bias in AI systems
6.1 General
This clause describes possible sources of unwanted bias in AI systems. This includes human cognitive
bias, data bias and bias introduced by engineering decisions. Figure 1 shows the relationship between
these high-level groups of biases. The human cognitive biases (6.2) can cause bias to be introduced
through engineering decisions (6.4), or data bias (6.3).
Figure 1 — Relationship between high-level groups of bias
For example, written or spoken language contains societal bias which can be amplified by word embedding models[4]. Because societal bias is reflected in existing language that is used as training
data, it in turn causes non-representative sampling data bias (described in 6.3.4), which can lead to
unwanted bias. This relationship is shown in Figure 2.
Figure 2 — Example of societal bias manifesting as unwanted bias
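As an illustration outside the ISO/IEC text, one common way this effect is probed is to project occupation words onto a "gender direction" in the embedding space. The sketch below uses invented placeholder vectors; in practice the vectors would come from a trained word embedding model.

```python
import numpy as np

# Invented 3-dimensional placeholder "embeddings"; real models use hundreds
# of dimensions learned from text that can carry societal bias.
vectors = {
    "he":       np.array([ 0.9, 0.1, 0.0]),
    "she":      np.array([-0.9, 0.1, 0.0]),
    "nurse":    np.array([-0.6, 0.5, 0.2]),
    "engineer": np.array([ 0.7, 0.4, 0.1]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# A simple "gender direction": the difference between the pronoun vectors.
gender_direction = vectors["he"] - vectors["she"]

# Systematically skewed projections of occupation words onto this direction
# indicate societal bias carried into (and potentially amplified by) the model.
for word in ("nurse", "engineer"):
    print(word, round(cosine(vectors[word], gender_direction), 3))
```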
Systems are likely to exhibit multiple sources of bias simultaneously. Analysing a system to detect
one source of bias is unlikely to uncover all. In the same example, multiple models are used for
natural language processing. The outputs of the word embedding model that may be affected by
non-representative sampling bias are then further processed by a secondary model. In this case, the
secondary model is vulnerable to bias in feature engineering because a choice was made to use word
embeddings as features of this model.
Not all sources of bias start with human cognitive biases; bias can be caused exclusively by data
characteristics. For example, sensors that are attached to a system may fail and produce signals that
can be considered outliers (see 6.3.10). This data, when used for training or reinforcement learning, can
introduce unwanted bias. This is shown in Figure 3.
Figure 3 — Example of data characteristics manifesting as unwanted bias
6.2 Human cognitive biases
6.2.1 General
Human beings can be biased in different ways, both consciously and unconsciously, and are influenced by the data, information and experiences available to them for making decisions[5]. Thinking is often
based on opaque processes that lead humans to make decisions without always knowing what leads
to them. These human cognitive biases affect decisions about data collection and processing, system
design, model training and other development decisions that individuals make, as well as decisions
about how a system is used.
6.2.2 Automation bias
AI assists automation of analysis and decision-making in various systems, for example in self-driving
cars and health-care systems, that can invite automation bias. Automation bias occurs when a human
decision-maker favours recommendations made by an automated decision-making system over
information made without automation, even when the automation makes errors.
6.2.3 Group attribution bias
Group attribution bias occurs when a human assumes that what is true for an individual or object is
also true for everyone, or all objects, in that group. For example, the effects of group attribution bias
can be exacerbated if a convenience sample is used for data collection. In a non-representative sample,
attributions can be made that do not reflect reality. This is also a type of statistical bias.
6.2.4 Implicit bias
Implicit bias occurs when a human makes an association or assumption based on their mental models
and memories. For example, when building a classifier to identify wedding photos, an engineer can use
the presence of a white dress in a photo as a feature. However, white dresses have been customary only
during certain eras and in certain cultures.
6.2.5 Confirmation bias
Confirmation bias occurs when hypotheses, regardless of their veracity, are more likely to be confirmed
by the intentional or unintentional interpretation of information.
For example, ML developers can inadvertently collect or label data in ways that influence an outcome
supporting their existing beliefs. Confirmation bias is a form of implicit bias.
Experimenter's bias is a form of confirmation bias where an experimenter continues training models
until a pre-existing hypothesis is confirmed.
Human cognitive bias, and in particular confirmation bias, can cause various other biases, for example
selection bias (6.3.2) or bias in data labels (6.3.3).
Another example is “What You See Is All There Is” (WYSIATI) bias. This occurs when a human looks
for information that confirms their beliefs, overlooks contradicting information and draws conclusions
based on what is familiar[6].
6.2.6 In-group bias
In-group bias occurs when showing partiality to one's own group or own characteristics. For example,
if testers or raters consist of the system developer's friends, family or colleagues, then in-group bias can
invalidate product testing or the datas
...
ISO/IEC TR 24027:2021 makes important contributions in the field of information technology, focusing specifically on bias in artificial intelligence (AI) systems and AI-aided decision-making. Its scope is highly relevant, since it addresses bias across all phases of the AI system lifecycle, from data collection through to the final evaluation of decisions. One of the document's main strengths is its definition and description of measurement techniques and methods for assessing bias. By providing a rigorous methodological framework, it aims not only to identify bias-related vulnerabilities but also to propose concrete ways to mitigate them. This is particularly important in a context where AI-aided decisions can have profound consequences for individuals and societies. The document also recognizes the importance of continual learning, an essential aspect of ensuring that AI systems remain fair and equitable as they evolve. By integrating these dimensions into the design, testing and use of AI systems, ISO/IEC TR 24027:2021 contributes to greater transparency and accountability in the use of artificial intelligence. In summary, ISO/IEC TR 24027:2021 offers a structured framework for addressing bias in AI systems and decision-making, reflecting a strong commitment to promoting ethical and responsible practices in AI development. Its relevance and practical applicability make it an essential reference for professionals working with AI systems.
ISO/IEC TR 24027:2021 addresses the problem of bias in artificial intelligence (AI) systems and AI-aided decision-making. The document presents measurement techniques and methods for treating bias-related vulnerabilities at the various stages of the AI system lifecycle, including data collection, training, continual learning, design, testing, evaluation and use. This comprehensive approach is essential for understanding how AI systems can be affected by bias, and it promotes the development of fairer and more transparent AI. A key strength of the document is that it provides concrete methodologies for measuring and assessing bias, giving developers and organizations practical guidance for reducing the impact of bias on AI-aided decisions and for designing and operating AI solutions more responsibly. In addition, by establishing an international reference for recognizing and managing the societal impact of AI technology, ISO/IEC TR 24027:2021 helps build trust between technology providers and users. In conclusion, ISO/IEC TR 24027:2021 is an important reference for addressing bias in AI systems and plays an essential role in establishing fair and trustworthy decision-making processes as AI technology advances.
ISO/IEC TR 24027:2021 is a significant document in the field of information technology that deals in depth with bias in AI systems and in AI-aided decision-making processes. Its scope is comprehensive and covers all phases of the AI system lifecycle, from data collection and training through continual learning, design, testing, evaluation and use. This holistic view matters because it ensures that bias is not only detected and treated at one particular stage of the system but is considered throughout. A notable feature of ISO/IEC TR 24027:2021 is its focus on measurement methods and techniques for assessing bias. This is particularly relevant because bias in AI models can have far-reaching social and ethical implications. By providing clear assessment methods, the document enables developers and stakeholders in the AI industry to address bias systematically and on a sound basis, helping to identify and remove potential weaknesses in newly built AI applications. The document is also relevant in a context where AI technologies are increasingly integrated into critical decision-making processes. Since AI systems are used in sensitive areas such as health, finance and criminal justice, understanding and treating bias is essential for ensuring fair and transparent decisions, and ISO/IEC TR 24027:2021 gives actors in these areas valuable tools for maintaining a high level of responsibility and ethics. Overall, ISO/IEC TR 24027:2021 stands out as a foundational document that not only addresses the current state of the technology but also offers a forward-looking framework for dealing with the challenges of bias in AI. Both its broad scope and its pragmatic approaches to measuring and treating bias make it a valuable resource for the entire industry.
The ISO/IEC TR 24027:2021 standard is a critical document that effectively addresses bias within the context of artificial intelligence (AI) systems, particularly in AI-aided decision-making processes. Its comprehensive scope spans all phases of the AI system lifecycle, ensuring that organizations can identify and mitigate bias from the onset through to implementation and continual learning. One of the key strengths of this standard is its focus on measurement techniques and methods for assessing bias. By providing clear guidelines on how to evaluate bias-related vulnerabilities, ISO/IEC TR 24027:2021 equips stakeholders with practical tools necessary for achieving fairness in AI systems. This is particularly relevant as businesses increasingly rely on AI solutions that impact significant decisions, thereby highlighting the importance of elevated ethical standards and accountability. Moreover, the standard emphasizes the critical phases of data collection, training, design, testing, evaluation, and use of AI systems. This holistic approach ensures that bias is not only recognized but actively managed throughout each stage of the AI lifecycle. It encourages organizations to adopt a proactive stance in mitigating bias, elevating the quality and fairness of AI decisions. Given the growing scrutiny of AI technologies in both public and private sectors, the relevance of ISO/IEC TR 24027:2021 cannot be overstated. As AI continues to evolve, this standard serves as a foundational document that offers much-needed guidance in developing unbiased AI systems and fostering trust in AI-aided decision-making. In summary, ISO/IEC TR 24027:2021 is an essential standard that addresses bias within AI systems and provides comprehensive methods for assessment and remediation. Its focus on the entire lifecycle of AI, coupled with actionable measurement approaches, makes it an invaluable resource for organizations aiming to harness the benefits of AI while upholding ethical standards.
ISO/IEC TR 24027:2021 is an important document on bias in artificial intelligence (AI) systems and AI-aided decision-making. It details techniques and methods for measuring and assessing bias in AI systems, with the aim of addressing and treating bias-related vulnerabilities, and its scope of application is notably broad. A strength of the document is that it considers every stage of the AI system lifecycle: it presents ways to measure and manage bias across data collection, training, continual learning, design, testing, evaluation and use. This comprehensive approach allows developers and organizations to reduce the influence of bias from the earliest stages of an AI system. Furthermore, ISO/IEC TR 24027:2021 provides practical guidance for ensuring fairness in AI, helping organizations respond proactively to the ethical challenges they face when adopting it. Such guidance is especially valuable in today's technology environment, where the effects of data bias are a significant concern. In conclusion, ISO/IEC TR 24027:2021 is an essential reference for deepening the understanding of bias in AI systems and promoting its effective management, and it deserves attention from all relevant stakeholders.