ISO/IEC 25389:2025
International Standard
First edition
2025-06

Information technology — The safe framework

This document provides a framework of recommendations for organizations that offer a public-facing digital product or service for which they conduct trust and safety operations to control or manage content- and conduct-related risks. This document also includes recommendations for assessing the implementation of practices for addressing content- and conduct-related risks.
© ISO/IEC 2025
All rights reserved. Unless otherwise specified, or required in the context of its implementation, no part of this publication may
be reproduced or utilized otherwise in any form or by any means, electronic or mechanical, including photocopying, or posting on
the internet or an intranet, without prior written permission. Permission can be requested from either ISO at the address below
or ISO’s member body in the country of the requester.
ISO copyright office
CP 401 • Ch. de Blandonnet 8
CH-1214 Vernier, Geneva
Phone: +41 22 749 01 11
Email: copyright@iso.org
Website: www.iso.org
Published in Switzerland
© ISO/IEC 2025 – All rights reserved
Contents

Foreword
Introduction
1 Scope
2 Normative references
3 Terms and definitions
4 Digital trust and safety
5 Commitments and practices
5.1 Product development
5.2 Product governance
5.3 Product enforcement
5.4 Product improvement
5.5 Product transparency
6 Assessment framework
6.1 Scoping
6.2 Tailoring
6.2.1 Tailoring methodology
6.2.2 Evaluating the organization’s size and scale
6.2.3 Evaluating the impact of the product or digital service
6.2.4 Determine the initial recommended assessment level
6.2.5 Factor in additional business landscape considerations
6.3 Assessment execution
6.3.1 Assessment methodology
6.3.2 Discover
6.3.3 Identify
6.3.4 Assess
6.3.5 Test
6.3.6 Report
Annex A (informative) Illustrative examples of the tailoring framework
Annex B (informative) Risk Profile Questionnaire
Annex C (informative) Summary of differences between L1, L2, and L3 assessments
Annex D (informative) Sample information discovery form
Annex E (informative) Question Bank
Annex F (informative) Illustrative example: product area report template
Bibliography
Foreword
ISO (the International Organization for Standardization) and IEC (the International Electrotechnical
Commission) form the specialized system for worldwide standardization. National bodies that are
members of ISO or IEC participate in the development of International Standards through technical
committees established by the respective organization to deal with particular fields of technical activity.
ISO and IEC technical committees collaborate in fields of mutual interest. Other international organizations,
governmental and non-governmental, in liaison with ISO and IEC, also take part in the work.
The procedures used to develop this document and those intended for its further maintenance are described
in the ISO/IEC Directives, Part 1. In particular, the different approval criteria needed for the different types
of document should be noted (see www.iso.org/directives or www.iec.ch/members_experts/refdocs).
ISO and IEC draw attention to the possibility that the implementation of this document may involve the
use of (a) patent(s). ISO and IEC take no position concerning the evidence, validity or applicability of any
claimed patent rights in respect thereof. As of the date of publication of this document, ISO and IEC had not
received notice of (a) patent(s) which may be required to implement this document. However, implementers
are cautioned that this may not represent the latest information, which may be obtained from the patent
database available at www.iso.org/patents and https://patents.iec.ch. ISO and IEC shall not be held
responsible for identifying any or all such patent rights.
Any trade name used in this document is information given for the convenience of users and does not
constitute an endorsement.
For an explanation of the voluntary nature of standards, the meaning of ISO specific terms and expressions
related to conformity assessment, as well as information about ISO's adherence to the World Trade
Organization (WTO) principles in the Technical Barriers to Trade (TBT), see www.iso.org/iso/foreword.html.
In the IEC, see www.iec.ch/understanding-standards.
This document was prepared by the Digital Trust & Safety Partnership (DTSP) (as The Safe Framework
Specification) and drafted in accordance with its editorial rules. It was adopted, under the JTC 1 PAS
procedure, by Joint Technical Committee ISO/IEC JTC 1, Information technology.
Any feedback or questions on this document should be directed to the user’s national standards
body. A complete listing of these bodies can be found at www.iso.org/members.html and
www.iec.ch/national-committees.
Introduction
Digital services are increasingly central to our daily lives, facilitating social discourse, economic activity,
and much more. These services provide powerful tools for users across the globe to engage in a wide range
of valuable online activity. But like any tool, they can also be misused to facilitate harmful behavior and
content. Awareness of and action against this misuse has grown in recent years, which has led to increasing
urgency in understanding, supporting, and evaluating effective ways to reduce harms associated with
online content and behavior, while also protecting people’s ability to express themselves, carry out business,
access information, associate, work, study, and participate in their communities through digital services.
Striking this balance presents a considerable challenge. To begin, there is no one-size-fits-all approach to
handling online content and associated behavioral risks or, more generally, to organizations’ trust and safety
operations. Depending on the nature of the digital service, each may face unique risks relative to the various
products or features they provide – different threats, different vulnerabilities, and different consequences.
Products or features may engage with end users directly or indirectly, as well as with other services or
businesses. What is an effective practice for one digital service may not suit another, and highly prescriptive
or rigid approaches to defining trust and safety practices are likely to be too broad, too narrow, or to have
negative unintended consequences. Further, risks change over time, and so approaches to mitigating them
must also have room to evolve.
Given the diversity of digital services, it is important to define an overall framework and set of aims for what
constitutes a responsible approach to managing content- and conduct-related risks, to which digital services
can then map their specific practices. This flexible approach has been deployed in other domains, such as
information security, yet existing frameworks are not sufficiently concrete to be applied when it comes to
addressing harmful behavior and content online.
This document aims to fill this need by offering a framework of commitments to address content- and
conduct-related risks. While the overarching commitments are uniform, the method by which they are
fulfilled – whether by application of the illustrative practices in this document or alternatives – will vary by
digital product or feature and evolve with both the challenges faced and advances made in the field of trust
and safety.
This document also provides recommendations for organizations to evaluate the maturity of their
implementation of these commitments through a rigorous and flexible approach to assessment.
This document is for the internal use of the organization responsible for trust and safety operations for
a digital product or service. Recommendations for public reporting about the commitments and their
implementation are outside the scope of this document.
This document is not a management system standard. Nor does it consider issues of information security,
privacy, or data management, which are addressed by existing international standards.
International Standard ISO/IEC 25389:2025(en)
Information technology — The safe framework
1 Scope
This document provides a framework of recommendations for organizations that offer a public-facing digital
product or service for which they conduct trust and safety operations to control or manage content- and
conduct-related risks.
This document also includes recommendations for assessing the implementation of practices for addressing
content- and conduct-related risks.
2 Normative references
There are no normative references in this document.
3 Terms and definitions
For the purposes of this document, the following terms and definitions apply.
ISO and IEC maintain terminology databases for use in standardization at the following addresses:
— ISO Online browsing platform: available at https://www.iso.org/obp
— IEC Electropedia: available at https://www.electropedia.org/
3.1
abuse
use of a product or service in a way that violates the provider’s product governance (3.13), generally because
it creates or increases the risk of harm to a person or group or tends to undermine the purpose, function or
quality of the service.
3.2
assessments
methods to evaluate policies and operations for accuracy, changing user practices, emerging harms,
effectiveness and process improvement
3.3
best practices
examples of practices embodying the commitments to product development, product governance,
enforcement, improvement, and transparency
3.4
commitment
the actions taken by an organization to identify and address content- and conduct-related risk (3.8)
3.5
community guidelines
content policy
acceptable use policy
the set of conditions and limitations governing use of a digital service that a user must agree to as a
condition of use.
Note 1 to entry: These are generally written in plain and concrete language compared to legal language used in terms
of service (3.18).
3.6
conduct
user behavior facilitated by a digital product or service
Note 1 to entry: Behavior may take place entirely online, or take place offline but be mediated by the use of the digital
product or service.
3.7
content
text, images, audio and videos which are accessed by users via a digital product or service
Note 1 to entry: Content may be created or generated by other users, AI, publishers or other entities before being made
available for a user to access.
3.8
content- and conduct-related risk(s)
the possibility of certain illegal, dangerous, or otherwise harmful content or behavior, including risks to
human rights, which are prohibited by relevant policies and terms of service
3.9
control
measure that maintains and/or modifies risk
Note 1 to entry: Risk controls include, but are not limited to, any process, policy, device, practice, or other conditions
and/or actions which maintain and/or modify risk.
Note 2 to entry: Risk controls do not always exert the intended or assumed modifying effect.
[SOURCE: ISO 31073:2022, 3.3.33]
3.10
digital product
a product offered by one party to another party by means of digital hardware or software technology, or
both, including communication over a network
3.11
digital service
a service offered by one party to another party by means of digital hardware or software technology, or
both, including communication over a network
[SOURCE: ISO/IEC TS 5928:2023]
3.12
monthly active registered users
the number of users with a registered account who logged in or otherwise authenticated to visit the
product website, mobile website, or desktop or mobile application within the last 30 days from the date of
measurement
3.13
product governance
the set of agreements, rules, and guidelines mediating user interaction with the digital service and
structuring conduct related to the product (examples include terms of service, privacy policy, community
guidelines, content policy, acceptable use policy, codes of conduct, and any organizational processes by
which these governing statements are created, adopted, or iterated).
3.14
question bank
set of questions that can be used by an organization to understand and identify the specific practices it uses
to mitigate content- and conduct-related risks.
3.15
risk
effect of uncertainty on objectives
Note 1 to entry: In the context of this document, risk can be expressed as the effect of uncertainty on the implementation
of commitments.
Note 2 to entry: In the context of this document, risk is associated with content- and conduct-related risk (3.8).
[SOURCE: ISO/IEC 27000:2018, 3.61]
3.16
risk assessment
overall process of risk identification, risk analysis and risk evaluation
[SOURCE: ISO 31073:2022, 3.3.8]
3.17
risk profile questionnaire
yes/no questions used to measure the extent to which product or service features and policies implicate
content- or conduct-related risks (3.8).
3.18
terms of service
rules by which users agree to abide in order to use a service
[SOURCE: ISO 32110:2023, 3.4.7]
3.19
trust and safety
field and practices that manage challenges related to content- and conduct-related risk (3.8), including but
not limited to consideration of safety-by-design, product governance (3.13), risk assessment (3.16), detection
and response, quality assurance, and transparency
Note 1 to entry: “trust and safety” is a term used throughout the digital products and services industry that has a
definition that is distinct from ISO/IEC definitions for trustworthiness and safety, which have been referenced in this
document for completeness and to avoid confusion.
Note 2 to entry: For the ISO/IEC definition of trustworthiness, see ISO/IEC TS 5723.
Note 3 to entry: For the ISO/IEC definition of safety, see ISO/IEC Guide 51.
3.20
user volume
average monthly active registered users over the past twelve months.
4 Digital trust and safety
For an organization which provides a digital product or service, digital trust and safety refers to the part of
that organization that focuses on understanding and addressing the harmful content or conduct potentially
associated with that service.
Content- and conduct-related risks are sources of hazard and harm created by user-generated content and
conduct. They are distinct from other risks in digital products and services that can be managed through
existing international standards, including but not limited to information security, privacy, data
management, and artificial intelligence.
Each organization is guided by its own values, product aims, digital tools, and human-led processes to make
decisions about how to enable a broad range of human expression and conduct, while working to identify
and prevent harmful content or conduct. Despite these individual approaches, a shared framework of
practices and assessments can help advance the development of industry best practices to ensure consumer
trust and safety when using digital products and services.
Figure 1 shows how commitments, practices, and assessment are used by an organization to ensure
digital trust and safety in a world of changing threats and regulatory expectations.
Figure 1 — The role of assessment in the development and ongoing maintenance of digital trust
and safety
NOTE 1 International standards in the ISO/IEC 27000 series provide requirements and guidance for information
security. ISO/IEC 29100 provides a high-level privacy framework. For artificial intelligence, see ISO/IEC 22989
and the ISO/IEC 42000 series of standards relating to AI management systems.
NOTE 2 ISO/IEC Guide 51 provides guidelines for inclusion of safety aspects in standards. The Safe Framework, in
the context of ISO/IEC Guide 51, is a group safety standard, “comprising safety aspects applicable to several products
or systems, or a family of similar products or systems,” applicable to digital products and services.
5 Commitments and practices
The organization should account for content- and conduct-related risk in the domains of product
development, governance, enforcement, improvement, and transparency, and assign responsibilities and
resources in each domain.
The organization should demonstrate its commitments through investment in and development of relevant
personnel and technology; adoption of rights-respecting trust and safety principles and considerations in the
development, governance, enforcement, and improvement of products; and the appropriate documentation
of digital products and services.
Across the commitments, 35 best practices have been identified that are non-exhaustive examples of the
kinds of activities and processes that an organization may have in place to mitigate risk and achieve tolerable
risk for the service. These sample practices are summarized in Figure 2.
Figure 2 — Inventory of 35 Best Practices
5.1 Product development
The organization should identify, evaluate, and adjust for content- and conduct-related risks in product
development.
The organization may consider the following examples of practices embodying a commitment to evaluate
and adjust for content- and conduct-related risks in product development:
a) Developing insight and analysis capabilities to understand patterns of abuse and identify preventive
mitigations that can be integrated into products
b) Including trust and safety team or equivalent stakeholder in the product development process at an
early stage, including through communication and meetings, soliciting and incorporating feedback as
appropriate
c) Designating a team or manager as accountable for integrating trust and safety feedback
d) Evaluating trust and safety considerations of product features, balancing usability and the ability to
resist abuse
e) Using in-house or third-party teams to conduct risk assessments to better understand potential risks
f) Providing for ongoing pre-launch feedback related to trust and safety considerations
g) Providing for post-launch evaluation by the team accountable for managing risks and those responsible
for managing the product or in response to specific incidents
h) Iterating product in light of trust and safety considerations including based on user feedback or
other observed effects, including ensuring that the perspectives of minority and underrepresented
communities are represented
i) Adopting appropriate technical measures that help users to control their own product experience where
appropriate (such as blocking or muting)
5.2 Product governance
The organization should adopt explainable processes for product governance, including which team is
responsible for creating rules and how rules are evolved.
Product governance includes external and internal rules and processes by which an organization fosters
certain activities and discourages others in relation to its product(s). This function exists in addition to
compliance with and mitigation of risk related to applicable laws. One embodiment of product governance
is an organization’s terms of service (and for multi-product companies, sometimes multiple terms) — the
contract between users and the organization that sets forth underlying expectations and boundaries.
Additionally, some organizations may maintain additional rules that more directly address acceptable
conduct, often in more plain and concrete language. These may be called rules, community guidelines,
acceptable use policies, or content policies. Their drafting and evolution may draw on user communities, or a
combination of stakeholders with varied relationships to the organization.
The organization may consider the following examples of practices embodying a commitment to adopt
explainable processes for product governance:
a) Establishing a team or function that develops, maintains, and updates the organization’s corpus of
content, conduct, or acceptable use policies
b) Instituting processes for taking user considerations into account when drafting and updating relevant
product governance
c) Developing user-facing policy descriptions and explanations in easy-to-understand language
d) Creating mechanisms to incorporate user community input and user research into policy rules
e) Working with recognized third-party civil society groups and experts for input on policies
f) Documenting for internal use the interpretation of policy rules and their application based on precedent
or other forms of investigation, research, and analysis
g) Facilitating self-regulation by the user or community to occur where appropriate, for example by
providing forums for community-led governance or tools for community moderation, and finding
opportunities to educate users on policies, for example, when they violate the rules
5.3 Product enforcement
The organization should conduct enforcement operations to implement product governance.
The organization may consider the following examples of practices embodying a commitment to conduct
enforcement operations to implement product governance:
a) Ensuring the organization has personnel and technological infrastructure to manage content- and
conduct-related risks, to which end the organization may:
— Constitute roles and teams within the organization accountable for policy creation, evaluation,
implementation, and operations
— Develop and review operational infrastructure facilitating the sorting of reports of violations and
escalation paths for more complex issues
— Determine how technology tools related to trust and safety will be provisioned (i.e., build, buy, adapt,
collaborate)
b) Formalizing training and awareness programs to keep pace with dynamic online content and related
issues, to inform the design of associated solutions
c) Investing in wellness and resilience of teams dealing with sensitive materials, such as tools and
processes to reduce exposure, employee training, rotations on/off content review, and benefits like
counseling
d) Where feasible and appropriate, identifying areas where advance detection, and potentially intervention,
is warranted
e) Implementing method(s) by which content, conduct, or a user account can be easily reported as
potentially violating policy (such as in-product reporting flows, easily findable forms, or designated
email address)
f) Operationalizing enforcement actions at scale where:
— Standards are set for timely response and prioritization based on factors including the context of the
product, the nature, urgency, and scope of potential harm, likely efficacy of intervention, and source
of report
— Appeals of decisions or other appropriate access to remedy are available
— Appropriate reporting is done outside the organization, such as to law enforcement, in cases of
credible and imminent threat to life
g) Ensuring relevant processes exist that enable users or others to "flag" or report content, conduct, or a
user account as potentially violating policy, and enforcement options on that basis
h) Working with recognized third parties (such as qualified fact checkers or human rights groups) to
identify meaningful enforcement responses
i) Working with industry partners and others to share useful information about risks, where consistent
with legal obligations and security best practices
5.4 Product improvement
The organization should assess and improve processes associated with content- and conduct-related risks.
The organization may consider the following examples of practices embodying a commitment to regularly
assess and improve processes associated with content- and conduct-related risks:
a) Developing assessment methods to evaluate policies and operations for accuracy, changing user
practices, emerging harms, effectiveness and process improvement
b) Establishing processes to ensure policies and operations align with these commitments
c) Using risk assessments to determine allocation of resources for emerging content- and conduct-related risks
d) Fostering communication pathways between the organization, on the one hand, and users and other
stakeholders (such as civil society and human rights groups), on the other, to provide updates on
developments and to gather feedback about the social impact of the product and areas to improve
e) Establishing appropriate remedy mechanisms for users that have been directly affected by moderation
decisions such as content removal, account suspension or termination
5.5 Product transparency
The organization should ensure that appropriate trust and safety policies are published to the public, and
report periodically to the public and other stakeholders regarding actions taken.
The organization may consider the following examples of practices embodying a commitment to publishing
and reporting on relevant trust and safety policies:
a) Publishing periodic transparency reports including data on salient risks and relevant enforcement
practices, which may cover areas including abuses reported, processed, and acted on, and data requests
processed and fulfilled
b) Providing notice to users whose content or conduct is at issue in an enforcement action (with relevant
exceptions, such as legal prohibition or prevention of further harm)
c) Logging incoming complaints, decisions, and enforcement actions according to relevant data policies
d) Creating processes for supporting academic and other researchers working on relevant subject matter
(to the extent permitted by relevant law and consistent with relevant security and privacy standards, as
well as business considerations, such as trade secrets)
e) Where appropriate, creating in-product indicators of enforcement actions taken, including broad public
notice (e.g., icon noting removed content providing certain details), and updates to users who reported
violating content and access to remedies
6 Assessment framework
Organizations may use the assessment framework as necessary to evaluate the relevant people, processes,
and technology that contribute to managing content- and conduct-related risks and that reflect existing
commitments and practices.
Assessment of a product or service through the framework takes place in three phases: scoping, tailoring, and
executing.
6.1 Scoping
The organization can choose to apply the assessment framework to people, processes, and technology that
contribute to managing content- and conduct-related risks and that reflect existing practices.
Each assessment should cover one public-facing digital product or service for which the organization
conducts trust and safety operations, or otherwise implements controls to manage content- or conduct-
related risks.
6.2 Tailoring
Given the diverse nature of the organizations and the digital services provided, there is no “one-size-fits-all”
approach to conducting assessments against the commitments and best practices.
Organizations may tailor the depth of self-assessment to be proportionate to the particular risks and
nuances of the product or service being assessed.
There are three proposed levels of assessment, referred to simply as “Level 1” (or L1), “Level 2” (or L2), and
“Level 3” (or L3) that an organization may undertake to examine trust and safety practices in support of a
particular product, digital service, or function. The Level 3 assessment is designed as the most in-depth in
terms of the breadth and depth of assessment procedures, while Level 1 is less detailed and provides for
more summary-level analysis.
Applying the tailoring framework, each organization can determine whether a Level 1, Level 2, or Level 3
assessment should be performed for a particular product or service during the assessment execution phase.
6.2.1 Tailoring methodology
An organization performs an assessment exercise for its practices related to the five commitments for the
service or product to be assessed. Multiple assessments may be required if the organization offers multiple
products or services.
Application of the tailoring framework consists of four steps:
a) Evaluating the organization’s size and scale
b) Evaluating the impact of the product or digital service
c) Determining the initial recommended assessment level
d) Factoring in additional business landscape considerations
Figure 3 — Elements of the tailoring framework
6.2.2 Evaluating the organization’s size and scale
It is important to establish a set of objective criteria for determining the size and scale of an organization.
This component of the tailoring framework defines inputs for consideration that are indicative of an
organization’s size and scale:
a) Scale - Previous year’s revenue (in Euros)
b) Size - Total number of employees for products/services in scope of assessment
Together, these inputs are measured to categorize each organization into a “low”, “medium”, or “high”
classification. Table 1 below provides the proposed thresholds for defining these buckets, which are subject
to refinement based on the initial self-assessment process.
Table 1 — Organizational size and scale inputs and proposed thresholds for organizational size and
scale classification
Input Low Medium High
Scale - Previous year’s total revenue (in Euros) < €25B €25B - €100B > €100B
Size - Total number of employees < 10,000 10,000 - 100,000 > 100,000
If either of the inputs is High, then the overall categorization of the organization will be High. If both
inputs are Low, then the overall categorization of the organization will be Low. Otherwise, the overall
categorization of the organization will be Medium.
See Annex A (informative) for illustrative examples of this categorization.
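The categorization rule above is mechanical enough to express in code. The following Python sketch is illustrative only and not part of this document; the function name and input representation are assumptions.

```python
def classify_org_size_scale(revenue_eur: float, employees: int) -> str:
    """Categorize an organization per Table 1 (illustrative sketch).

    revenue_eur: previous year's total revenue, in euros.
    employees: total number of employees for products/services in scope.
    """
    # Scale rating: < €25B low, €25B-€100B medium, > €100B high
    if revenue_eur < 25e9:
        scale = "low"
    elif revenue_eur > 100e9:
        scale = "high"
    else:
        scale = "medium"
    # Size rating: < 10 000 low, 10 000-100 000 medium, > 100 000 high
    if employees < 10_000:
        size = "low"
    elif employees > 100_000:
        size = "high"
    else:
        size = "medium"
    # Aggregation: any high -> high; both low -> low; otherwise medium
    if "high" in (scale, size):
        return "high"
    if scale == size == "low":
        return "low"
    return "medium"
```

For example, an organization with €5 billion in revenue and 2 000 employees would be categorized as "low", while one with €150 billion in revenue would be "high" regardless of headcount.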
6.2.3 Evaluating the impact of the product or digital service
Once organizations have applied the size and scale criteria, they evaluate their product or digital service
risk drivers on two axes: user volume and risk profile.
User volume is measured as the average monthly active registered users over the past twelve months. The
broader the audience consuming the content or services of the product, the greater the impact of content-
and conduct-related risks.
Risk profile, measured by a Risk Profile Questionnaire, is used to measure the extent to which a product
or service’s features and policies implicate content- or conduct-related risks. Certain features, such as live
streaming, video sharing, or hosting of user-generated content, can expand the risk landscape. In general, the
more of these features that a product makes available to users, the more complex and broader the set of risks.
The yes/no questions used to develop the risk profile are organized into five groupings related to:
a) Product purpose
b) Intended audience age ranges
c) Intended audience user type
d) Feature set
e) Target market
See Annex B (informative) for the risk profile questionnaire.
Each of the inputs is measured on a “low”, “medium”, or “high” scale. The proposed measurement thresholds
for these inputs are outlined in Table 2 below.
Table 2 — Thresholds for product/digital service impact inputs

Input | Low | Medium | High
User volume (avg. monthly active registered users in the past 12 months) | <100 million | 100-500 million | >500 million
Risk Profile Questionnaire (number of “yes” answers) | <10 | 10-15 | >15
The measurements for each of the inputs are aggregated to determine an overall categorization of the
product/service’s impact. The more inputs that are measured as “high”, the higher the impact or implied risk
that may be attributed to the product or service.
If either of the inputs is High, the overall impact categorization of the product or service will be High. If both inputs are Low, the overall categorization will be Low. Otherwise, the overall categorization will be Medium.
See Annex A (informative) for illustrative examples of this categorization.
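The aggregation rule above can be expressed directly in code; this sketch is illustrative and the function name is an assumption.

```python
def overall_impact(user_volume: str, risk_profile: str) -> str:
    """Combine the two Table 2 inputs per the stated rule:
    any High -> High; both Low -> Low; otherwise Medium."""
    inputs = {user_volume, risk_profile}
    if "high" in inputs:
        return "high"
    if inputs == {"low"}:
        return "low"
    return "medium"
```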
The categorization derived in this step is combined with the categorization from the previous step (evaluating organization size and scale) to determine the initial recommended assessment level, as explained in 6.2.4.
6.2.4 Determine the initial recommended assessment level
The evaluations from 6.2.2 (organizational size and scale) and 6.2.3 (product or service impact) are combined to determine the initial recommended level of assessment, using the matrix depicted in Figure 4 (Assessment leveling matrix) below to determine which of the three levels of assessment is appropriate for the relevant organization and product. Both the organizational size/scale and the product impact should be factored in when determining a proportionate level of assessment.
For example, if an organization is categorized as "high" for organizational size and scale and its product as "high" impact, it is placed in the top-right cell of the matrix, which implies that a Level 3 assessment is recommended.
[Figure 4 is not reproduced here: a 3 × 3 matrix with product or digital service impact (low/medium/high) on one axis and organization size and scale (low/medium/high) on the other, each cell indicating a Level 1, Level 2 or Level 3 assessment.]
Figure 4 — Assessment leveling matrix
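A lookup table is one natural way to implement the Figure 4 leveling matrix. Note that only the (high size/scale, high impact) → Level 3 cell is stated in the text; all other cells below are assumptions for demonstration only and must be taken from the actual matrix.

```python
# Illustrative leveling matrix, keyed by (size_scale, impact).
# Only the ("high", "high") -> 3 cell is confirmed by the text;
# every other cell here is an ASSUMED placeholder value.
ASSESSMENT_MATRIX = {
    ("low", "low"): 1,     ("low", "medium"): 1,     ("low", "high"): 2,     # assumed
    ("medium", "low"): 1,  ("medium", "medium"): 2,  ("medium", "high"): 3,  # assumed
    ("high", "low"): 2,    ("high", "medium"): 3,                            # assumed
    ("high", "high"): 3,   # stated: high size/scale + high impact -> Level 3
}


def initial_assessment_level(size_scale: str, impact: str) -> int:
    """Return the initial recommended assessment level (1, 2 or 3)."""
    return ASSESSMENT_MATRIX[(size_scale, impact)]
```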
6.2.5 Factor in additional business landscape considerations
The final step in applying the tailoring framework involves integrating considerations related to the business landscape in which an organization and product are operating. This is an optional step based on internal, non-public or proprietary factors: an organization may be aware of a factor that justifies a different level of assessment.
It is anticipated that these business landscape considerations would generally be used to increase the
recommended level of assessment, rather than decrease it. The level of assessment should be informed by
any unique circumstances or events that may impact the risks that a particular product or digital service
must navigate.
For example, an organization may be aware of factors that affect the likelihood of risk and therefore merit a more tailored assessment than would otherwise be indicated by the organizational size or the product or service impact (e.g. if the product is due to expand into new markets). In addition, an organization may have information that is not apparent from the initial determination of the assessment level but that may affect the appropriate level.
There are several factors that may affect the level of assessment chosen for a product or service. These factors may include:
a) service in a new market (expansion): the product provides a new service, or serves a new geographic region, for the organization;
b) rapid product changes: significant changes have been made to the product, or new features have been added, in the past year;
c) new merger and acquisition activity, joint venture or partnership: a merger or acquisition completed, or a joint venture or partnership entered into, in the past year that impacts the product;
d) prior assessments/audits: recently completed assessments or audits provide information on strengths or weaknesses in controls similar to the practices in this document;
e) user growth trajectory: measured as the percentage growth of registered users over the past twelve months; a higher growth rate may indicate increased exposure and a more rapidly evolving threat landscape as it relates to content;
f) rapid social or political changes: these can increase the likelihood, scope or severity of content- and conduct-related risks, including increased risks to users, political violence and social unrest.
The specific impact and magnitude of these events can vary widely from organization to organization, and
from product to product. If one or more of these circumstances or events apply, the organization makes a
risk-based determination as to whether an adjustment in the level of assessment is warranted.
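The risk-based determination in this step is a judgment call, not a formula, but its typical outcome (raising the level when factors apply, per the guidance above) can be sketched as follows; the function name and the one-level increment are assumptions for illustration.

```python
def adjust_for_business_landscape(initial_level: int,
                                  applicable_factors: list[str]) -> int:
    """Illustrative sketch of the optional adjustment in 6.2.5.

    If any business landscape factor applies, model the typical outcome
    (raising the assessment level by one, capped at Level 3). The actual
    determination is made by the organization on a risk basis.
    """
    if applicable_factors:
        return min(initial_level + 1, 3)
    return initial_level
```

For example, an organization at Level 2 that is expanding into new markets might move to a Level 3 assessment, while an organization with no applicable factors keeps its initial level.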
6.3 Assessment Execution
After applying the tailoring framework to determine the appropriate assessment level (L1, L2 or L3), the assessment itself is executed. The organization undertakes the assessment according to the scoping in 6.1 above, and is assessed for adherence to the overarching commitments, with specific focus on the organization's selected practices that underpin those commitment areas. The practices are then objectively evaluated across three key dimensions: people, process and technology.
The assessment is designed to help the organization develop a deeper understanding of the implementation
of selected practices to mitigate content- and conduct-related risks. The outcome of the assessment will
help the organization better understand the current state of their capabilities and their dependencies with
respect to people, processes, and technologies.
6.3.1 Assessment Methodology
This subclause details the methodology that the organization may follow to execute each assessment. The assessment process comprises five stages or steps, from initial information gathering or discovery to reporting of findings and results, as shown in Figure 5.
Figure 5 — Assessment methodology
The corresponding activities or procedures performed within each step will differ based on the selected level of depth for the assessment (L1, L2 or L3). For example, a Level 3 assessment may include detailed testing of the effectiveness of specific process controls (e.g. whether target turnaround times for user complaint reviews are being met), while a Level 1 assessment may involve a higher-level review and understanding of processes. See Annex C for a summary of the differentiating attributes of Level 1, Level 2 and Level 3 assessments.
Table 3 — Assessment steps
Step    Description    Objective
...