This document provides guidance on how organizations that develop, produce, deploy or use products, systems and services that utilize artificial intelligence (AI) can manage risk specifically related to AI. The guidance also aims to assist organizations in integrating risk management into their AI-related activities and functions. It also describes processes for the effective implementation and integration of AI risk management.
The application of this guidance can be customized to any organization and its context.

This document sets out a review of the current methods and practices (including tools, assets, and conditions of acceptability) for conformity assessment with respect to, among others, products, services, processes, management systems, organizations, or persons, as relevant for the development and use of AI systems. It includes an industry-horizontal (vertical-agnostic) perspective as well as an industry-vertical perspective.

This document focuses only on the process of conformity assessment and gap analysis. It defines the objects of conformity related to AI systems and all other related aspects of the conformity assessment process. The document also reviews to what extent AI poses specific challenges with respect to the assessment of, for example, software engineering, data quality and engineering processes.
This document takes into account requirements and orientations from policy frameworks such as the EU AI strategy and those from CEN and CENELEC member countries.
This document is intended for technologists, standards bodies, regulators and interested parties.