Compliance Framework

ISO AI Standards

Comprehensive guidance on International Organization for Standardization standards for artificial intelligence, including management systems, risk management, and governance frameworks.


Overview of ISO AI Standards

The International Organization for Standardization has developed a comprehensive suite of standards addressing artificial intelligence systems, governance, and risk management. These standards provide internationally recognized frameworks for organizations seeking to implement responsible AI practices, achieve certification, and demonstrate compliance with evolving regulatory expectations.

ISO AI standards are developed through consensus processes involving experts from multiple countries, industries, and disciplines. This international collaboration ensures that standards reflect diverse perspectives and are applicable across jurisdictions and sectors. Organizations adopting ISO AI standards benefit from alignment with global best practices and recognition by regulators, customers, and business partners worldwide.

Key ISO standards relevant to artificial intelligence include ISO/IEC 42001 addressing AI management systems, ISO/IEC 23894 focusing on risk management, and ISO/IEC 38507 establishing governance frameworks. These standards are complemented by technical standards addressing specific AI capabilities and requirements. Organizations should identify which standards apply to their operations based on AI system characteristics, industry sectors, and stakeholder expectations.

ISO/IEC 42001 AI Management System

ISO/IEC 42001 specifies requirements for establishing, implementing, maintaining, and continually improving an artificial intelligence management system. Published in December 2023, this standard provides a structured approach to managing AI-related risks and opportunities within an organizational context. The standard is certifiable, meaning organizations can undergo independent audits to verify conformity and achieve ISO/IEC 42001 certification.

The standard follows a structure consistent with other ISO management system standards, facilitating integration with existing quality management, information security, and other management systems. Organizations already certified to ISO 9001, ISO/IEC 27001, or similar standards will find familiar high-level structures and terminology, simplifying implementation and reducing duplication of effort.

ISO/IEC 42001 requires organizations to establish the context of the AI management system, including internal and external factors affecting AI activities, interested parties and their requirements, and the scope of the management system. Organizations must define an AI policy that articulates commitments to responsible development and use of AI systems. Top management must demonstrate leadership and commitment by ensuring integration of AI management into business processes, providing necessary resources, and promoting continuous improvement.

The standard addresses planning requirements, including actions to address risks and opportunities, establishment of AI objectives, and planning of changes to the AI management system. Organizations must identify resources necessary for the AI management system, ensure competence of personnel involved in AI activities, raise awareness of the AI policy and relevant obligations, and establish communication processes for internal and external stakeholders.

Operational requirements specify controls for AI system planning, development, deployment, and operation. Organizations must conduct impact assessments for AI systems, implement data governance processes, establish mechanisms for human oversight, and ensure traceability and transparency. The standard requires monitoring and measurement of AI management system performance, internal audits, management reviews, and continual improvement processes.
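The operational controls above can be tracked internally as a simple evidence checklist. The sketch below is illustrative only: the control names and evidence fields are assumptions for demonstration, not the control set prescribed by ISO/IEC 42001.

```python
# Hedged sketch: an internal checklist mapping ISO/IEC 42001 operational
# themes to required evidence. Control names and evidence descriptions
# are illustrative assumptions, not the standard's own control list.

OPERATIONAL_CONTROLS = {
    "impact_assessment": {"required": True, "evidence": "assessment report"},
    "data_governance":   {"required": True, "evidence": "data lineage records"},
    "human_oversight":   {"required": True, "evidence": "override procedure"},
    "traceability":      {"required": True, "evidence": "decision logs"},
}

def missing_evidence(completed: set) -> list:
    """Return controls still lacking evidence before deployment sign-off."""
    return [name for name, ctrl in OPERATIONAL_CONTROLS.items()
            if ctrl["required"] and name not in completed]

print(missing_evidence({"impact_assessment", "data_governance"}))
# ['human_oversight', 'traceability']
```

A real implementation would draw the control set from the organization's statement of applicability rather than a hard-coded dictionary.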

ISO/IEC 23894 Risk Management

ISO/IEC 23894 provides guidance for managing risks specific to artificial intelligence systems. Published in February 2023, this standard adapts the general risk management principles of ISO 31000 to address unique characteristics and challenges of AI technologies. The standard is applicable to organizations of all types and sizes that develop, deploy, or use AI systems.

The standard identifies categories of AI-specific risks that organizations must consider. These include risks arising from the socio-technical nature of AI systems, which operate within complex interactions between technology, humans, and organizational processes. Organizations must assess risks related to data quality and bias, recognizing that AI systems reflect characteristics of training data and may perpetuate or amplify existing biases.

Transparency and explainability risks arise when stakeholders cannot understand how AI systems reach decisions or when opacity prevents effective oversight. Security risks include adversarial attacks designed to manipulate AI system behavior, data poisoning that corrupts training datasets, and model extraction or inversion that compromises confidential information. Organizations must evaluate risks associated with AI system autonomy, particularly where systems operate without continuous human oversight.

ISO/IEC 23894 describes risk management processes adapted for AI contexts. Risk identification should consider AI system characteristics, intended use, deployment environment, and potential impacts on individuals, organizations, and society. Risk analysis evaluates likelihood and consequences of identified risks, considering factors such as system complexity, level of autonomy, and reversibility of decisions. Risk evaluation compares estimated risk levels against organizational risk criteria to determine which risks require treatment.

Risk treatment options include avoiding risk by not developing or deploying certain AI systems, modifying likelihood or consequences through technical or organizational controls, sharing risk through insurance or contractual arrangements, or accepting risk where treatment is not feasible or cost-effective. Organizations must document risk treatment decisions and rationales, monitor effectiveness of implemented controls, and review risk assessments periodically or when material changes occur.
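The identify-analyze-evaluate flow described above can be sketched as a minimal risk register entry. Note that ISO/IEC 23894 does not prescribe a scoring formula; the 1-5 scales, the likelihood-times-impact product, and the treatment threshold below are assumptions chosen for illustration.

```python
from dataclasses import dataclass

# Illustrative sketch only: the scoring model and threshold are
# assumptions, not requirements of ISO/IEC 23894.

@dataclass
class AIRisk:
    name: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)

    @property
    def level(self) -> int:
        # Simple likelihood x impact estimate of risk level.
        return self.likelihood * self.impact

def evaluate(risk: AIRisk, criteria_threshold: int = 10) -> str:
    """Compare estimated risk level against organizational risk criteria."""
    return "treat" if risk.level >= criteria_threshold else "accept"

bias_risk = AIRisk("training-data bias", likelihood=4, impact=4)
print(bias_risk.level, evaluate(bias_risk))  # 16 treat
```

In practice the evaluation step would also record the chosen treatment option (avoid, modify, share, or accept) and its rationale alongside each register entry.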

ISO/IEC 38507 Governance

ISO/IEC 38507 provides guidance for members of governing bodies of organizations on the governance of AI use. This standard, published in April 2022, addresses the unique governance challenges presented by AI technologies. The standard is directed at boards of directors, trustees, and other governing bodies responsible for organizational strategy and oversight.

The standard establishes governance principles specific to AI systems. Governing bodies must ensure that use of AI aligns with organizational objectives, values, and strategy. AI governance should address responsibility, strategy, acquisition, performance, conformance, and human behavior related to AI systems. Governing bodies bear ultimate responsibility for AI-related decisions and their consequences, even when day-to-day management is delegated.

Evaluation responsibilities require governing bodies to assess current and future use of AI in relation to organizational objectives. This includes evaluating proposals for new AI initiatives, reviewing performance of deployed AI systems, and ensuring that AI activities conform to applicable obligations, policies, and standards. Governing bodies must obtain sufficient information to make informed decisions, including technical assessments translated into business terms, risk evaluations, and stakeholder perspectives.

Direction responsibilities involve setting expectations for AI use through policies, principles, and strategic objectives. Governing bodies should establish risk appetite for AI activities, approve significant AI investments and deployments, and provide guidance on ethical considerations. Clear direction enables management to make operational decisions consistent with governing body expectations.

Monitoring responsibilities require governing bodies to verify that management implements direction effectively and that AI systems perform as intended. This includes receiving regular reports on AI initiatives, reviewing significant incidents or issues, and ensuring corrective actions are taken when problems arise. Effective monitoring provides assurance that AI activities remain aligned with organizational strategy and within acceptable risk parameters.

Certification Process

Organizations seeking ISO/IEC 42001 certification must engage an accredited certification body to conduct independent audits of their AI management systems. The certification process typically begins with a Stage 1 audit that reviews documentation and readiness for full assessment. Auditors verify that the organization has established required policies, procedures, and controls, and that documentation adequately describes the AI management system.

Stage 2 audits involve detailed assessment of AI management system implementation and effectiveness. Auditors interview personnel, examine records, observe processes, and test controls to verify conformity with standard requirements. Auditors assess whether the AI management system is properly implemented throughout its defined scope, whether controls operate effectively, and whether the organization demonstrates continual improvement.

Audit findings are classified as conformities, minor nonconformities, or major nonconformities. Minor nonconformities represent isolated lapses or documentation gaps that do not fundamentally compromise system effectiveness. Major nonconformities indicate systematic failures or absence of required controls. Organizations must address identified nonconformities through corrective action plans that eliminate root causes and prevent recurrence.

Upon successful completion of Stage 2 audits and resolution of any nonconformities, certification bodies issue certificates valid for three years. Organizations must undergo surveillance audits at least annually to verify continued conformity. Recertification audits occur at the end of the three-year certification cycle. Organizations must notify certification bodies of significant changes to AI management systems that may affect conformity.
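The three-year cycle described above is simple date arithmetic, and a compliance calendar can be derived from the certification date. The sketch below is illustrative; actual audit timing is agreed with the certification body and may differ.

```python
from datetime import date

def audit_schedule(cert_date: date) -> dict:
    """Derive key dates for a three-year certificate with annual
    surveillance audits. Illustrative only; certification bodies set
    actual schedules. (date.replace would raise for a Feb 29 start.)"""
    return {
        "surveillance_audits": [
            cert_date.replace(year=cert_date.year + 1),
            cert_date.replace(year=cert_date.year + 2),
        ],
        "recertification_due": cert_date.replace(year=cert_date.year + 3),
    }

schedule = audit_schedule(date(2024, 3, 15))
print(schedule["recertification_due"])  # 2027-03-15
```

Surveillance audits must occur at least annually, so the dates above represent latest deadlines rather than fixed appointments.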

Documentation Requirements

ISO AI standards require comprehensive documentation to demonstrate conformity and enable effective operation of management systems. Required documentation includes:

  • AI policy statements that articulate organizational commitments to responsible AI development and use. Policies should address key principles such as fairness, transparency, accountability, safety, and respect for human rights.
  • Scope documentation that defines boundaries of the AI management system, including organizational units, locations, AI systems, and activities covered. Scope statements should explain any exclusions and their justifications.
  • Risk assessment methodologies describing approaches to identifying, analyzing, and evaluating AI-related risks. Methodologies should address likelihood and impact estimation, risk criteria, and evaluation frameworks.
  • Risk registers documenting identified risks, assessment results, treatment decisions, and control implementations. Risk registers should be kept current and accessible to relevant personnel.
  • AI system documentation including specifications, architecture descriptions, data provenance, validation results, deployment conditions, and operating instructions. System documentation enables effective oversight, operation, and maintenance.
  • Impact assessment records evaluating effects of AI systems on individuals, groups, and society. Impact assessments should consider both intended benefits and potential adverse consequences.
  • Monitoring and measurement records demonstrating AI management system performance. Records should include metrics, measurement results, analysis findings, and management actions.

Documentation must be controlled through version management, access controls, and retention schedules. Organizations should establish document management procedures that ensure availability to authorized personnel, protect confidential information, and maintain document integrity over time.
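The document control requirements above (version management, access controls, retention) map naturally onto a structured record per document. The field names in this sketch are illustrative assumptions, not metadata prescribed by ISO/IEC 42001.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ControlledDocument:
    """Hedged sketch of a controlled-document record; field names are
    illustrative, not taken from the standard."""
    doc_id: str
    title: str
    version: str
    approved_by: str
    approval_date: date
    retention_years: int
    access_roles: list = field(default_factory=list)

    def is_retained(self, today: date) -> bool:
        # Crude year-based retention check for illustration.
        return today.year - self.approval_date.year < self.retention_years

policy = ControlledDocument(
    doc_id="POL-001",
    title="AI Policy",
    version="1.2",
    approved_by="Chief AI Officer",
    approval_date=date(2024, 1, 10),
    retention_years=7,
    access_roles=["compliance", "engineering"],
)
```

A document management system would additionally enforce the access roles and log each revision, superseding rather than deleting prior versions.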

Audit Preparation

Effective audit preparation enhances certification success and demonstrates organizational maturity. Organizations should conduct internal audits to identify and address gaps before external certification audits. Internal auditors should be competent in both auditing techniques and AI technologies, enabling them to evaluate technical implementations as well as management system conformity.

Management reviews prior to certification audits verify that top management understands AI management system performance, risks, and improvement opportunities. Reviews should assess whether the AI management system achieves intended outcomes, whether resources are adequate, and whether the system adapts effectively to changes in internal or external contexts.

Personnel training ensures that individuals understand their roles in the AI management system and can articulate how their activities contribute to conformity with standard requirements. Training should address AI management system policies and objectives, relevant risks and controls, and procedures for reporting issues or incidents.

Documentation reviews verify completeness, accuracy, and accessibility of required records. Organizations should test that personnel can efficiently locate and retrieve documentation during audits. Documentation should clearly demonstrate implementation of standard requirements, with explicit links between policies, procedures, controls, and evidence of operation.

Implementation Roadmap

Organizations implementing ISO AI standards should follow structured approaches that build capability progressively. A typical implementation roadmap includes:

  • Initial assessment evaluating current AI governance and risk management practices against standard requirements. Gap analysis identifies areas requiring development or enhancement, enabling realistic planning and resource allocation.
  • Scope definition determining which organizational units, AI systems, and activities will be included in the AI management system. Scope decisions balance comprehensiveness with manageability, often starting with pilot implementations before enterprise-wide rollout.
  • Policy and procedure development establishing frameworks for AI governance and risk management. Policies should be developed collaboratively with stakeholders and approved by appropriate levels of management.
  • Control implementation deploying technical and organizational measures to address identified risks. Implementation should be phased to manage complexity and enable learning from early implementations.
  • Training and awareness building ensuring personnel understand requirements and their roles. Training programs should address multiple audiences with content tailored to technical depth and responsibility levels.
  • Internal audit and management review validating that the AI management system operates effectively before engaging external certification bodies. Early audits identify issues requiring correction and build confidence in readiness for certification.
  • Certification audit engagement with accredited certification bodies leading to certificate issuance upon successful completion.
  • Continual improvement through ongoing monitoring, periodic reviews, and adaptation to changing circumstances. Certified organizations must maintain conformity through continuous operation of the AI management system and implementation of corrective and preventive actions.

Organizations should allocate sufficient time for implementation, typically nine to eighteen months for initial certification depending on organizational size, complexity, and existing management system maturity. Realistic planning accounts for resource constraints, competing priorities, and learning curves associated with new requirements and practices.

Need Help with ISO AI Standards Compliance?

Verdict AI automates compliance documentation and helps you achieve ISO certification for your AI systems.

Get Started