
EU AI Act

Practical guidance on the European Union Artificial Intelligence Act, the world's first comprehensive legal framework for artificial intelligence systems.


Overview of the Act

The European Union Artificial Intelligence Act represents a landmark regulatory framework designed to govern the development, deployment, and use of artificial intelligence systems within the European Union. Adopted by the European Parliament and Council, the Act establishes a comprehensive legal structure that balances innovation with the protection of fundamental rights, safety, and ethical considerations.

The Act employs a risk-based approach to AI regulation, categorizing systems according to the level of risk they pose to individuals and society. This framework imposes obligations proportional to identified risks, ensuring that high-risk systems face stringent requirements while allowing low-risk applications to operate with minimal regulatory burden.

The Act applies to providers placing AI systems on the EU market, deployers using such systems within the EU, and certain third-country operators whose AI system outputs are used within the European Union. This extraterritorial reach means that AI systems whose outputs affect persons in the Union remain subject to the Act regardless of where the provider or deployer is established.

Key Definitions

The Act defines an AI system as a machine-based system designed to operate with varying levels of autonomy that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

A provider is defined as any natural or legal person, public authority, agency, or other body that develops an AI system or has an AI system developed and places it on the market or puts it into service under its own name or trademark, whether for payment or free of charge. A deployer is a natural or legal person, public authority, agency, or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity.

General-purpose AI models are separately defined as AI models, including when trained with a large amount of data using self-supervision at scale, that display significant generality and are capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market, and that can be integrated into a variety of downstream systems or applications. Such models may include large language models, multimodal models, and foundation models.

Provider Versus Deployer Responsibilities

Providers of high-risk AI systems bear primary responsibility for ensuring compliance with the Act. Obligations include establishing and documenting a quality management system, conducting conformity assessments, preparing technical documentation, implementing logging capabilities, ensuring transparency and provision of information to deployers, and designing systems to enable human oversight. Providers must also establish post-market monitoring systems, report serious incidents to competent authorities, and maintain registration in the EU database for high-risk AI systems.

Deployers of high-risk AI systems have distinct obligations. They must use systems in accordance with the instructions of use provided by the provider. Deployers must assign human oversight to natural persons who have the necessary competence, training, and authority. They must monitor the operation of the AI system on the basis of the instructions of use and inform the provider or distributor and the relevant market surveillance authority of any serious incident or malfunctioning. Deployers must also conduct data protection impact assessments where required under applicable data protection law.

When a deployer modifies a high-risk AI system in a manner that changes its intended purpose or substantially modifies the system, the deployer becomes a provider and assumes all associated provider obligations. This transformation of roles ensures that parties making significant changes to AI systems bear appropriate responsibility for compliance.
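
As a rough illustration of this role logic, the following Python sketch encodes the condition under which a deployer takes on provider obligations. The field names and binary questions are simplifications for illustration, not terms drawn from the Act, and any real determination requires legal analysis.

    from dataclasses import dataclass

    @dataclass
    class SystemChange:
        # Hypothetical record of a change a deployer makes to a high-risk AI system.
        changes_intended_purpose: bool     # e.g. repurposing a recruitment screener for credit scoring
        is_substantial_modification: bool  # e.g. retraining that alters the system's risk profile

    def operator_role_after_change(change: SystemChange) -> str:
        # Simplified reading of the rule described above: a deployer that changes the
        # intended purpose or substantially modifies the system is treated as a provider.
        if change.changes_intended_purpose or change.is_substantial_modification:
            return "provider"
        return "deployer"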

Risk Categories and Classification

Unacceptable Risk

Certain AI practices are prohibited as they pose unacceptable risks to fundamental rights and freedoms. Prohibited practices include:

  • Placing on the market, putting into service, or using AI systems that deploy subliminal techniques beyond a person's consciousness, or purposefully manipulative or deceptive techniques, to materially distort behavior in a manner that causes or is reasonably likely to cause significant harm.
  • Placing on the market, putting into service, or using AI systems that exploit vulnerabilities of persons due to their age, disability, or a specific social or economic situation, to materially distort behavior in a manner that causes or is reasonably likely to cause significant harm.
  • Placing on the market, putting into service, or using AI systems for the evaluation or classification of natural persons or groups thereof over a certain period of time based on their social behavior or known, inferred, or predicted personal or personality characteristics, with the social score leading to detrimental or unfavorable treatment in social contexts unrelated to the context in which the data was originally generated or collected.
  • The use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes, unless specific exceptions apply and appropriate safeguards are in place.

High Risk

AI systems are classified as high risk if they fall within specific categories listed in an annex to the Act or if they are used as safety components of products covered by Union harmonization legislation. High-risk categories include:

  • Biometric identification and categorization of natural persons.
  • Management and operation of critical infrastructure.
  • Education and vocational training, particularly for determining access to educational institutions or assessing students.
  • Employment, workers management, and access to self-employment, including AI systems for recruitment, screening, evaluating candidates, making promotion or termination decisions, and allocating tasks based on individual behavior or personal traits.
  • Access to and enjoyment of essential private services and public services and benefits, including AI systems for evaluating creditworthiness or establishing credit scores, dispatching or establishing priority in emergency response services, and evaluating eligibility for public assistance benefits.
  • Law enforcement, including AI systems for individual risk assessments, polygraphs, emotion recognition, deep fake detection, and evaluation of the reliability of evidence in criminal proceedings.
  • Migration, asylum, and border control management, including AI systems for examining applications, assessing security risks posed by persons, and operating polygraphs.
  • Administration of justice and democratic processes, including AI systems for assisting judicial authorities in researching and interpreting facts and law and applying the law to concrete facts.

Limited Risk

AI systems with limited risk are subject to transparency obligations. Providers and deployers must ensure that natural persons are informed that they are interacting with an AI system, unless this is obvious from the circumstances. AI systems that generate or manipulate image, audio, or video content that constitutes deep fakes must disclose that the content has been artificially generated or manipulated. Emotion recognition systems and biometric categorization systems must inform natural persons that they are being subjected to such systems.
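
As a minimal sketch of how these transparency duties might surface in an application, the hypothetical Python snippet below attaches an interaction notice to a chatbot reply and a machine-readable disclosure to generated media. The field names and notice wording are illustrative assumptions, not text mandated by the Act.

    from dataclasses import dataclass

    AI_INTERACTION_NOTICE = "You are interacting with an AI system."
    GENERATED_CONTENT_NOTICE = "This content has been artificially generated or manipulated."

    @dataclass
    class GeneratedMedia:
        # Hypothetical wrapper for image, audio, or video output produced by an AI system.
        payload: bytes
        media_type: str            # e.g. "image/png" or "audio/wav"
        disclosure: str = GENERATED_CONTENT_NOTICE

    def chatbot_reply(text: str, ai_nature_obvious: bool = False) -> dict:
        # Attach an interaction notice unless the AI nature of the interaction
        # is already obvious from the circumstances.
        reply = {"message": text}
        if not ai_nature_obvious:
            reply["notice"] = AI_INTERACTION_NOTICE
        return reply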

Minimal Risk

The majority of AI systems fall into the minimal risk category and are not subject to specific obligations under the Act beyond general legal requirements applicable to all products and services. Examples include AI-enabled video games, spam filters, and inventory management systems. Providers of minimal risk AI systems may voluntarily choose to apply codes of conduct or adhere to harmonized standards.
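
To make the four tiers concrete, the sketch below shows how an internal AI inventory tool might record them and perform a first-pass triage. The category strings are paraphrases of the areas listed above, not the Act's legal wording, and real classification must follow the Act and its annexes.

    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "prohibited practice"
        HIGH = "high risk"
        LIMITED = "limited risk (transparency obligations)"
        MINIMAL = "minimal risk"

    # Paraphrased high-risk areas from the list above, for illustration only.
    HIGH_RISK_AREAS = {
        "biometric identification and categorization",
        "critical infrastructure",
        "education and vocational training",
        "employment and worker management",
        "essential private and public services",
        "law enforcement",
        "migration, asylum and border control",
        "administration of justice and democratic processes",
    }

    def triage(use_case_area: str, prohibited_practice: bool,
               interacts_with_persons_or_generates_content: bool) -> RiskTier:
        # Very rough first-pass triage into one of the four tiers described above.
        if prohibited_practice:
            return RiskTier.UNACCEPTABLE
        if use_case_area in HIGH_RISK_AREAS:
            return RiskTier.HIGH
        if interacts_with_persons_or_generates_content:
            return RiskTier.LIMITED
        return RiskTier.MINIMAL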

Conformity Assessments

Before placing a high-risk AI system on the market or putting it into service, providers must conduct a conformity assessment to demonstrate compliance with the Act. The conformity assessment may be based on internal control or involve a notified body, depending on the nature of the AI system and whether harmonized standards have been applied.

For AI systems intended to be used as safety components of products covered by Union harmonization legislation that requires involvement of a notified body, the conformity assessment must be conducted as part of the conformity assessment procedure required under that legislation. For other high-risk AI systems, providers may conduct an internal conformity assessment based on documented processes, provided that relevant harmonized standards or common specifications have been applied and documented compliance can be demonstrated.

Upon successful completion of the conformity assessment, providers must draw up an EU declaration of conformity and affix the CE marking to the AI system or its packaging. The declaration of conformity must contain information identifying the provider, the AI system, and the conformity assessment procedure followed. The CE marking must be affixed visibly, legibly, and indelibly.
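
The sketch below illustrates how a provider's internal tooling might record the declaration described above. The field names are assumptions for illustration and do not reproduce the Act's required format.

    from dataclasses import dataclass
    from datetime import date
    from typing import Optional

    @dataclass
    class DeclarationOfConformity:
        # Illustrative record of an EU declaration of conformity for a high-risk AI system.
        provider_name: str
        provider_address: str
        system_name: str
        system_version: str
        conformity_procedure: str        # e.g. "internal control" or "notified body assessment"
        notified_body_id: Optional[str]  # identification number if a notified body was involved
        harmonized_standards: list       # harmonized standards or common specifications applied
        date_of_issue: date
        ce_marking_affixed: bool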

Technical Documentation Requirements

Technical documentation for high-risk AI systems must be drawn up before the system is placed on the market or put into service and kept up to date. Documentation must demonstrate compliance with the requirements of the Act and provide competent authorities and notified bodies with sufficient information to assess compliance. Required elements include:

  • A general description of the AI system, including its intended purpose, the persons or groups of persons on which the system is intended to be used, versions, and how the AI system interacts with hardware or software that is not part of the AI system itself.
  • A detailed description of the elements of the AI system and the process for its development, including the methods and steps performed for development, the design specifications, the architecture, and which requirements the AI system is intended to comply with.
  • Detailed information about the data used for training, validation, and testing, including data provenance, relevance, representativeness, and measures taken to examine the presence of possible biases and to detect, prevent, and mitigate those biases.
  • A description of the risk management system, including the risk management plan and the procedures adopted to identify and analyze known and foreseeable risks, to estimate and evaluate risks emerging during testing and post-market use, and to adopt suitable risk management measures.
  • The validation and testing procedures used, including information about the validation and testing data, metrics used to measure accuracy, robustness, and cybersecurity, and test logs and reports dated and signed by responsible persons.
  • Information about the human oversight measures, including the identity of the persons or groups of persons in charge of human oversight, the technical measures put in place to facilitate human oversight, and how human oversight is integrated into the AI system.
  • The EU declaration of conformity and copies of any certificates issued by notified bodies.

Technical documentation must be kept for a period of ten years after the AI system has been placed on the market or put into service. It must be made available to competent authorities upon request.
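
As an internal tracking aid, the sketch below mirrors the required elements above as a simple checklist. The keys are paraphrases of those elements, and the ten-year retention constant reflects the retention period just described.

    # Illustrative checklist mirroring the documentation elements listed above.
    TECHNICAL_DOCUMENTATION_CHECKLIST = {
        "general_description": False,           # intended purpose, affected persons, versions, interfaces
        "development_process": False,           # methods, design specifications, architecture
        "data_documentation": False,            # training/validation/test data, provenance, bias measures
        "risk_management_system": False,        # risk management plan and procedures
        "validation_and_testing": False,        # metrics, test logs, signed and dated reports
        "human_oversight_measures": False,      # responsible persons and technical measures
        "declaration_and_certificates": False,  # EU declaration of conformity and notified body certificates
    }

    RETENTION_YEARS = 10  # keep documentation for ten years after placing on the market

    def documentation_complete(checklist: dict) -> bool:
        # True only once every required element has been prepared and kept up to date.
        return all(checklist.values())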

Post-Market Monitoring

Providers of high-risk AI systems must establish and document a post-market monitoring system in a manner that is proportionate to the nature of the AI technologies and the risks of the high-risk AI system. The post-market monitoring system must actively and systematically collect, document, and analyze relevant data that may be provided by deployers or collected through other sources on the performance of high-risk AI systems throughout their lifetime. The system must enable providers to evaluate the continuous compliance of AI systems with the requirements of the Act.

Providers must plan and document post-market monitoring activities through a post-market monitoring plan. The plan must be part of the technical documentation and must describe the procedures in place for collecting and reviewing experience gained from use of high-risk AI systems, the measures in place to assess whether the system continues to perform as intended, and the actions to be taken when issues are identified.

When a provider becomes aware of a serious incident involving a high-risk AI system, the provider must immediately report the incident to the market surveillance authorities of the Member States where the incident occurred. Serious incidents include any incident or malfunctioning of an AI system that directly or indirectly leads to death or serious damage to health, a serious and irreversible disruption of the management or operation of critical infrastructure, or breaches of obligations under Union law intended to protect fundamental rights. Providers must also report incidents that result in the infringement of data protection rules.
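
The hypothetical sketch below shows one way a provider's monitoring pipeline could flag events that meet the serious-incident criteria described above. The trigger labels are paraphrases for illustration; the actual reporting duties and deadlines are those set out in the Act.

    from dataclasses import dataclass
    from datetime import datetime
    from typing import Optional

    # Paraphrased serious-incident criteria from the paragraph above.
    SERIOUS_INCIDENT_TRIGGERS = {
        "death_or_serious_damage_to_health",
        "serious_irreversible_disruption_of_critical_infrastructure",
        "breach_of_fundamental_rights_obligations",
    }

    @dataclass
    class MonitoringEvent:
        # Illustrative post-market monitoring record collected from deployers or system logs.
        system_id: str
        occurred_at: datetime
        member_state: str
        description: str
        trigger: Optional[str] = None  # one of SERIOUS_INCIDENT_TRIGGERS, or None for routine data

    def requires_authority_report(event: MonitoringEvent) -> bool:
        # A serious incident must be reported to the market surveillance authority
        # of the Member State where it occurred.
        return event.trigger in SERIOUS_INCIDENT_TRIGGERS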

Timeline for Enforcement

The Act follows a phased implementation timeline to give organizations adequate time to achieve compliance. The prohibitions on unacceptable-risk AI practices apply six months after the Act enters into force. Obligations for general-purpose AI models apply twelve months after entry into force. Requirements for high-risk AI systems, including conformity assessments, apply twenty-four months after entry into force, with longer transition periods for certain high-risk systems covered by Union harmonization legislation and for systems already placed on the market or put into service.
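
For planning purposes, the sketch below turns those offsets into calendar dates, assuming for illustration an entry-into-force date of 1 August 2024. The legally binding application dates are those fixed in the Act itself, and this naive month arithmetic may differ from them slightly.

    from datetime import date

    def add_months(d: date, months: int) -> date:
        # Add whole calendar months to a date (day clamped to 28 to stay valid).
        total = d.month - 1 + months
        return date(d.year + total // 12, total % 12 + 1, min(d.day, 28))

    ENTRY_INTO_FORCE = date(2024, 8, 1)  # assumed for illustration

    MILESTONES = {
        "prohibitions on unacceptable-risk practices": add_months(ENTRY_INTO_FORCE, 6),
        "general-purpose AI model obligations": add_months(ENTRY_INTO_FORCE, 12),
        "high-risk requirements and conformity assessments": add_months(ENTRY_INTO_FORCE, 24),
    }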

Organizations must monitor the publication of harmonized standards and common specifications by the European Commission. Application of these standards provides a presumption of conformity with corresponding requirements of the Act. Providers should align internal development processes with emerging technical specifications as they are released.

Penalties for non-compliance are substantial. Infringements of the prohibitions on AI practices may result in administrative fines of up to thirty-five million euros or seven percent of total worldwide annual turnover for the preceding financial year, whichever is higher. Non-compliance with most other obligations may result in fines of up to fifteen million euros or three percent of total worldwide annual turnover. Supplying incorrect, incomplete, or misleading information to notified bodies or competent authorities may result in fines of up to seven and a half million euros or one percent of total worldwide annual turnover.
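
Because each fine ceiling is the higher of a flat amount and a share of worldwide turnover, the cap scales with company size. The short sketch below computes the ceiling using the figures stated above; it is an arithmetic illustration, not guidance on the fine an authority would actually impose.

    def administrative_fine_cap(flat_cap_eur: float, turnover_share: float,
                                worldwide_annual_turnover_eur: float) -> float:
        # The ceiling is the flat cap or the turnover share, whichever is higher.
        return max(flat_cap_eur, turnover_share * worldwide_annual_turnover_eur)

    # Example: a company with EUR 2 billion worldwide annual turnover that infringes
    # a prohibited-practice rule faces a ceiling of
    # max(35,000,000, 0.07 * 2,000,000,000) = EUR 140 million.
    cap = administrative_fine_cap(35_000_000, 0.07, 2_000_000_000)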

Preparation for Audits and Compliance Verification

Organizations subject to the Act must prepare for audits and inspections by competent authorities. Comprehensive preparation includes:

  • Maintaining complete and current technical documentation that demonstrates compliance with all applicable requirements.
  • Ensuring that quality management system documentation accurately reflects implemented processes and controls.
  • Retaining records of conformity assessments, validation and testing procedures, and results.
  • Documenting post-market monitoring activities, incident reports, and corrective actions taken.
  • Training personnel on compliance requirements and their specific responsibilities under the Act.
  • Establishing clear internal protocols for responding to information requests from authorities.
  • Conducting internal audits to verify ongoing compliance and identify areas requiring remediation.

Organizations should designate individuals responsible for compliance oversight and establish escalation procedures for issues requiring executive attention. Legal counsel experienced in AI regulation should review compliance programs and documentation to ensure adequacy. Regular reviews of regulatory developments and guidance from supervisory authorities will ensure that compliance measures remain current as the regulatory landscape evolves.

Need Help with EU AI Act Compliance?

Verdict AI automates compliance documentation and helps you navigate complex regulatory requirements.

Get Started