Topics

Evaluation criterion
Legal article, Ai system, Legal obligation
Legal article, Technical requirement, Legal obligation
Legal obligation, Technical requirement, Ai system
Legal article, Documentation, Regulation
Market actor
Ai system
Data category
Technical requirement
Legal obligation, Legal article, Technical requirement
Institution
Institution
Legal obligation
Evaluation criterion, Legal article, Ai system
Legal obligation
Technical requirement
Ai system
Institution, Ai system, Technical requirement
Documentation, Legal article, Legislative body
Legal article
Legal article
Ai system, Legal obligation
Ai system
Ai system
Legal article, Regulation
Institution
Regulation
Technical requirement
Technical requirement
Legal obligation
Institution
Documentation
Ai system
Ai system
Ai system
Legal obligation
Ai system
Technical requirement
Technical requirement
Institution, Documentation
Ai system
Technical requirement
Technical requirement
Legal obligation
Legal article
Legal article
Technical requirement, Evaluation criterion, Regulation
Legal article
Legal article
Legal article, Technical requirement, Legal obligation
Technical requirement
Legal obligation
Legal obligation
Documentation
Institution
Legal obligation
Legal obligation
Technical requirement
Legal article
Documentation
Market actor
Market actor
Legal article
Evaluation criterion
Legal obligation
Legal obligation
Legal obligation
Evaluation criterion
Legal obligation
Data category
Data category
Data category
Data category
Data category
Evaluation criterion
Data category
Legal article
Technical requirement
Data category
Legal obligation, Legal article, Technical requirement
Legal article
Institution, Legal article, Technical requirement
Legal article
Documentation
Evaluation criterion
Market actor
Legal article
Technical requirement
Legal obligation
Legal article
Legal obligation
Legal obligation
Documentation, Institution
Legal article
Evaluation criterion
Legal article, Technical requirement
Evaluation criterion
Technical requirement
Institution
Documentation
Legal article
Documentation
Ai system
Ai system, Legal obligation
Evaluation criterion, Technical requirement, Legal obligation
Technical requirement
Data category
Institution
Institution
Legal article
Legal article
Legislative procedure
Legal article
Legal article
Institution, Legal article, Legal obligation
Institution
Documentation
Technical requirement
Legal obligation
Legal article
Legal obligation
Legal article
Legal article
Legal obligation
Legal obligation
Legal article
Legal article
Legal article
Legal article
Legal article
Legal article
Evaluation criterion
Regulation, Directive, Legal article
Documentation
Technical requirement
Documentation
Regulation
Regulation
Legal obligation
Institution
Technical requirement
Legal article
Documentation
Legal article
Regulation, Directive, Legal article
Ai system
Regulation
Regulation
Market actor, Regulation
Legal article, Legal obligation, Ai system

Content

Show original text

On June 13, 2024, the European Parliament and Council passed a new law called the Artificial Intelligence Act (Regulation EU 2024/1689). This law updates several existing EU regulations and directives related to transportation, aviation, and consumer protection. The main goal of this law is to create uniform rules across the European Union for developing, selling, and using artificial intelligence systems. The law aims to support the growth of trustworthy AI that puts humans first, while protecting people's health, safety, and fundamental rights—including democracy and the rule of law. By establishing these common standards, the law helps the EU's internal market function better.

<p>REGULATION (EU) 2024/1689 OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL<br /> of 13 June 2024<br /> laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, <br /> (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and <br /> Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act)<br /> (Text with EEA relevance)<br /> THE EUROPEAN PARLIAMENT AND THE COUNCIL OF THE EUROPEAN UNION,<br /> Having regard to the Treaty on the Functioning of the European Union, and in particular Articles 16 and 114 thereof,<br /> Having regard to the proposal from the European Commission,<br /> After transmission of the draft legislative act to the national parliaments,<br /> Having regard to the opinion of the European Economic and Social Committee (1),<br /> Having regard to the opinion of the European Central Bank (2),<br /> Having regard to the opinion of the Committee of the Regions (3),<br /> Acting in accordance with the ordinary legislative procedure (4),<br /> Whereas:<br /> (1)<br /> The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal <br /> framework in particular for the development, the placing on the market, the putting into service and the use of <br /> artificial intelligence systems (AI systems) in the Union, in accordance with Union values, to promote the uptake of <br /> human centric and trustworthy artificial intelligence (AI) while ensuring a high level of protection of health, safety, <br /> fundamental rights as enshrined in the Charter of Fundamental Rights of the European Union (the ‘Charter’), <br /> including democracy, the rule of law</p>
Show original text

This regulation protects people's health, safety, and fundamental rights in the European Union while allowing AI technology to develop and be used freely across EU countries. It prevents individual member states from blocking AI products and services, unless this regulation specifically allows it. The regulation is based on EU values including democracy, the rule of law, and environmental protection. It aims to make the EU a leader in trustworthy AI while supporting innovation and jobs. Since AI systems can easily spread across borders and different sectors, the EU needs uniform rules to prevent fragmentation of the market. Without consistent standards, different national rules would create confusion and uncertainty for companies developing, selling, or using AI systems. Therefore, this regulation establishes the same requirements for all operators throughout the Union to ensure AI is trustworthy, safe, and respects fundamental rights.

<p>artificial intelligence (AI) while ensuring a high level of protection of health, safety, <br /> fundamental rights as enshrined in the Charter of Fundamental Rights of the European Union (the ‘Charter’), <br /> including democracy, the rule of law and environmental protection, to protect against the harmful effects of AI <br /> systems in the Union, and to support innovation. This Regulation ensures the free movement, cross-border, of <br /> AI-based goods and services, thus preventing Member States from imposing restrictions on the development, <br /> marketing and use of AI systems, unless explicitly authorised by this Regulation.<br /> (2)<br /> This Regulation should be applied in accordance with the values of the Union enshrined as in the Charter, facilitating <br /> the protection of natural persons, undertakings, democracy, the rule of law and environmental protection, while <br /> boosting innovation and employment and making the Union a leader in the uptake of trustworthy AI.<br /> (3)<br /> AI systems can be easily deployed in a large variety of sectors of the economy and many parts of society, including <br /> across borders, and can easily circulate throughout the Union. Certain Member States have already explored the <br /> adoption of national rules to ensure that AI is trustworthy and safe and is developed and used in accordance with <br /> fundamental rights obligations. Diverging national rules may lead to the fragmentation of the internal market and <br /> may decrease legal certainty for operators that develop, import or use AI systems. A consistent and high level of <br /> protection throughout the Union should therefore be ensured in order to achieve trustworthy AI, while divergences <br /> hampering the free circulation, innovation, deployment and the uptake of AI systems and related products and <br /> services within the internal market should be prevented by laying down uniform obligations for operators and <br /> Official Journal <br /> of the European Union<br /> EN <br /> L series<br /> 2024/1689<br /> 12.7.</p>
Show original text

To ensure AI systems and related products are used safely and fairly across Europe, this regulation sets uniform rules for operators. It protects important public interests and individual rights throughout the European market based on EU law (Article 114 of the TFEU). The regulation includes specific protections for personal data, particularly regarding AI use in law enforcement for: remote biometric identification, risk assessments of individuals, and biometric categorization. These data protection rules are based on Article 16 of the TFEU. Because of these data protection provisions, the European Data Protection Board has been consulted on this regulation. This regulation was approved by the European Parliament on 13 March 2024 and by the Council on 21 May 2024.

<p>the uptake of AI systems and related products and <br /> services within the internal market should be prevented by laying down uniform obligations for operators and <br /> Official Journal <br /> of the European Union<br /> EN <br /> L series<br /> 2024/1689<br /> 12.7.2024<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 1/144<br /> (1)<br /> OJ C 517, 22.12.2021, p. 56.<br /> (2)<br /> OJ C 115, 11.3.2022, p. 5.<br /> (3)<br /> OJ C 97, 28.2.2022, p. 60.<br /> (4)<br /> Position of the European Parliament of 13 March 2024 (not yet published in the Official Journal) and decision of the Council of <br /> 21 May 2024.</p> <p>guaranteeing the uniform protection of overriding reasons of public interest and of rights of persons throughout the <br /> internal market on the basis of Article 114 of the Treaty on the Functioning of the European Union (TFEU). To the <br /> extent that this Regulation contains specific rules on the protection of individuals with regard to the processing of <br /> personal data concerning restrictions of the use of AI systems for remote biometric identification for the purpose of <br /> law enforcement, of the use of AI systems for risk assessments of natural persons for the purpose of law <br /> enforcement and of the use of AI systems of biometric categorisation for the purpose of law enforcement, it is <br /> appropriate to base this Regulation, in so far as those specific rules are concerned, on Article 16 TFEU. In light of <br /> those specific rules and the recourse to Article 16 TFEU, it is appropriate to consult the European Data Protection <br /> Board.</p>
Show original text

This regulation follows Article 16 of the Treaty on the Functioning of the European Union (TFEU) and requires consultation with the European Data Protection Board. Artificial Intelligence (AI) is a rapidly developing technology that offers significant benefits across many industries and sectors. AI improves forecasting, increases efficiency, optimizes resource use, and personalizes solutions for individuals and organizations. It has valuable applications in healthcare, agriculture, food safety, education, media, sports, culture, infrastructure, energy, transport, logistics, public services, security, justice, and environmental protection. However, AI can also create risks and cause harm to public interests and fundamental rights protected by EU law. This harm can be physical, psychological, social, or economic. Because AI has major societal impact, its development and regulation must align with EU values, fundamental rights, and freedoms as stated in the Treaty on European Union and the EU Charter of Fundamental Rights. AI must be human-centered, designed as a tool to serve people and improve their well-being.

<p>Regulation, in so far as those specific rules are concerned, on Article 16 TFEU. In light of <br /> those specific rules and the recourse to Article 16 TFEU, it is appropriate to consult the European Data Protection <br /> Board.<br /> (4)<br /> AI is a fast evolving family of technologies that contributes to a wide array of economic, environmental and societal <br /> benefits across the entire spectrum of industries and social activities. By improving prediction, optimising operations <br /> and resource allocation, and personalising digital solutions available for individuals and organisations, the use of AI <br /> can provide key competitive advantages to undertakings and support socially and environmentally beneficial <br /> outcomes, for example in healthcare, agriculture, food safety, education and training, media, sports, culture, <br /> infrastructure management, energy, transport and logistics, public services, security, justice, resource and energy <br /> efficiency, environmental monitoring, the conservation and restoration of biodiversity and ecosystems and climate <br /> change mitigation and adaptation.<br /> (5)<br /> At the same time, depending on the circumstances regarding its specific application, use, and level of technological <br /> development, AI may generate risks and cause harm to public interests and fundamental rights that are protected by <br /> Union law. Such harm might be material or immaterial, including physical, psychological, societal or economic <br /> harm.<br /> (6)<br /> Given the major impact that AI can have on society and the need to build trust, it is vital for AI and its regulatory <br /> framework to be developed in accordance with Union values as enshrined in Article 2 of the Treaty on European <br /> Union (TEU), the fundamental rights and freedoms enshrined in the Treaties and, pursuant to Article 6 TEU, the <br /> Charter. As a prerequisite, AI should be a human-centric technology. It should serve as a tool for people, with the <br /> ultimate aim of increasing human well-being.</p>
Show original text

The European Union needs to create unified rules for artificial intelligence (AI) to protect public interests while supporting innovation. AI should be designed to benefit people and improve their quality of life. The EU must establish common standards for high-risk AI systems that respect the Charter of Fundamental Rights, prevent discrimination, and align with international trade agreements. These standards should follow the European Declaration on Digital Rights and the trustworthy AI guidelines from the High-Level Expert Group on Artificial Intelligence. A comprehensive EU legal framework is needed to allow AI systems to be developed and used across the internal market while maintaining strong protections for public health, safety, fundamental rights, democracy, the rule of law, and environmental protection. This framework should set clear rules for how AI systems are sold, deployed, and used. These rules will help the internal market function smoothly, allow AI products to move freely between EU countries, protect fundamental rights, encourage innovation, and help European companies and organizations develop AI that reflects EU values. This approach will unlock the benefits of digital transformation across all regions of the Union.

<p>the Treaties and, pursuant to Article 6 TEU, the <br /> Charter. As a prerequisite, AI should be a human-centric technology. It should serve as a tool for people, with the <br /> ultimate aim of increasing human well-being.<br /> (7)<br /> In order to ensure a consistent and high level of protection of public interests as regards health, safety and <br /> fundamental rights, common rules for high-risk AI systems should be established. Those rules should be consistent <br /> with the Charter, non-discriminatory and in line with the Union’s international trade commitments. They should <br /> also take into account the European Declaration on Digital Rights and Principles for the Digital Decade and the <br /> Ethics guidelines for trustworthy AI of the High-Level Expert Group on Artificial Intelligence (AI HLEG).<br /> (8)<br /> A Union legal framework laying down harmonised rules on AI is therefore needed to foster the development, use <br /> and uptake of AI in the internal market that at the same time meets a high level of protection of public interests, such <br /> as health and safety and the protection of fundamental rights, including democracy, the rule of law and <br /> environmental protection as recognised and protected by Union law. To achieve that objective, rules regulating the <br /> placing on the market, the putting into service and the use of certain AI systems should be laid down, thus ensuring <br /> the smooth functioning of the internal market and allowing those systems to benefit from the principle of free <br /> movement of goods and services. Those rules should be clear and robust in protecting fundamental rights, <br /> supportive of new innovative solutions, enabling a European ecosystem of public and private actors creating AI <br /> systems in line with Union values and unlocking the potential of the digital transformation across all regions of the <br /> Union.</p>
Show original text

This regulation protects fundamental rights and supports innovation in artificial intelligence (AI) across Europe. It helps both public and private organizations create AI systems that reflect European values. The regulation includes special support for small and medium enterprises (SMEs) and startups to encourage innovation. It promotes Europe's human-centered approach to AI and aims to make Europe a global leader in developing secure, trustworthy, and ethical AI systems. This aligns with goals set by the European Council and meets the European Parliament's request to protect ethical principles. The regulation also establishes consistent rules for high-risk AI systems that are placed on the market, put into service, or used, following the framework established by Regulation (EC) No 765/2008, Decision No 768/2008/EC, and Regulation (EU) 2019/1020.

<p>clear and robust in protecting fundamental rights, <br /> supportive of new innovative solutions, enabling a European ecosystem of public and private actors creating AI <br /> systems in line with Union values and unlocking the potential of the digital transformation across all regions of the <br /> Union. By laying down those rules as well as measures in support of innovation with a particular focus on small and <br /> medium enterprises (SMEs), including startups, this Regulation supports the objective of promoting the European <br /> human-centric approach to AI and being a global leader in the development of secure, trustworthy and ethical AI as <br /> stated by the European Council (5), and it ensures the protection of ethical principles, as specifically requested by the <br /> European Parliament (6).<br /> EN<br /> OJ L, 12.7.2024<br /> 2/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> (5)<br /> European Council, Special meeting of the European Council (1 and 2 October 2020) — Conclusions, EUCO 13/20, 2020, p. 6.<br /> (6)<br /> European Parliament resolution of 20 October 2020 with recommendations to the Commission on a framework of ethical aspects <br /> of artificial intelligence, robotics and related technologies, 2020/2012(INL).</p> <p>(9)<br /> Harmonised rules applicable to the placing on the market, the putting into service and the use of high-risk AI <br /> systems should be laid down consistently with Regulation (EC) No 765/2008 of the European Parliament and of the <br /> Council (7), Decision No 768/2008/EC of the European Parliament and of the Council (8) and Regulation (EU) <br /> 2019/1020 of the European Parliament and of the Council (9) (New Legislative Framework).</p>
Show original text

This regulation creates uniform rules for artificial intelligence (AI) across all sectors and works alongside the New Legislative Framework. It does not replace existing EU laws on data protection, consumer protection, human rights, employment, worker safety, or product safety. All protections and compensation rights that consumers and others affected by AI systems have under current EU law remain fully in place, including damage compensation under Council Directive 85/374/EEC. The regulation also does not interfere with EU and national labor laws regarding employment, working conditions, workplace health and safety, or employer-worker relationships. It does not limit fundamental rights recognized in EU member states, such as the right to strike, take collective action, negotiate, or make collective agreements according to national law. The regulation does not affect rules designed to improve working conditions in platform work. Instead, this regulation strengthens existing protections by requiring AI systems to meet specific standards for transparency, technical documentation, and record-keeping.

<p>(7), Decision No 768/2008/EC of the European Parliament and of the Council (8) and Regulation (EU) <br /> 2019/1020 of the European Parliament and of the Council (9) (New Legislative Framework). The harmonised rules <br /> laid down in this Regulation should apply across sectors and, in line with the New Legislative Framework, should be <br /> without prejudice to existing Union law, in particular on data protection, consumer protection, fundamental rights, <br /> employment, and protection of workers, and product safety, to which this Regulation is complementary. As <br /> a consequence, all rights and remedies provided for by such Union law to consumers, and other persons on whom <br /> AI systems may have a negative impact, including as regards the compensation of possible damages pursuant to <br /> Council Directive 85/374/EEC (10) remain unaffected and fully applicable. Furthermore, in the context of <br /> employment and protection of workers, this Regulation should therefore not affect Union law on social policy and <br /> national labour law, in compliance with Union law, concerning employment and working conditions, including <br /> health and safety at work and the relationship between employers and workers. This Regulation should also not <br /> affect the exercise of fundamental rights as recognised in the Member States and at Union level, including the right or <br /> freedom to strike or to take other action covered by the specific industrial relations systems in Member States as well <br /> as the right to negotiate, to conclude and enforce collective agreements or to take collective action in accordance <br /> with national law. This Regulation should not affect the provisions aiming to improve working conditions in <br /> platform work laid down in a Directive of the European Parliament and of the Council on improving working <br /> conditions in platform work. Moreover, this Regulation aims to strengthen the effectiveness of such existing rights <br /> and remedies by establishing specific requirements and obligations, including in respect of the transparency, <br /> technical documentation and record-keeping of AI systems.</p>
Show original text

This regulation aims to improve working conditions in platform work and strengthen existing worker protections. It does this by setting specific requirements for AI systems, including rules about transparency, technical documentation, and record-keeping. The obligations apply to all companies involved in developing and using AI, but do not override national laws that protect workers or children (under 18 years old), as long as those laws serve other important public interests. These national protections, such as labor laws and child protection laws, remain valid alongside this regulation. The EU has separate laws that protect personal data and privacy rights. These laws (EU Regulations 2016/679 and 2018/1725, and EU Directives 2016/680 and 2002/58/EC) ensure that personal and non-personal data are processed responsibly and securely, including data stored on devices.

<p>on improving working <br /> conditions in platform work. Moreover, this Regulation aims to strengthen the effectiveness of such existing rights <br /> and remedies by establishing specific requirements and obligations, including in respect of the transparency, <br /> technical documentation and record-keeping of AI systems. Furthermore, the obligations placed on various <br /> operators involved in the AI value chain under this Regulation should apply without prejudice to national law, in <br /> compliance with Union law, having the effect of limiting the use of certain AI systems where such law falls outside <br /> the scope of this Regulation or pursues legitimate public interest objectives other than those pursued by this <br /> Regulation. For example, national labour law and law on the protection of minors, namely persons below the age of <br /> 18, taking into account the UNCRC General Comment No 25 (2021) on children’s rights in relation to the digital <br /> environment, insofar as they are not specific to AI systems and pursue other legitimate public interest objectives, <br /> should not be affected by this Regulation.<br /> (10)<br /> The fundamental right to the protection of personal data is safeguarded in particular by Regulations (EU) <br /> 2016/679 (11) and (EU) 2018/1725 (12) of the European Parliament and of the Council and Directive (EU) 2016/680 <br /> of the European Parliament and of the Council (13). Directive 2002/58/EC of the European Parliament and of the <br /> Council (14) additionally protects private life and the confidentiality of communications, including by way of <br /> providing conditions for any storing of personal and non-personal data in, and access from, terminal equipment. <br /> Those Union legal acts provide the basis for sustainable and responsible data processing, including where data sets <br /> include a mix of personal and non-personal data.</p>
Show original text

This regulation covers how personal and non-personal data are stored and accessed on devices. It is based on existing European Union laws that ensure data is processed responsibly and sustainably, even when data sets mix personal and non-personal information. This regulation does not change how current EU laws protect personal data or reduce the powers of independent authorities that monitor compliance. It also does not affect the responsibilities of AI system providers and operators when they handle personal data as part of designing, developing, or using AI systems. People whose data is processed continue to have all their rights under existing data protection laws. The regulation references EU Regulation 765/2008 on accreditation requirements and EU Decision 768/2008/EC on product marketing standards.

<p>for any storing of personal and non-personal data in, and access from, terminal equipment. <br /> Those Union legal acts provide the basis for sustainable and responsible data processing, including where data sets <br /> include a mix of personal and non-personal data. This Regulation does not seek to affect the application of existing <br /> Union law governing the processing of personal data, including the tasks and powers of the independent supervisory <br /> authorities competent to monitor compliance with those instruments. It also does not affect the obligations of <br /> providers and deployers of AI systems in their role as data controllers or processors stemming from Union or <br /> national law on the protection of personal data in so far as the design, the development or the use of AI systems <br /> involves the processing of personal data. It is also appropriate to clarify that data subjects continue to enjoy all the <br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 3/144<br /> (7)<br /> Regulation (EC) No 765/2008 of the European Parliament and of the Council of 9 July 2008 setting out the requirements for <br /> accreditation and repealing Regulation (EEC) No 339/93 (OJ L 218, 13.8.2008, p. 30).<br /> (8)<br /> Decision No 768/2008/EC of the European Parliament and of the Council of 9 July 2008 on a common framework for the <br /> marketing of products, and repealing Council Decision 93/465/EEC (OJ L 218, 13.8.2008, p. 82).</p>
Show original text

This text references several important European Union regulations and directives: (9) EU Regulation 2019/1020 from June 20, 2019, which sets rules for monitoring product quality and ensuring products meet standards. It updates earlier regulations on product marketing. (10) EU Directive 85/374/EEC from July 25, 1985, which harmonizes laws across member states regarding who is responsible when defective products cause harm. (11) EU Regulation 2016/679 from April 27, 2016, known as the General Data Protection Regulation (GDPR), which protects people's personal information and allows data to move freely across the EU. It replaced an earlier directive from 1995. (12) EU Regulation 2018/1725 from October 23, 2018, which protects personal data handled by EU institutions and agencies. It replaced an older regulation from 2001 and a decision from 2002.

<p>of 9 July 2008 on a common framework for the <br /> marketing of products, and repealing Council Decision 93/465/EEC (OJ L 218, 13.8.2008, p. 82).<br /> (9)<br /> Regulation (EU) 2019/1020 of the European Parliament and of the Council of 20 June 2019 on market surveillance and compliance <br /> of products and amending Directive 2004/42/EC and Regulations (EC) No 765/2008 and (EU) No 305/2011 (OJ L 169, 25.6.2019, <br /> p. 1).<br /> (10)<br /> Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the <br /> Member States concerning liability for defective products (OJ L 210, 7.8.1985, p. 29).<br /> (11)<br /> Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons <br /> with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General <br /> Data Protection Regulation) (OJ L 119, 4.5.2016, p. 1).<br /> (12)<br /> Regulation (EU) 2018/1725 of the European Parliament and of the Council of 23 October 2018 on the protection of natural <br /> persons with regard to the processing of personal data by the Union institutions, bodies, offices and agencies and on the free <br /> movement of such data, and repealing Regulation (EC) No 45/2001 and Decision No 1247/2002/EC (OJ L 295, 21.11.2018, p. 39).</p>
Show original text

This regulation references several EU laws that protect personal data and privacy: Regulation (EU) 2018/1807 (which allows data movement and replaced earlier regulations from 2001 and 2002), Directive (EU) 2016/680 (which protects personal data used by law enforcement for criminal investigations), and Directive 2002/58/EC (which protects privacy in electronic communications). The regulation ensures that AI systems follow harmonized rules for market placement and use, allowing people to exercise their rights under EU data protection laws. These rights include protection against automated decision-making and profiling. The regulation does not affect the liability rules for online service providers established in Regulation (EU) 2022/2065.

<p>movement of such data, and repealing Regulation (EC) No 45/2001 and Decision No 1247/2002/EC (OJ L 295, 21.11.2018, p. 39).<br /> (13)<br /> Directive (EU) 2016/680 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with <br /> regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or <br /> prosecution of criminal offences or the execution of criminal penalties, and on the free movement of such data, and repealing <br /> Council Framework Decision 2008/977/JHA (OJ L 119, 4.5.2016, p. 89).<br /> (14)<br /> Directive 2002/58/EC of the European Parliament and of the Council of 12 July 2002 concerning the processing of personal data <br /> and the protection of privacy in the electronic communications sector (Directive on privacy and electronic communications) (OJ <br /> L 201, 31.7.2002, p. 37).</p> <p>rights and guarantees awarded to them by such Union law, including the rights related to solely automated individual <br /> decision-making, including profiling. Harmonised rules for the placing on the market, the putting into service and <br /> the use of AI systems established under this Regulation should facilitate the effective implementation and enable the <br /> exercise of the data subjects’ rights and other remedies guaranteed under Union law on the protection of personal <br /> data and of other fundamental rights.<br /> (11)<br /> This Regulation should be without prejudice to the provisions regarding the liability of providers of intermediary <br /> services as set out in Regulation (EU) 2022/2065 of the European Parliament and of the Council (15).</p>
Show original text

This regulation does not affect the rules about liability for intermediary service providers under EU Regulation 2022/2065. The term 'AI system' in this regulation must be clearly defined and aligned with international AI standards to ensure legal certainty and global consistency, while remaining flexible for rapid technological changes. The definition should focus on what makes AI systems different from basic software or programming—specifically, their ability to learn and make inferences. AI systems can produce outputs like predictions, recommendations, or decisions that affect physical and digital environments. They do this by learning from data (machine learning) or by reasoning from encoded knowledge (logic and knowledge-based approaches). This inference capability goes beyond simple data processing because it involves learning, reasoning, and creating models. The term 'machine-based' means AI systems operate on computers. AI systems can work toward either explicit (clearly stated) or implicit (unstated) objectives, which may differ from their intended purpose in a specific situation.

<p>other fundamental rights.<br /> (11)<br /> This Regulation should be without prejudice to the provisions regarding the liability of providers of intermediary <br /> services as set out in Regulation (EU) 2022/2065 of the European Parliament and of the Council (15).<br /> (12)<br /> The notion of ‘AI system’ in this Regulation should be clearly defined and should be closely aligned with the work of <br /> international organisations working on AI to ensure legal certainty, facilitate international convergence and wide <br /> acceptance, while providing the flexibility to accommodate the rapid technological developments in this field. <br /> Moreover, the definition should be based on key characteristics of AI systems that distinguish it from simpler <br /> traditional software systems or programming approaches and should not cover systems that are based on the rules <br /> defined solely by natural persons to automatically execute operations. A key characteristic of AI systems is their <br /> capability to infer. This capability to infer refers to the process of obtaining the outputs, such as predictions, content, <br /> recommendations, or decisions, which can influence physical and virtual environments, and to a capability of AI <br /> systems to derive models or algorithms, or both, from inputs or data. The techniques that enable inference while <br /> building an AI system include machine learning approaches that learn from data how to achieve certain objectives, <br /> and logic- and knowledge-based approaches that infer from encoded knowledge or symbolic representation of the <br /> task to be solved. The capacity of an AI system to infer transcends basic data processing by enabling learning, <br /> reasoning or modelling. The term ‘machine-based’ refers to the fact that AI systems run on machines. The reference <br /> to explicit or implicit objectives underscores that AI systems can operate according to explicit defined objectives or <br /> to implicit objectives. The objectives of the AI system may be different from the intended purpose of the AI system <br /> in a specific context.</p>
Show original text

AI systems can work toward goals that are either clearly stated or implied. These goals may differ from what the AI system was originally intended to do. The environments are the settings where AI systems operate, and their outputs include predictions, recommendations, decisions, and generated content. AI systems are designed with different levels of independence from human control—some need human input while others can work on their own. Some AI systems can learn and adapt after being deployed, changing their behavior while in use. AI systems can work alone or be part of a larger product, either built into it or working separately. A 'deployer' is any person or organization, including government bodies, that uses an AI system, except for personal, non-professional use. The deployer's use of the system may affect other people. 'Biometric data' in this regulation refers to the definition found in EU Regulation 2016/679 (Article 4, point 14), EU Regulation 2018/1725 (Article 3, point 18), and EU Directive 2016/680 (Article 3, point 13).

<p>machines. The reference <br /> to explicit or implicit objectives underscores that AI systems can operate according to explicit defined objectives or <br /> to implicit objectives. The objectives of the AI system may be different from the intended purpose of the AI system <br /> in a specific context. For the purposes of this Regulation, environments should be understood to be the contexts in <br /> which the AI systems operate, whereas outputs generated by the AI system reflect different functions performed by <br /> AI systems and include predictions, content, recommendations or decisions. AI systems are designed to operate with <br /> varying levels of autonomy, meaning that they have some degree of independence of actions from human <br /> involvement and of capabilities to operate without human intervention. The adaptiveness that an AI system could <br /> exhibit after deployment, refers to self-learning capabilities, allowing the system to change while in use. AI systems <br /> can be used on a stand-alone basis or as a component of a product, irrespective of whether the system is physically <br /> integrated into the product (embedded) or serves the functionality of the product without being integrated therein <br /> (non-embedded).<br /> (13)<br /> The notion of ‘deployer’ referred to in this Regulation should be interpreted as any natural or legal person, including <br /> a public authority, agency or other body, using an AI system under its authority, except where the AI system is used <br /> in the course of a personal non-professional activity. Depending on the type of AI system, the use of the system may <br /> affect persons other than the deployer.<br /> (14)<br /> The notion of ‘biometric data’ used in this Regulation should be interpreted in light of the notion of biometric data <br /> as defined in Article 4, point (14) of Regulation (EU) 2016/679, Article 3, point (18) of Regulation (EU) 2018/1725 <br /> and Article 3, point (13) of Directive (EU) 2016/680.</p>
Show original text

Biometric data includes physical, physiological, and behavioral characteristics such as facial features, eye movement, body shape, voice, gait, posture, heart rate, blood pressure, smell, and keystroke patterns. This data can be used to authenticate, identify, or categorize people, and to recognize emotions. Biometric identification is defined as the automated process of recognizing these human features to establish someone's identity by comparing their biometric data against a database of stored biometric data. This applies regardless of whether the person has consented. However, biometric verification systems are excluded from this definition. These systems are designed solely to confirm that a person is who they claim to be for purposes such as accessing a service, unlocking a device, or gaining security access to a location. This definition is based on EU Regulations 2016/679, 2018/1725, 2016/680, and 2022/2065.

<p>Regulation (EU) 2016/679, Article 3, point (18) of Regulation (EU) 2018/1725 <br /> and Article 3, point (13) of Directive (EU) 2016/680. Biometric data can allow for the authentication, identification <br /> or categorisation of natural persons and for the recognition of emotions of natural persons.<br /> (15)<br /> The notion of ‘biometric identification’ referred to in this Regulation should be defined as the automated recognition <br /> of physical, physiological and behavioural human features such as the face, eye movement, body shape, voice, <br /> prosody, gait, posture, heart rate, blood pressure, odour, keystrokes characteristics, for the purpose of establishing an <br /> individual’s identity by comparing biometric data of that individual to stored biometric data of individuals in <br /> a reference database, irrespective of whether the individual has given its consent or not. This excludes AI systems <br /> intended to be used for biometric verification, which includes authentication, whose sole purpose is to confirm that <br /> a specific natural person is the person he or she claims to be and to confirm the identity of a natural person for the <br /> sole purpose of having access to a service, unlocking a device or having security access to premises.<br /> EN<br /> OJ L, 12.7.2024<br /> 4/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> (15)<br /> Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market For Digital <br /> Services and amending Directive 2000/31/EC (Digital Services Act) (OJ L 277, 27.10.2022, p. 1).</p>
Show original text

On October 19, 2022, the Digital Services Act was adopted to regulate a single market for digital services and amend Directive 2000/31/EC (published in Official Journal L 277, October 27, 2022, page 1). The regulation defines 'biometric categorisation' as sorting people into specific groups based on their biometric data. These groups can be based on characteristics like sex, age, hair color, eye color, tattoos, behavior, personality traits, language, religion, minority status, sexual orientation, or political orientation. However, this definition does not apply to biometric categorisation features that are secondary parts of another main commercial service and cannot technically work without that main service. For example, filters on online shopping sites that categorize facial or body features to help customers preview products on themselves are considered secondary features because they only work within the shopping service. Similarly, filters on social media that let users modify photos or videos by categorizing facial or body features are secondary features because they depend on the main social media service of sharing content online.

<p>19 October 2022 on a Single Market For Digital <br /> Services and amending Directive 2000/31/EC (Digital Services Act) (OJ L 277, 27.10.2022, p. 1).</p> <p>(16)<br /> The notion of ‘biometric categorisation’ referred to in this Regulation should be defined as assigning natural persons <br /> to specific categories on the basis of their biometric data. Such specific categories can relate to aspects such as sex, <br /> age, hair colour, eye colour, tattoos, behavioural or personality traits, language, religion, membership of a national <br /> minority, sexual or political orientation. This does not include biometric categorisation systems that are a purely <br /> ancillary feature intrinsically linked to another commercial service, meaning that the feature cannot, for objective <br /> technical reasons, be used without the principal service, and the integration of that feature or functionality is not <br /> a means to circumvent the applicability of the rules of this Regulation. For example, filters categorising facial or body <br /> features used on online marketplaces could constitute such an ancillary feature as they can be used only in relation to <br /> the principal service which consists in selling a product by allowing the consumer to preview the display of the <br /> product on him or herself and help the consumer to make a purchase decision. Filters used on online social network <br /> services which categorise facial or body features to allow users to add or modify pictures or videos could also be <br /> considered to be ancillary feature as such filter cannot be used without the principal service of the social network <br /> services consisting in the sharing of content online.</p>
Show original text

Filters that modify facial or body features in social media are considered secondary features because they depend on the main service of sharing content online.

A 'remote biometric identification system' is an AI tool that identifies people without their knowledge or active participation, usually from a distance. It works by comparing a person's biometric data (like facial features or fingerprints) against a database of reference data. These systems can monitor multiple people at once to identify them without their consent.

However, this definition excludes biometric verification systems, which are used only to confirm that a specific person is who they claim to be. Examples include unlocking a device, accessing a service, or entering a secure location. These verification systems are excluded because they have less impact on people's rights compared to remote identification systems, which can process biometric data from many people without their knowledge.

'Real-time' systems capture biometric data, compare it, and identify people instantly or almost instantly, with no meaningful delay. Regulations should prevent companies from avoiding rules about real-time AI systems by introducing minor delays.

<p>ise facial or body features to allow users to add or modify pictures or videos could also be <br /> considered to be ancillary feature as such filter cannot be used without the principal service of the social network <br /> services consisting in the sharing of content online.<br /> (17)<br /> The notion of ‘remote biometric identification system’ referred to in this Regulation should be defined functionally, <br /> as an AI system intended for the identification of natural persons without their active involvement, typically at <br /> a distance, through the comparison of a person’s biometric data with the biometric data contained in a reference <br /> database, irrespectively of the particular technology, processes or types of biometric data used. Such remote <br /> biometric identification systems are typically used to perceive multiple persons or their behaviour simultaneously in <br /> order to facilitate significantly the identification of natural persons without their active involvement. This excludes <br /> AI systems intended to be used for biometric verification, which includes authentication, the sole purpose of which <br /> is to confirm that a specific natural person is the person he or she claims to be and to confirm the identity of <br /> a natural person for the sole purpose of having access to a service, unlocking a device or having security access to <br /> premises. That exclusion is justified by the fact that such systems are likely to have a minor impact on fundamental <br /> rights of natural persons compared to the remote biometric identification systems which may be used for the <br /> processing of the biometric data of a large number of persons without their active involvement. In the case of <br /> ‘real-time’ systems, the capturing of the biometric data, the comparison and the identification occur all <br /> instantaneously, near-instantaneously or in any event without a significant delay. In this regard, there should be no <br /> scope for circumventing the rules of this Regulation on the ‘real-time’ use of the AI systems concerned by providing <br /> for minor delays.</p>
Show original text

Real-time AI systems must work instantly or nearly instantly without significant delays. These systems use live or near-live material, such as video footage from cameras or similar devices. In contrast, post-event systems work with biometric data that was already collected. They compare and identify people only after a significant delay, using material like pictures or video footage from security cameras or private devices that was recorded before the system was used.

An emotion recognition system is an AI system designed to identify or infer the emotions or intentions of people based on their biometric data. These emotions include happiness, sadness, anger, surprise, disgust, embarrassment, excitement, shame, contempt, satisfaction, and amusement. However, emotion recognition systems do not include physical states like pain or fatigue. For example, systems that detect driver or pilot fatigue to prevent accidents are not emotion recognition systems. Additionally, simply detecting obvious facial expressions, gestures, or voice characteristics—such as a frown, smile, hand movements, or a raised voice—is not considered emotion recognition unless these signs are specifically used to identify or infer emotions.

<p>near-instantaneously or in any event without a significant delay. In this regard, there should be no <br /> scope for circumventing the rules of this Regulation on the ‘real-time’ use of the AI systems concerned by providing <br /> for minor delays. ‘Real-time’ systems involve the use of ‘live’ or ‘near-live’ material, such as video footage, generated <br /> by a camera or other device with similar functionality. In the case of ‘post’ systems, in contrast, the biometric data <br /> has already been captured and the comparison and identification occur only after a significant delay. This involves <br /> material, such as pictures or video footage generated by closed circuit television cameras or private devices, which <br /> has been generated before the use of the system in respect of the natural persons concerned.<br /> (18)<br /> The notion of ‘emotion recognition system’ referred to in this Regulation should be defined as an AI system for the <br /> purpose of identifying or inferring emotions or intentions of natural persons on the basis of their biometric data. <br /> The notion refers to emotions or intentions such as happiness, sadness, anger, surprise, disgust, embarrassment, <br /> excitement, shame, contempt, satisfaction and amusement. It does not include physical states, such as pain or <br /> fatigue, including, for example, systems used in detecting the state of fatigue of professional pilots or drivers for the <br /> purpose of preventing accidents. This does also not include the mere detection of readily apparent expressions, <br /> gestures or movements, unless they are used for identifying or inferring emotions. Those expressions can be basic <br /> facial expressions, such as a frown or a smile, or gestures such as the movement of hands, arms or head, or <br /> characteristics of a person’s voice, such as a raised voice or whispering.</p>
Show original text

Expressions include facial movements like frowns or smiles, hand or head gestures, and voice characteristics such as speaking loudly or softly. A publicly accessible space is any physical location that the general public can enter, regardless of whether it is privately or publicly owned. This includes spaces used for shopping (stores, restaurants, cafés), services (banks, offices, hotels), sports (pools, gyms, stadiums), transportation (train stations, airports, buses), entertainment (cinemas, theaters, museums), and leisure (parks, roads, playgrounds). A space is also considered publicly accessible if entry requires meeting certain conditions that anyone can fulfill, such as buying a ticket, registering in advance, or meeting an age requirement. However, a space is not publicly accessible if access is restricted to specific individuals by law for safety or security reasons, or if the person in charge has clearly stated that access is limited.

<p>expressions can be basic <br /> facial expressions, such as a frown or a smile, or gestures such as the movement of hands, arms or head, or <br /> characteristics of a person’s voice, such as a raised voice or whispering.<br /> (19)<br /> For the purposes of this Regulation the notion of ‘publicly accessible space’ should be understood as referring to any <br /> physical space that is accessible to an undetermined number of natural persons, and irrespective of whether the <br /> space in question is privately or publicly owned, irrespective of the activity for which the space may be used, such as <br /> for commerce, for example, shops, restaurants, cafés; for services, for example, banks, professional activities, <br /> hospitality; for sport, for example, swimming pools, gyms, stadiums; for transport, for example, bus, metro and <br /> railway stations, airports, means of transport; for entertainment, for example, cinemas, theatres, museums, concert <br /> and conference halls; or for leisure or otherwise, for example, public roads and squares, parks, forests, playgrounds. <br /> A space should also be classified as being publicly accessible if, regardless of potential capacity or security <br /> restrictions, access is subject to certain predetermined conditions which can be fulfilled by an undetermined number <br /> of persons, such as the purchase of a ticket or title of transport, prior registration or having a certain age. In contrast, <br /> a space should not be considered to be publicly accessible if access is limited to specific and defined natural persons <br /> through either Union or national law directly related to public safety or security or through the clear manifestation <br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 5/144</p> <p>of will by the person having the relevant authority over the space.</p>
Show original text

A space is only considered publicly accessible if people are allowed to enter it. Simply having an unlocked door or open gate does not make a space public if there are signs or other clear indications that access is restricted. Private spaces like company offices, factories, and workplaces are not publicly accessible. Prisons and border control areas are also excluded. Some locations, such as airport hallways or building entrances leading to private offices, contain both public and private areas. Online spaces are not included in this definition since they are not physical locations. Each situation must be evaluated individually based on its specific circumstances.

To maximize the benefits of AI systems while protecting people's rights, health, and safety, everyone involved needs AI literacy. This means providers, users, and affected people should understand how AI systems work and are used. This knowledge includes understanding how AI systems are developed, what safeguards are in place during use, how to correctly interpret AI results, and for those affected by AI decisions, understanding how those decisions will impact them.

<p>L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 5/144</p> <p>of will by the person having the relevant authority over the space. The factual possibility of access alone, such as an <br /> unlocked door or an open gate in a fence, does not imply that the space is publicly accessible in the presence of <br /> indications or circumstances suggesting the contrary, such as. signs prohibiting or restricting access. Company and <br /> factory premises, as well as offices and workplaces that are intended to be accessed only by relevant employees and <br /> service providers, are spaces that are not publicly accessible. Publicly accessible spaces should not include prisons or <br /> border control. Some other spaces may comprise both publicly accessible and non-publicly accessible spaces, such as <br /> the hallway of a private residential building necessary to access a doctor’s office or an airport. Online spaces are not <br /> covered, as they are not physical spaces. Whether a given space is accessible to the public should however be <br /> determined on a case-by-case basis, having regard to the specificities of the individual situation at hand.<br /> (20)<br /> In order to obtain the greatest benefits from AI systems while protecting fundamental rights, health and safety and to <br /> enable democratic control, AI literacy should equip providers, deployers and affected persons with the necessary <br /> notions to make informed decisions regarding AI systems. Those notions may vary with regard to the relevant <br /> context and can include understanding the correct application of technical elements during the AI system’s <br /> development phase, the measures to be applied during its use, the suitable ways in which to interpret the AI system’s <br /> output, and, in the case of affected persons, the knowledge necessary to understand how decisions taken with the <br /> assistance of AI will have an impact on them.</p>
Show original text

People need to understand how AI systems work, including how to interpret their outputs and how AI-assisted decisions will affect them. AI literacy helps everyone in the AI industry follow regulations correctly. Better AI education can improve working conditions and support trustworthy AI development in Europe. The European Artificial Intelligence Board should work with the Commission to promote AI literacy, raise public awareness, and explain the benefits, risks, and rules of AI systems. The Commission and Member States should create voluntary codes of conduct with stakeholders to improve AI literacy among developers, operators, and users. To protect people's rights fairly across Europe, these regulations must apply equally to all AI providers, whether based in the EU or elsewhere, and to all AI users in the EU. Some AI systems must follow these rules even if they are not sold or used in the EU, such as when an EU company hires a non-EU company to operate a high-risk AI system.

<p>applied during its use, the suitable ways in which to interpret the AI system’s <br /> output, and, in the case of affected persons, the knowledge necessary to understand how decisions taken with the <br /> assistance of AI will have an impact on them. In the context of the application this Regulation, AI literacy should <br /> provide all relevant actors in the AI value chain with the insights required to ensure the appropriate compliance and <br /> its correct enforcement. Furthermore, the wide implementation of AI literacy measures and the introduction of <br /> appropriate follow-up actions could contribute to improving working conditions and ultimately sustain the <br /> consolidation, and innovation path of trustworthy AI in the Union. The European Artificial Intelligence Board (the <br /> ‘Board’) should support the Commission, to promote AI literacy tools, public awareness and understanding of the <br /> benefits, risks, safeguards, rights and obligations in relation to the use of AI systems. In cooperation with the relevant <br /> stakeholders, the Commission and the Member States should facilitate the drawing up of voluntary codes of conduct <br /> to advance AI literacy among persons dealing with the development, operation and use of AI.<br /> (21)<br /> In order to ensure a level playing field and an effective protection of rights and freedoms of individuals across the <br /> Union, the rules established by this Regulation should apply to providers of AI systems in a non-discriminatory <br /> manner, irrespective of whether they are established within the Union or in a third country, and to deployers of AI <br /> systems established within the Union.<br /> (22)<br /> In light of their digital nature, certain AI systems should fall within the scope of this Regulation even when they are <br /> not placed on the market, put into service, or used in the Union. This is the case, for example, where an operator <br /> established in the Union contracts certain services to an operator established in a third country in relation to an <br /> activity to be performed by an AI system that would qualify as high-risk.</p>
Show original text

A company in the EU can hire a company outside the EU to use an AI system for high-risk activities. In this case, the outside company's AI system can process data from the EU and send results back to the EU company, without the AI system itself being used in the EU. To prevent companies from avoiding this regulation and to protect EU citizens, this regulation applies to AI providers and users outside the EU if their AI system's output is meant for use in the EU. However, there are exceptions for government agencies and international organizations from other countries that work with the EU on law enforcement and judicial matters. These exceptions apply only if the outside country or organization provides adequate protection of people's fundamental rights and freedoms. This cooperation is based on agreements made between EU Member States and other countries, or between the EU, Europol, and international organizations.

<p>Union. This is the case, for example, where an operator <br /> established in the Union contracts certain services to an operator established in a third country in relation to an <br /> activity to be performed by an AI system that would qualify as high-risk. In those circumstances, the AI system used <br /> in a third country by the operator could process data lawfully collected in and transferred from the Union, and <br /> provide to the contracting operator in the Union the output of that AI system resulting from that processing, <br /> without that AI system being placed on the market, put into service or used in the Union. To prevent the <br /> circumvention of this Regulation and to ensure an effective protection of natural persons located in the Union, this <br /> Regulation should also apply to providers and deployers of AI systems that are established in a third country, to the <br /> extent the output produced by those systems is intended to be used in the Union. Nonetheless, to take into account <br /> existing arrangements and special needs for future cooperation with foreign partners with whom information and <br /> evidence is exchanged, this Regulation should not apply to public authorities of a third country and international <br /> organisations when acting in the framework of cooperation or international agreements concluded at Union or <br /> national level for law enforcement and judicial cooperation with the Union or the Member States, provided that the <br /> relevant third country or international organisation provides adequate safeguards with respect to the protection of <br /> fundamental rights and freedoms of individuals. Where relevant, this may cover activities of entities entrusted by the <br /> third countries to carry out specific tasks in support of such law enforcement and judicial cooperation. Such <br /> framework for cooperation or agreements have been established bilaterally between Member States and third <br /> countries or between the European Union, Europol and other Union agencies and third countries and international <br /> organisations.</p>
Show original text

Law enforcement and judicial cooperation frameworks have been set up between EU Member States, third countries, Europol, other EU agencies, and international organizations. The authorities responsible for overseeing law enforcement and judicial bodies must check that these cooperation agreements properly protect people's fundamental rights and freedoms. When these agreements are used by national authorities and EU institutions, they remain responsible for ensuring compliance with EU law. Future revisions or new agreements should align with this Regulation's requirements. This Regulation applies to all EU institutions, bodies, offices, and agencies that provide or use AI systems. However, AI systems used for military, defense, or national security purposes are excluded from this Regulation, regardless of whether they are used by public or private entities.

<p>support of such law enforcement and judicial cooperation. Such <br /> framework for cooperation or agreements have been established bilaterally between Member States and third <br /> countries or between the European Union, Europol and other Union agencies and third countries and international <br /> organisations. The authorities competent for supervision of the law enforcement and judicial authorities under this <br /> Regulation should assess whether those frameworks for cooperation or international agreements include adequate <br /> safeguards with respect to the protection of fundamental rights and freedoms of individuals. Recipient national <br /> EN<br /> OJ L, 12.7.2024<br /> 6/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>authorities and Union institutions, bodies, offices and agencies making use of such outputs in the Union remain <br /> accountable to ensure their use complies with Union law. When those international agreements are revised or new <br /> ones are concluded in the future, the contracting parties should make utmost efforts to align those agreements with <br /> the requirements of this Regulation.<br /> (23)<br /> This Regulation should also apply to Union institutions, bodies, offices and agencies when acting as a provider or <br /> deployer of an AI system.<br /> (24)<br /> If, and insofar as, AI systems are placed on the market, put into service, or used with or without modification of such <br /> systems for military, defence or national security purposes, those should be excluded from the scope of this <br /> Regulation regardless of which type of entity is carrying out those activities, such as whether it is a public or private <br /> entity.</p>
Show original text

AI systems used for military, defense, or national security purposes are excluded from this Regulation, regardless of whether a public or private entity operates them. This exclusion is justified because these areas are governed by international law and remain the responsibility of individual Member States under Article 4(2) TEU. However, if an AI system originally developed for military, defense, or national security is later used for other purposes—such as civilian, humanitarian, law enforcement, or public security purposes—it must then comply with this Regulation. The entity using the system for these other purposes must ensure compliance, unless the system already meets the requirements. Additionally, AI systems designed for both excluded purposes (military, defense, national security) and non-excluded purposes (civilian, law enforcement) fall within the scope of this Regulation, and their providers must ensure full compliance.

<p>modification of such <br /> systems for military, defence or national security purposes, those should be excluded from the scope of this <br /> Regulation regardless of which type of entity is carrying out those activities, such as whether it is a public or private <br /> entity. As regards military and defence purposes, such exclusion is justified both by Article 4(2) TEU and by the <br /> specificities of the Member States’ and the common Union defence policy covered by Chapter 2 of Title V TEU that <br /> are subject to public international law, which is therefore the more appropriate legal framework for the regulation of <br /> AI systems in the context of the use of lethal force and other AI systems in the context of military and defence <br /> activities. As regards national security purposes, the exclusion is justified both by the fact that national security <br /> remains the sole responsibility of Member States in accordance with Article 4(2) TEU and by the specific nature and <br /> operational needs of national security activities and specific national rules applicable to those activities. Nonetheless, <br /> if an AI system developed, placed on the market, put into service or used for military, defence or national security <br /> purposes is used outside those temporarily or permanently for other purposes, for example, civilian or humanitarian <br /> purposes, law enforcement or public security purposes, such a system would fall within the scope of this Regulation. <br /> In that case, the entity using the AI system for other than military, defence or national security purposes should <br /> ensure the compliance of the AI system with this Regulation, unless the system is already compliant with this <br /> Regulation. AI systems placed on the market or put into service for an excluded purpose, namely military, defence or <br /> national security, and one or more non-excluded purposes, such as civilian purposes or law enforcement, fall within <br /> the scope of this Regulation and providers of those systems should ensure compliance with this Regulation.</p>
Show original text

AI systems that serve both excluded purposes (military, defense, national security) and non-excluded purposes (civilian use, law enforcement) must follow this Regulation. However, entities can still use AI systems for military, defense, and national security purposes without following this Regulation, even if those same systems are designed for civilian or law enforcement use. An AI system sold for civilian or law enforcement purposes remains outside this Regulation's scope if it is later used for military, defense, or national security purposes, regardless of who uses it.

This Regulation should encourage innovation and protect scientific freedom without hindering research and development. Therefore, AI systems and models created solely for scientific research are excluded from this Regulation. The Regulation also does not apply to research and development activities on AI systems before they are released to the market or put into use. However, once an AI system is placed on the market or put into service as a result of research and development, it must comply with this Regulation. This exclusion does not prevent the application of rules on AI regulatory sandboxes and real-world testing.

<p>an excluded purpose, namely military, defence or <br /> national security, and one or more non-excluded purposes, such as civilian purposes or law enforcement, fall within <br /> the scope of this Regulation and providers of those systems should ensure compliance with this Regulation. In those <br /> cases, the fact that an AI system may fall within the scope of this Regulation should not affect the possibility of <br /> entities carrying out national security, defence and military activities, regardless of the type of entity carrying out <br /> those activities, to use AI systems for national security, military and defence purposes, the use of which is excluded <br /> from the scope of this Regulation. An AI system placed on the market for civilian or law enforcement purposes <br /> which is used with or without modification for military, defence or national security purposes should not fall within <br /> the scope of this Regulation, regardless of the type of entity carrying out those activities.<br /> (25)<br /> This Regulation should support innovation, should respect freedom of science, and should not undermine research <br /> and development activity. It is therefore necessary to exclude from its scope AI systems and models specifically <br /> developed and put into service for the sole purpose of scientific research and development. Moreover, it is necessary <br /> to ensure that this Regulation does not otherwise affect scientific research and development activity on AI systems or <br /> models prior to being placed on the market or put into service. As regards product-oriented research, testing and <br /> development activity regarding AI systems or models, the provisions of this Regulation should also not apply prior <br /> to those systems and models being put into service or placed on the market. That exclusion is without prejudice to <br /> the obligation to comply with this Regulation where an AI system falling into the scope of this Regulation is placed <br /> on the market or put into service as a result of such research and development activity and to the application of <br /> provisions on AI regulatory sandboxes and testing in real world conditions.</p>
Show original text

AI systems used in research and development must follow this Regulation when they are released to the market or put into service. However, AI systems created only for scientific research are exempt. All other AI systems used in research must comply with this Regulation and follow ethical and professional standards for scientific research, as well as all applicable EU laws.

To regulate AI systems fairly and effectively, a risk-based approach should be used. This means the rules should match the level of risk that each AI system poses. Under this approach, certain dangerous AI practices will be banned, strict requirements will be set for high-risk AI systems, operators will have specific obligations, and some AI systems will need to be transparent about how they work.

While the risk-based approach is important for fair regulation, it is also valuable to remember the 2019 Ethics Guidelines for Trustworthy AI created by the independent AI High-Level Expert Group (AI HLEG) appointed by the European Commission. These guidelines include seven ethical principles for AI designed to help ensure that AI systems are trustworthy and ethically responsible.

<p>where an AI system falling into the scope of this Regulation is placed <br /> on the market or put into service as a result of such research and development activity and to the application of <br /> provisions on AI regulatory sandboxes and testing in real world conditions. Furthermore, without prejudice to the <br /> exclusion of AI systems specifically developed and put into service for the sole purpose of scientific research and <br /> development, any other AI system that may be used for the conduct of any research and development activity should <br /> remain subject to the provisions of this Regulation. In any event, any research and development activity should be <br /> carried out in accordance with recognised ethical and professional standards for scientific research and should be <br /> conducted in accordance with applicable Union law.<br /> (26)<br /> In order to introduce a proportionate and effective set of binding rules for AI systems, a clearly defined risk-based <br /> approach should be followed. That approach should tailor the type and content of such rules to the intensity and <br /> scope of the risks that AI systems can generate. It is therefore necessary to prohibit certain unacceptable AI practices, <br /> to lay down requirements for high-risk AI systems and obligations for the relevant operators, and to lay down <br /> transparency obligations for certain AI systems.<br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 7/144</p> <p>(27)<br /> While the risk-based approach is the basis for a proportionate and effective set of binding rules, it is important to <br /> recall the 2019 Ethics guidelines for trustworthy AI developed by the independent AI HLEG appointed by the <br /> Commission. In those guidelines, the AI HLEG developed seven non-binding ethical principles for AI which are <br /> intended to help ensure that AI is trustworthy and ethically sound.</p>
Show original text

The European Commission appointed an independent AI expert group (AI HLEG) to create guidelines for trustworthy AI. These guidelines include seven ethical principles designed to ensure AI systems are trustworthy and ethically responsible. The seven principles are: (1) Human agency and oversight — AI should serve people, respect human dignity, and remain under human control; (2) Technical robustness and safety — AI should be reliable, resistant to misuse, and minimize unintended harm; (3) Privacy and data governance — AI should follow privacy laws and use high-quality, trustworthy data; (4) Transparency — AI systems should be traceable and explainable, users should know they're interacting with AI, and people should understand the AI's capabilities and limitations; (5) Diversity, non-discrimination and fairness; (6) Societal and environmental well-being; and (7) Accountability. These guidelines support the development of AI that is human-centered, trustworthy, and aligned with European Union values and laws.

<p>guidelines for trustworthy AI developed by the independent AI HLEG appointed by the <br /> Commission. In those guidelines, the AI HLEG developed seven non-binding ethical principles for AI which are <br /> intended to help ensure that AI is trustworthy and ethically sound. The seven principles include human agency and <br /> oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination <br /> and fairness; societal and environmental well-being and accountability. Without prejudice to the legally binding <br /> requirements of this Regulation and any other applicable Union law, those guidelines contribute to the design of <br /> coherent, trustworthy and human-centric AI, in line with the Charter and with the values on which the Union is <br /> founded. According to the guidelines of the AI HLEG, human agency and oversight means that AI systems are <br /> developed and used as a tool that serves people, respects human dignity and personal autonomy, and that is <br /> functioning in a way that can be appropriately controlled and overseen by humans. Technical robustness and safety <br /> means that AI systems are developed and used in a way that allows robustness in the case of problems and resilience <br /> against attempts to alter the use or performance of the AI system so as to allow unlawful use by third parties, and <br /> minimise unintended harm. Privacy and data governance means that AI systems are developed and used in <br /> accordance with privacy and data protection rules, while processing data that meets high standards in terms of <br /> quality and integrity. Transparency means that AI systems are developed and used in a way that allows appropriate <br /> traceability and explainability, while making humans aware that they communicate or interact with an AI system, as <br /> well as duly informing deployers of the capabilities and limitations of that AI system and affected persons about their <br /> rights.</p>
Show original text

AI systems should be designed so that people know they're interacting with AI, understand what it can and cannot do, and are informed about their rights. Developers should ensure AI is fair, non-discriminatory, and accessible to everyone regardless of gender or cultural background. AI should also be developed responsibly to protect the environment and benefit society as a whole, with careful monitoring of its long-term effects on individuals, communities, and democracy. These principles should guide how AI models are built and used, and should form the basis for industry standards and codes of conduct. All organizations—including businesses, universities, civil society groups, and standards bodies—should help develop voluntary best practices based on these ethical guidelines. However, AI can be misused as a tool for manipulation, exploitation, and social control, which violates fundamental human rights like dignity, freedom, equality, democracy, privacy, and data protection. AI-powered manipulation techniques that trick people into making unwanted decisions or undermine their free choice should be prohibited.

<p>way that allows appropriate <br /> traceability and explainability, while making humans aware that they communicate or interact with an AI system, as <br /> well as duly informing deployers of the capabilities and limitations of that AI system and affected persons about their <br /> rights. Diversity, non-discrimination and fairness means that AI systems are developed and used in a way that <br /> includes diverse actors and promotes equal access, gender equality and cultural diversity, while avoiding <br /> discriminatory impacts and unfair biases that are prohibited by Union or national law. Social and environmental <br /> well-being means that AI systems are developed and used in a sustainable and environmentally friendly manner as <br /> well as in a way to benefit all human beings, while monitoring and assessing the long-term impacts on the <br /> individual, society and democracy. The application of those principles should be translated, when possible, in the <br /> design and use of AI models. They should in any case serve as a basis for the drafting of codes of conduct under this <br /> Regulation. All stakeholders, including industry, academia, civil society and standardisation organisations, are <br /> encouraged to take into account, as appropriate, the ethical principles for the development of voluntary best <br /> practices and standards.<br /> (28)<br /> Aside from the many beneficial uses of AI, it can also be misused and provide novel and powerful tools for <br /> manipulative, exploitative and social control practices. Such practices are particularly harmful and abusive and <br /> should be prohibited because they contradict Union values of respect for human dignity, freedom, equality, <br /> democracy and the rule of law and fundamental rights enshrined in the Charter, including the right to <br /> non-discrimination, to data protection and to privacy and the rights of the child.<br /> (29)<br /> AI-enabled manipulative techniques can be used to persuade persons to engage in unwanted behaviours, or to <br /> deceive them by nudging them into decisions in a way that subverts and impairs their autonomy, decision-making <br /> and free choices.</p>
Show original text

AI systems that use manipulative techniques to trick people into making unwanted decisions or choices should be banned. These systems can harm people's physical health, mental health, or finances. They work by using hidden stimuli like sounds, images, or videos that people cannot consciously perceive, or by using deceptive methods that people cannot resist or control, even if they notice them. Examples include brain-computer interfaces or virtual reality systems that can control what information people see. These AI systems are especially dangerous when they target vulnerable groups, such as children, people with disabilities, people in extreme poverty, or ethnic or religious minorities.

<p>29)<br /> AI-enabled manipulative techniques can be used to persuade persons to engage in unwanted behaviours, or to <br /> deceive them by nudging them into decisions in a way that subverts and impairs their autonomy, decision-making <br /> and free choices. The placing on the market, the putting into service or the use of certain AI systems with the <br /> objective to or the effect of materially distorting human behaviour, whereby significant harms, in particular having <br /> sufficiently important adverse impacts on physical, psychological health or financial interests are likely to occur, are <br /> particularly dangerous and should therefore be prohibited. Such AI systems deploy subliminal components such as <br /> audio, image, video stimuli that persons cannot perceive, as those stimuli are beyond human perception, or other <br /> manipulative or deceptive techniques that subvert or impair person’s autonomy, decision-making or free choice in <br /> ways that people are not consciously aware of those techniques or, where they are aware of them, can still be <br /> deceived or are not able to control or resist them. This could be facilitated, for example, by machine-brain interfaces <br /> or virtual reality as they allow for a higher degree of control of what stimuli are presented to persons, insofar as they <br /> may materially distort their behaviour in a significantly harmful manner. In addition, AI systems may also otherwise <br /> exploit the vulnerabilities of a person or a specific group of persons due to their age, disability within the meaning of <br /> Directive (EU) 2019/882 of the European Parliament and of the Council (16), or a specific social or economic <br /> situation that is likely to make those persons more vulnerable to exploitation such as persons living in extreme <br /> poverty, ethnic or religious minorities.</p>
Show original text

Certain AI systems are prohibited if they are designed to or likely to manipulate people's behavior in ways that cause serious harm. This applies especially to vulnerable groups, including people living in extreme poverty and ethnic or religious minorities. However, AI systems are not prohibited if the behavior distortion is caused by external factors beyond the provider's or deployer's control—factors that could not reasonably be foreseen or prevented. Importantly, the provider or deployer does not need to have intended to cause harm; the prohibition applies if manipulative or exploitative AI practices result in significant harm regardless of intent. This regulation is based on Directive (EU) 2019/882 regarding accessibility requirements for products and services.

<p>2019/882 of the European Parliament and of the Council (16), or a specific social or economic <br /> situation that is likely to make those persons more vulnerable to exploitation such as persons living in extreme <br /> poverty, ethnic or religious minorities. Such AI systems can be placed on the market, put into service or used with <br /> the objective to or the effect of materially distorting the behaviour of a person and in a manner that causes or is <br /> reasonably likely to cause significant harm to that or another person or groups of persons, including harms that may <br /> be accumulated over time and should therefore be prohibited. It may not be possible to assume that there is an <br /> EN<br /> OJ L, 12.7.2024<br /> 8/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> (16)<br /> Directive (EU) 2019/882 of the European Parliament and of the Council of 17 April 2019 on the accessibility requirements for <br /> products and services (OJ L 151, 7.6.2019, p. 70).</p> <p>intention to distort behaviour where the distortion results from factors external to the AI system which are outside <br /> the control of the provider or the deployer, namely factors that may not be reasonably foreseeable and therefore not <br /> possible for the provider or the deployer of the AI system to mitigate. In any case, it is not necessary for the provider <br /> or the deployer to have the intention to cause significant harm, provided that such harm results from the <br /> manipulative or exploitative AI-enabled practices.</p>
Show original text

AI providers and deployers don't need to intend harm for their systems to be prohibited—if manipulative or exploitative practices cause significant harm, they are still banned. These AI prohibitions work alongside existing European consumer protection laws that already ban unfair business practices causing financial harm, whether AI-based or not. However, legitimate medical treatments like psychological therapy or physical rehabilitation are allowed when they follow applicable laws and medical standards with proper consent. Similarly, lawful advertising and normal business practices that comply with the law are not considered harmful manipulative AI practices. Biometric systems that use facial recognition, fingerprints, or similar data to determine people's political views, religion, race, sexual orientation, or union membership are prohibited. This ban does not apply to lawful sorting or filtering of biometric data by characteristics like hair or eye color, such as systems used in law enforcement. AI systems that rate people's social behavior—whether run by governments or private companies—can create discrimination and unfairly exclude certain groups, which is a concern that needs addressing.

<p>of the AI system to mitigate. In any case, it is not necessary for the provider <br /> or the deployer to have the intention to cause significant harm, provided that such harm results from the <br /> manipulative or exploitative AI-enabled practices. The prohibitions for such AI practices are complementary to the <br /> provisions contained in Directive 2005/29/EC of the European Parliament and of the Council (17), in particular unfair <br /> commercial practices leading to economic or financial harms to consumers are prohibited under all circumstances, <br /> irrespective of whether they are put in place through AI systems or otherwise. The prohibitions of manipulative and <br /> exploitative practices in this Regulation should not affect lawful practices in the context of medical treatment such as <br /> psychological treatment of a mental disease or physical rehabilitation, when those practices are carried out in <br /> accordance with the applicable law and medical standards, for example explicit consent of the individuals or their <br /> legal representatives. In addition, common and legitimate commercial practices, for example in the field of <br /> advertising, that comply with the applicable law should not, in themselves, be regarded as constituting harmful <br /> manipulative AI-enabled practices.<br /> (30)<br /> Biometric categorisation systems that are based on natural persons’ biometric data, such as an individual person’s <br /> face or fingerprint, to deduce or infer an individuals’ political opinions, trade union membership, religious or <br /> philosophical beliefs, race, sex life or sexual orientation should be prohibited. That prohibition should not cover the <br /> lawful labelling, filtering or categorisation of biometric data sets acquired in line with Union or national law <br /> according to biometric data, such as the sorting of images according to hair colour or eye colour, which can for <br /> example be used in the area of law enforcement.<br /> (31)<br /> AI systems providing social scoring of natural persons by public or private actors may lead to discriminatory <br /> outcomes and the exclusion of certain groups.</p>
Show original text

AI systems that assign social scores to people can create unfair treatment and exclude certain groups. These systems violate basic rights like dignity, non-discrimination, equality, and justice. They work by analyzing multiple data points about a person's behavior across different situations and their personal characteristics over time. A low social score can lead to harmful or unfavorable treatment in unrelated areas of life, or punishment that is too severe for the behavior. AI systems that use such unfair scoring methods should be banned. However, this ban does not apply to legitimate evaluations of people done for specific legal purposes. Additionally, using AI to identify people by their faces in real-time in public spaces for law enforcement is highly invasive. It affects many people's privacy, creates a sense of constant surveillance, and discourages people from exercising freedoms like assembly. AI facial recognition systems can also produce inaccurate and biased results that discriminate against people based on age, ethnicity, race, sex, or disabilities.

<p>hair colour or eye colour, which can for <br /> example be used in the area of law enforcement.<br /> (31)<br /> AI systems providing social scoring of natural persons by public or private actors may lead to discriminatory <br /> outcomes and the exclusion of certain groups. They may violate the right to dignity and non-discrimination and the <br /> values of equality and justice. Such AI systems evaluate or classify natural persons or groups thereof on the basis of <br /> multiple data points related to their social behaviour in multiple contexts or known, inferred or predicted personal <br /> or personality characteristics over certain periods of time. The social score obtained from such AI systems may lead <br /> to the detrimental or unfavourable treatment of natural persons or whole groups thereof in social contexts, which <br /> are unrelated to the context in which the data was originally generated or collected or to a detrimental treatment that <br /> is disproportionate or unjustified to the gravity of their social behaviour. AI systems entailing such unacceptable <br /> scoring practices and leading to such detrimental or unfavourable outcomes should therefore be prohibited. That <br /> prohibition should not affect lawful evaluation practices of natural persons that are carried out for a specific purpose <br /> in accordance with Union and national law.<br /> (32)<br /> The use of AI systems for ‘real-time’ remote biometric identification of natural persons in publicly accessible spaces <br /> for the purpose of law enforcement is particularly intrusive to the rights and freedoms of the concerned persons, to <br /> the extent that it may affect the private life of a large part of the population, evoke a feeling of constant surveillance <br /> and indirectly dissuade the exercise of the freedom of assembly and other fundamental rights. Technical inaccuracies <br /> of AI systems intended for the remote biometric identification of natural persons can lead to biased results and entail <br /> discriminatory effects. Such possible biased results and discriminatory effects are particularly relevant with regard to <br /> age, ethnicity, race, sex or disabilities.</p>
Show original text

AI systems that identify people by their biological features from a distance can produce unfair results and cause discrimination. This is especially concerning for people of different ages, ethnicities, races, genders, or those with disabilities. These systems are particularly risky because they make instant decisions with little chance for review or correction, which threatens people's rights and freedoms, especially in law enforcement situations.

For this reason, law enforcement should generally be banned from using these systems. However, there are a few specific exceptions where use is allowed if it serves an important public need that outweighs the risks. These exceptions include: searching for crime victims and missing persons, responding to serious threats to people's safety or terrorist attacks, and finding or identifying people suspected of serious crimes (those punishable by imprisonment) listed in the regulation.

<p>AI systems intended for the remote biometric identification of natural persons can lead to biased results and entail <br /> discriminatory effects. Such possible biased results and discriminatory effects are particularly relevant with regard to <br /> age, ethnicity, race, sex or disabilities. In addition, the immediacy of the impact and the limited opportunities for <br /> further checks or corrections in relation to the use of such systems operating in real-time carry heightened risks for <br /> the rights and freedoms of the persons concerned in the context of, or impacted by, law enforcement activities.<br /> (33)<br /> The use of those systems for the purpose of law enforcement should therefore be prohibited, except in exhaustively <br /> listed and narrowly defined situations, where the use is strictly necessary to achieve a substantial public interest, the <br /> importance of which outweighs the risks. Those situations involve the search for certain victims of crime including <br /> missing persons; certain threats to the life or to the physical safety of natural persons or of a terrorist attack; and the <br /> localisation or identification of perpetrators or suspects of the criminal offences listed in an annex to this Regulation, <br /> where those criminal offences are punishable in the Member State concerned by a custodial sentence or a detention <br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 9/144<br /> (17)<br /> Directive 2005/29/EC of the European Parliament and of the Council of 11 May 2005 concerning unfair business-to-consumer <br /> commercial practices in the internal market and amending Council Directive 84/450/EEC, Directives 97/7/EC, 98/27/EC and <br /> 2002/65/EC of the European Parliament and of the Council and Regulation (EC) No 2006/2004 of the European Parliament and of <br /> the Council (‘Unfair Commercial Practices Directive’) (OJ L 149, 11</p>
Show original text

This regulation is based on the Unfair Commercial Practices Directive (2005/65/EC and 2006/2004/EC). It allows authorities to use real-time remote biometric identification systems only for serious crimes. A crime must carry a prison sentence of at least four years under national law to qualify. The regulation lists 32 criminal offences from the Council Framework Decision 2002/584/JHA. However, not all offences are equally serious or require this technology equally. The use of biometric identification must be necessary and proportionate based on how likely it is needed to find or identify suspects and the severity of potential harm. The regulation also covers imminent threats to life or physical safety caused by serious damage to critical infrastructure, as defined in Directive (EU) 2022/2557. This includes infrastructure damage that threatens basic supplies to the public or core government functions.

<p>/65/EC of the European Parliament and of the Council and Regulation (EC) No 2006/2004 of the European Parliament and of <br /> the Council (‘Unfair Commercial Practices Directive’) (OJ L 149, 11.6.2005, p. 22).</p> <p>order for a maximum period of at least four years and as they are defined in the law of that Member State. Such <br /> a threshold for the custodial sentence or detention order in accordance with national law contributes to ensuring <br /> that the offence should be serious enough to potentially justify the use of ‘real-time’ remote biometric identification <br /> systems. Moreover, the list of criminal offences provided in an annex to this Regulation is based on the 32 criminal <br /> offences listed in the Council Framework Decision 2002/584/JHA (18), taking into account that some of those <br /> offences are, in practice, likely to be more relevant than others, in that the recourse to ‘real-time’ remote biometric <br /> identification could, foreseeably, be necessary and proportionate to highly varying degrees for the practical pursuit <br /> of the localisation or identification of a perpetrator or suspect of the different criminal offences listed and having <br /> regard to the likely differences in the seriousness, probability and scale of the harm or possible negative <br /> consequences. An imminent threat to life or the physical safety of natural persons could also result from a serious <br /> disruption of critical infrastructure, as defined in Article 2, point (4) of Directive (EU) 2022/2557 of the European <br /> Parliament and of the Council (19), where the disruption or destruction of such critical infrastructure would result in <br /> an imminent threat to life or the physical safety of a person, including through serious harm to the provision of basic <br /> supplies to the population or to the exercise of the core function of the State.</p>
Show original text

Damaging critical infrastructure could immediately threaten people's lives or safety, including by disrupting essential supplies or government services. Law enforcement, border control, immigration, and asylum authorities must be able to check people's identities in person according to Union and national laws. These authorities can use information systems to identify people who refuse to provide their identity or cannot prove it during identity checks, without needing prior permission from this Regulation. This applies to situations like identifying crime suspects or helping people unable to identify themselves due to accidents or medical conditions. To ensure these systems are used responsibly and fairly, authorities must consider specific factors in each situation: the nature of the problem, how the system affects people's rights and freedoms, and what protections are in place. When law enforcement uses real-time biometric identification systems (like facial recognition) in public spaces, they should only use them to confirm a specific person's identity. This use must be limited to what is absolutely necessary in terms of time period, geographic area, and number of people involved, based on evidence or information about threats, victims, or suspects.

<p>or destruction of such critical infrastructure would result in <br /> an imminent threat to life or the physical safety of a person, including through serious harm to the provision of basic <br /> supplies to the population or to the exercise of the core function of the State. In addition, this Regulation should <br /> preserve the ability for law enforcement, border control, immigration or asylum authorities to carry out identity <br /> checks in the presence of the person concerned in accordance with the conditions set out in Union and national law <br /> for such checks. In particular, law enforcement, border control, immigration or asylum authorities should be able to <br /> use information systems, in accordance with Union or national law, to identify persons who, during an identity <br /> check, either refuse to be identified or are unable to state or prove their identity, without being required by this <br /> Regulation to obtain prior authorisation. This could be, for example, a person involved in a crime, being unwilling, <br /> or unable due to an accident or a medical condition, to disclose their identity to law enforcement authorities.<br /> (34)<br /> In order to ensure that those systems are used in a responsible and proportionate manner, it is also important to <br /> establish that, in each of those exhaustively listed and narrowly defined situations, certain elements should be taken <br /> into account, in particular as regards the nature of the situation giving rise to the request and the consequences of <br /> the use for the rights and freedoms of all persons concerned and the safeguards and conditions provided for with the <br /> use. In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the <br /> purpose of law enforcement should be deployed only to confirm the specifically targeted individual’s identity and <br /> should be limited to what is strictly necessary concerning the period of time, as well as the geographic and personal <br /> scope, having regard in particular to the evidence or indications regarding the threats, the victims or perpetrator.</p>
Show original text

Real-time biometric identification systems used by law enforcement in public spaces must be carefully controlled. These systems should only collect information that is strictly necessary about specific individuals, limited to relevant time periods and locations. Before using such a system, law enforcement must complete a rights impact assessment and register it in an official database. The reference database used must be appropriate for each specific use case.

Law enforcement must obtain permission from a court or independent government authority before using real-time biometric identification systems in public spaces. However, exceptions exist in urgent situations where waiting for permission would be impossible or impractical. In emergencies, law enforcement can use the system with minimal scope and appropriate safeguards as defined by national law. Even in urgent cases, law enforcement must request official permission without delay, and no later than 24 hours after starting to use the system. They must explain why they could not request permission beforehand.

<p>the specifically targeted individual’s identity and <br /> should be limited to what is strictly necessary concerning the period of time, as well as the geographic and personal <br /> scope, having regard in particular to the evidence or indications regarding the threats, the victims or perpetrator. The <br /> use of the real-time remote biometric identification system in publicly accessible spaces should be authorised only if <br /> the relevant law enforcement authority has completed a fundamental rights impact assessment and, unless provided <br /> otherwise in this Regulation, has registered the system in the database as set out in this Regulation. The reference <br /> database of persons should be appropriate for each use case in each of the situations mentioned above.<br /> (35)<br /> Each use of a ‘real-time’ remote biometric identification system in publicly accessible spaces for the purpose of law <br /> enforcement should be subject to an express and specific authorisation by a judicial authority or by an independent <br /> administrative authority of a Member State whose decision is binding. Such authorisation should, in principle, be <br /> obtained prior to the use of the AI system with a view to identifying a person or persons. Exceptions to that rule <br /> should be allowed in duly justified situations on grounds of urgency, namely in situations where the need to use the <br /> systems concerned is such as to make it effectively and objectively impossible to obtain an authorisation before <br /> commencing the use of the AI system. In such situations of urgency, the use of the AI system should be restricted to <br /> the absolute minimum necessary and should be subject to appropriate safeguards and conditions, as determined in <br /> national law and specified in the context of each individual urgent use case by the law enforcement authority itself. <br /> In addition, the law enforcement authority should in such situations request such authorisation while providing the <br /> reasons for not having been able to request it earlier, without undue delay and at the latest within 24 hours.</p>
Show original text

Law enforcement must request authorization for using real-time biometric identification systems, explaining why they couldn't request it earlier. This request must be made without unnecessary delay, within 24 hours at the latest. If authorization is denied, the system must stop immediately and all related data must be deleted. This includes any information the AI system collected and any results it produced from that authorization. However, data that was legally obtained under other EU or national laws does not need to be deleted. Importantly, no decision that negatively affects a person can be based solely on the output of a remote biometric identification system. Market surveillance authorities and national data protection authorities must be informed whenever a real-time biometric identification system is used.

<p>the law enforcement authority itself. <br /> In addition, the law enforcement authority should in such situations request such authorisation while providing the <br /> reasons for not having been able to request it earlier, without undue delay and at the latest within 24 hours. If such <br /> an authorisation is rejected, the use of real-time biometric identification systems linked to that authorisation should <br /> cease with immediate effect and all the data related to such use should be discarded and deleted. Such data includes <br /> EN<br /> OJ L, 12.7.2024<br /> 10/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> (18)<br /> Council Framework Decision 2002/584/JHA of 13 June 2002 on the European arrest warrant and the surrender procedures <br /> between Member States (OJ L 190, 18.7.2002, p. 1).<br /> (19)<br /> Directive (EU) 2022/2557 of the European Parliament and of the Council of 14 December 2022 on the resilience of critical entities <br /> and repealing Council Directive 2008/114/EC (OJ L 333, 27.12.2022, p. 164).</p> <p>input data directly acquired by an AI system in the course of the use of such system as well as the results and outputs <br /> of the use linked to that authorisation. It should not include input that is legally acquired in accordance with another <br /> Union or national law. In any case, no decision producing an adverse legal effect on a person should be taken based <br /> solely on the output of the remote biometric identification system.<br /> (36)<br /> In order to carry out their tasks in accordance with the requirements set out in this Regulation as well as in national <br /> rules, the relevant market surveillance authority and the national data protection authority should be notified of each <br /> use of the real-time biometric identification system.</p>
Show original text

Authorities must notify both market surveillance and data protection authorities whenever a real-time biometric identification system is used. These authorities must then submit yearly reports to the Commission about how these systems are being used. Each Member State can decide whether to allow this technology within its borders and under what conditions. If a Member State chooses to permit it, the rules can vary—some states may allow it for all purposes while others may restrict it to specific objectives. Member States must inform the Commission of their national rules within 30 days of adopting them. When AI systems are used for real-time biometric identification of people in public spaces for law enforcement purposes, biometric data is processed. The rules in this Regulation that restrict such use take priority over the biometric data processing rules in EU Directive 2016/680, providing complete oversight of how this technology and data can be used.

<p>to carry out their tasks in accordance with the requirements set out in this Regulation as well as in national <br /> rules, the relevant market surveillance authority and the national data protection authority should be notified of each <br /> use of the real-time biometric identification system. Market surveillance authorities and the national data protection <br /> authorities that have been notified should submit to the Commission an annual report on the use of real-time <br /> biometric identification systems.<br /> (37)<br /> Furthermore, it is appropriate to provide, within the exhaustive framework set by this Regulation that such use in <br /> the territory of a Member State in accordance with this Regulation should only be possible where and in as far as the <br /> Member State concerned has decided to expressly provide for the possibility to authorise such use in its detailed rules <br /> of national law. Consequently, Member States remain free under this Regulation not to provide for such a possibility <br /> at all or to only provide for such a possibility in respect of some of the objectives capable of justifying authorised use <br /> identified in this Regulation. Such national rules should be notified to the Commission within 30 days of their <br /> adoption.<br /> (38)<br /> The use of AI systems for real-time remote biometric identification of natural persons in publicly accessible spaces <br /> for the purpose of law enforcement necessarily involves the processing of biometric data. The rules of this <br /> Regulation that prohibit, subject to certain exceptions, such use, which are based on Article 16 TFEU, should apply <br /> as lex specialis in respect of the rules on the processing of biometric data contained in Article 10 of Directive (EU) <br /> 2016/680, thus regulating such use and the processing of biometric data involved in an exhaustive manner.</p>
Show original text

This Regulation acts as a special law that fully controls how biometric data is processed, replacing the general rules in Article 10 of Directive (EU) 2016/680. Law enforcement authorities can only use biometric systems and process biometric data if it follows the rules set by this Regulation. This Regulation does not provide a separate legal basis for processing personal data under Article 8 of Directive (EU) 2016/680. However, when real-time biometric identification systems are used in public spaces for purposes other than law enforcement (including by government authorities), they are not covered by this Regulation's law enforcement framework. Such non-law enforcement uses do not require authorization under this Regulation and are instead governed by applicable national laws. All other uses of biometric data and personal data in AI-based biometric identification systems—except for real-time remote biometric identification in public spaces for law enforcement purposes—must continue to follow all requirements in Article 10 of Directive (EU) 2016/680.

<p>as lex specialis in respect of the rules on the processing of biometric data contained in Article 10 of Directive (EU) <br /> 2016/680, thus regulating such use and the processing of biometric data involved in an exhaustive manner. <br /> Therefore, such use and processing should be possible only in as far as it is compatible with the framework set by <br /> this Regulation, without there being scope, outside that framework, for the competent authorities, where they act for <br /> purpose of law enforcement, to use such systems and process such data in connection thereto on the grounds listed <br /> in Article 10 of Directive (EU) 2016/680. In that context, this Regulation is not intended to provide the legal basis <br /> for the processing of personal data under Article 8 of Directive (EU) 2016/680. However, the use of real-time remote <br /> biometric identification systems in publicly accessible spaces for purposes other than law enforcement, including by <br /> competent authorities, should not be covered by the specific framework regarding such use for the purpose of law <br /> enforcement set by this Regulation. Such use for purposes other than law enforcement should therefore not be <br /> subject to the requirement of an authorisation under this Regulation and the applicable detailed rules of national law <br /> that may give effect to that authorisation.<br /> (39)<br /> Any processing of biometric data and other personal data involved in the use of AI systems for biometric <br /> identification, other than in connection to the use of real-time remote biometric identification systems in publicly <br /> accessible spaces for the purpose of law enforcement as regulated by this Regulation, should continue to comply <br /> with all requirements resulting from Article 10 of Directive (EU) 2016/680.</p>
Show original text

When law enforcement uses real-time biometric identification systems in public spaces, they must follow all requirements in Article 10 of Directive (EU) 2016/680. For non-law enforcement purposes, Articles 9(1) of Regulation (EU) 2016/679 and 10(1) of Regulation (EU) 2018/1725 ban the processing of biometric data, with only limited exceptions allowed. National data protection authorities have already prohibited remote biometric identification for non-law enforcement uses under these regulations. According to Article 6a of Protocol No 21 regarding the United Kingdom and Ireland's position on freedom, security, and justice, Ireland is not bound by certain rules in this Regulation. Specifically, Ireland is exempt from: Article 5(1) point (g) regarding biometric categorization systems used in police and criminal justice cooperation; Article 5(1) point (d) regarding certain AI systems; Article 5(1) point (h); Articles 5(2) through 5(6); and Article 26(10). These exemptions apply to Ireland's processing of personal data in activities covered by Chapters 4 and 5 of Title V, Part Three of the TFEU, where Ireland is not bound by criminal justice cooperation rules.

<p>use of real-time remote biometric identification systems in publicly <br /> accessible spaces for the purpose of law enforcement as regulated by this Regulation, should continue to comply <br /> with all requirements resulting from Article 10 of Directive (EU) 2016/680. For purposes other than law <br /> enforcement, Article 9(1) of Regulation (EU) 2016/679 and Article 10(1) of Regulation (EU) 2018/1725 prohibit the <br /> processing of biometric data subject to limited exceptions as provided in those Articles. In the application of Article <br /> 9(1) of Regulation (EU) 2016/679, the use of remote biometric identification for purposes other than law <br /> enforcement has already been subject to prohibition decisions by national data protection authorities.<br /> (40)<br /> In accordance with Article 6a of Protocol No 21 on the position of the United Kingdom and Ireland in respect of the <br /> area of freedom, security and justice, as annexed to the TEU and to the TFEU, Ireland is not bound by the rules laid <br /> down in Article 5(1), first subparagraph, point (g), to the extent it applies to the use of biometric categorisation <br /> systems for activities in the field of police cooperation and judicial cooperation in criminal matters, Article 5(1), first <br /> subparagraph, point (d), to the extent it applies to the use of AI systems covered by that provision, Article 5(1), first <br /> subparagraph, point (h), Article 5(2) to (6) and Article 26(10) of this Regulation adopted on the basis of Article 16 <br /> TFEU which relate to the processing of personal data by the Member States when carrying out activities falling <br /> within the scope of Chapter 4 or Chapter 5 of Title V of Part Three of the TFEU, where Ireland is not bound by the <br /> rules governing the forms of judicial cooperation in criminal</p>
Show original text

Ireland and Denmark have special exemptions from certain EU rules. Ireland is not required to follow rules about how courts and police cooperate in criminal cases. Denmark is not bound by specific rules about using biometric systems and artificial intelligence in police and criminal justice cooperation, particularly regarding how personal data is processed. These exemptions apply to activities under Chapters 4 and 5 of Title V in Part Three of the Treaty on the Functioning of the European Union. Additionally, all people in the EU have the right to be presumed innocent and should only be judged based on their actual actions.

<p>by the Member States when carrying out activities falling <br /> within the scope of Chapter 4 or Chapter 5 of Title V of Part Three of the TFEU, where Ireland is not bound by the <br /> rules governing the forms of judicial cooperation in criminal matters or police cooperation which require <br /> compliance with the provisions laid down on the basis of Article 16 TFEU.<br /> (41)<br /> In accordance with Articles 2 and 2a of Protocol No 22 on the position of Denmark, annexed to the TEU and to the <br /> TFEU, Denmark is not bound by rules laid down in Article 5(1), first subparagraph, point (g), to the extent it applies <br /> to the use of biometric categorisation systems for activities in the field of police cooperation and judicial cooperation <br /> in criminal matters, Article 5(1), first subparagraph, point (d), to the extent it applies to the use of AI systems <br /> covered by that provision, Article 5(1), first subparagraph, point (h), (2) to (6) and Article 26(10) of this Regulation <br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 11/144</p> <p>adopted on the basis of Article 16 TFEU, or subject to their application, which relate to the processing of personal <br /> data by the Member States when carrying out activities falling within the scope of Chapter 4 or Chapter 5 of <br /> Title V of Part Three of the TFEU.<br /> (42)<br /> In line with the presumption of innocence, natural persons in the Union should always be judged on their actual <br /> behaviour.</p>
Show original text

People in the EU must be judged based on their actual actions, not on AI predictions about their behavior. AI systems cannot make decisions about someone based only on their profile, personality traits, or personal characteristics (such as nationality, birthplace, number of children, or debt level) unless there is real evidence they committed a crime and a human has reviewed the case. Therefore, AI systems that predict whether someone will commit a crime based solely on profiling or personality analysis are banned. However, this ban does not apply to AI systems that analyze risk in other ways, such as detecting fraud by businesses through suspicious transactions or helping customs officials locate illegal drugs based on known trafficking patterns. Additionally, AI systems that create facial recognition databases by collecting facial images from the internet or security cameras without a specific target are prohibited because they increase surveillance concerns and can violate people's privacy rights. Finally, there are serious concerns about AI systems designed to identify or interpret human emotions, since emotions are expressed differently across cultures, situations, and even within the same person.

<p>the scope of Chapter 4 or Chapter 5 of <br /> Title V of Part Three of the TFEU.<br /> (42)<br /> In line with the presumption of innocence, natural persons in the Union should always be judged on their actual <br /> behaviour. Natural persons should never be judged on AI-predicted behaviour based solely on their profiling, <br /> personality traits or characteristics, such as nationality, place of birth, place of residence, number of children, level of <br /> debt or type of car, without a reasonable suspicion of that person being involved in a criminal activity based on <br /> objective verifiable facts and without human assessment thereof. Therefore, risk assessments carried out with regard <br /> to natural persons in order to assess the likelihood of their offending or to predict the occurrence of an actual or <br /> potential criminal offence based solely on profiling them or on assessing their personality traits and characteristics <br /> should be prohibited. In any case, that prohibition does not refer to or touch upon risk analytics that are not based <br /> on the profiling of individuals or on the personality traits and characteristics of individuals, such as AI systems using <br /> risk analytics to assess the likelihood of financial fraud by undertakings on the basis of suspicious transactions or <br /> risk analytic tools to predict the likelihood of the localisation of narcotics or illicit goods by customs authorities, for <br /> example on the basis of known trafficking routes.<br /> (43)<br /> The placing on the market, the putting into service for that specific purpose, or the use of AI systems that create or <br /> expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV <br /> footage, should be prohibited because that practice adds to the feeling of mass surveillance and can lead to gross <br /> violations of fundamental rights, including the right to privacy.<br /> (44)<br /> There are serious concerns about the scientific basis of AI systems aiming to identify or infer emotions, particularly <br /> as expression of emotions vary considerably across cultures and situations, and even within a single individual.</p>
Show original text

AI systems that try to detect or guess people's emotions based on their physical features have serious problems. Emotions are expressed differently depending on culture, situation, and individual, so these systems are often unreliable and don't work well across different groups of people. This can lead to unfair treatment and violate people's privacy rights. In workplaces and schools, where there is already a power imbalance, these systems could harm certain individuals or groups. Therefore, AI systems designed to detect emotions in work and education settings should be banned. However, this ban does not apply to medical or safety systems, such as those used for therapy. Other laws already protect people in areas like data protection, non-discrimination, consumer protection, and competition—this regulation does not change those protections. High-risk AI systems can only be sold, used, or put into service in the EU if they meet strict safety requirements to ensure they do not create unacceptable risks to important public interests protected by EU law.

<p>fundamental rights, including the right to privacy.<br /> (44)<br /> There are serious concerns about the scientific basis of AI systems aiming to identify or infer emotions, particularly <br /> as expression of emotions vary considerably across cultures and situations, and even within a single individual. <br /> Among the key shortcomings of such systems are the limited reliability, the lack of specificity and the limited <br /> generalisability. Therefore, AI systems identifying or inferring emotions or intentions of natural persons on the basis <br /> of their biometric data may lead to discriminatory outcomes and can be intrusive to the rights and freedoms of the <br /> concerned persons. Considering the imbalance of power in the context of work or education, combined with the <br /> intrusive nature of these systems, such systems could lead to detrimental or unfavourable treatment of certain <br /> natural persons or whole groups thereof. Therefore, the placing on the market, the putting into service, or the use of <br /> AI systems intended to be used to detect the emotional state of individuals in situations related to the workplace and <br /> education should be prohibited. That prohibition should not cover AI systems placed on the market strictly for <br /> medical or safety reasons, such as systems intended for therapeutical use.<br /> (45)<br /> Practices that are prohibited by Union law, including data protection law, non-discrimination law, consumer <br /> protection law, and competition law, should not be affected by this Regulation.<br /> (46)<br /> High-risk AI systems should only be placed on the Union market, put into service or used if they comply with <br /> certain mandatory requirements. Those requirements should ensure that high-risk AI systems available in the Union <br /> or whose output is otherwise used in the Union do not pose unacceptable risks to important Union public interests <br /> as recognised and protected by Union law.</p>
Show original text

High-risk AI systems used in the European Union must meet specific mandatory requirements. These requirements are designed to protect important public interests recognized by EU law by preventing unacceptable risks. Following the New Legislative Framework and the Commission's 2022 Blue Guide on EU product rules, multiple EU laws can apply to a single product. For example, EU Regulations 2017/745 (medical devices) and 2017/746, as well as EU Directive 2006/42/EC, may all apply when a product is made available or put into service.

<p>comply with <br /> certain mandatory requirements. Those requirements should ensure that high-risk AI systems available in the Union <br /> or whose output is otherwise used in the Union do not pose unacceptable risks to important Union public interests <br /> as recognised and protected by Union law. On the basis of the New Legislative Framework, as clarified in the <br /> Commission notice ‘The “Blue Guide” on the implementation of EU product rules 2022’ (20), the general rule is that <br /> more than one legal act of Union harmonisation legislation, such as Regulations (EU) 2017/745 (21) and (EU) <br /> 2017/746 (22) of the European Parliament and of the Council or Directive 2006/42/EC of the European Parliament <br /> and of the Council (23), may be applicable to one product, since the making available or putting into service can take <br /> EN<br /> OJ L, 12.7.2024<br /> 12/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> (20)<br /> OJ C 247, 29.6.2022, p. 1.<br /> (21)<br /> Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive <br /> 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and <br /> 93/42/EEC (OJ L 117, 5.5.2017, p. 1).</p>
Show original text

This text discusses EU regulations for medical devices and AI systems. Key regulations mentioned include: Regulation (EU) 2017/746 from April 5, 2017 about in vitro diagnostic medical devices (published in Official Journal L 117, May 5, 2017, page 176), and Directive 2006/42/EC from May 17, 2006 about machinery (published in Official Journal L 157, June 9, 2006, page 24). The text explains that products can only display the CE mark when they meet all applicable EU laws. To reduce unnecessary costs and paperwork, companies that make products containing high-risk AI systems should have flexibility in deciding how to comply with all relevant EU requirements. High-risk AI systems should be limited to those that could seriously harm people's health, safety, or fundamental rights in the EU, while minimizing negative effects on international trade. AI systems can pose risks to human health and safety, especially when they function as safety components within products.

<p>EC) No 1223/2009 and repealing Council Directives 90/385/EEC and <br /> 93/42/EEC (OJ L 117, 5.5.2017, p. 1).<br /> (22)<br /> Regulation (EU) 2017/746 of the European Parliament and of the Council of 5 April 2017 on in vitro diagnostic medical devices and <br /> repealing Directive 98/79/EC and Commission Decision 2010/227/EU (OJ L 117, 5.5.2017, p. 176).<br /> (23)<br /> Directive 2006/42/EC of the European Parliament and of the Council of 17 May 2006 on machinery, and amending Directive <br /> 95/16/EC (OJ L 157, 9.6.2006, p. 24).</p> <p>place only when the product complies with all applicable Union harmonisation legislation. To ensure consistency <br /> and avoid unnecessary administrative burdens or costs, providers of a product that contains one or more high-risk <br /> AI systems, to which the requirements of this Regulation and of the Union harmonisation legislation listed in an <br /> annex to this Regulation apply, should have flexibility with regard to operational decisions on how to ensure <br /> compliance of a product that contains one or more AI systems with all applicable requirements of the Union <br /> harmonisation legislation in an optimal manner. AI systems identified as high-risk should be limited to those that <br /> have a significant harmful impact on the health, safety and fundamental rights of persons in the Union and such <br /> limitation should minimise any potential restriction to international trade.<br /> (47)<br /> AI systems could have an adverse impact on the health and safety of persons, in particular when such systems <br /> operate as safety components of products.</p>
Show original text

AI systems can harm people's health and safety, especially when they are built into products as safety features. The EU wants to make sure that only safe products reach the market and can move freely across member states. Therefore, it is crucial to identify and reduce safety risks that come from AI systems and other digital parts of products. For example, robots used in factories or to help care for people need to work safely in complicated situations. In healthcare, where lives are at stake, diagnostic tools and decision-support systems must be trustworthy and accurate. When deciding if an AI system is high-risk, it is important to consider how much damage it could cause to fundamental human rights. These rights include dignity, privacy, personal data protection, freedom of speech, freedom of assembly, protection against discrimination, education, consumer rights, worker protections, disability rights, gender equality, intellectual property, the right to a fair trial, the right to defend oneself, the presumption of innocence, and the right to fair government treatment. Any limitations on AI use should be designed to avoid blocking international trade.

<p>of persons in the Union and such <br /> limitation should minimise any potential restriction to international trade.<br /> (47)<br /> AI systems could have an adverse impact on the health and safety of persons, in particular when such systems <br /> operate as safety components of products. Consistent with the objectives of Union harmonisation legislation to <br /> facilitate the free movement of products in the internal market and to ensure that only safe and otherwise compliant <br /> products find their way into the market, it is important that the safety risks that may be generated by a product as <br /> a whole due to its digital components, including AI systems, are duly prevented and mitigated. For instance, <br /> increasingly autonomous robots, whether in the context of manufacturing or personal assistance and care should be <br /> able to safely operate and performs their functions in complex environments. Similarly, in the health sector where <br /> the stakes for life and health are particularly high, increasingly sophisticated diagnostics systems and systems <br /> supporting human decisions should be reliable and accurate.<br /> (48)<br /> The extent of the adverse impact caused by the AI system on the fundamental rights protected by the Charter is of <br /> particular relevance when classifying an AI system as high risk. Those rights include the right to human dignity, <br /> respect for private and family life, protection of personal data, freedom of expression and information, freedom of <br /> assembly and of association, the right to non-discrimination, the right to education, consumer protection, workers’ <br /> rights, the rights of persons with disabilities, gender equality, intellectual property rights, the right to an effective <br /> remedy and to a fair trial, the right of defence and the presumption of innocence, and the right to good <br /> administration.</p>
Show original text

Several important rights are protected, including the rights of people with disabilities, gender equality, intellectual property, fair trials, legal defense, and the right to good administration. Children have special protections under Article 24 of the Charter and the United Nations Convention on the Rights of the Child, which address their unique vulnerabilities and needs, especially in digital environments. Environmental protection is also a fundamental right that must be considered when evaluating how much harm an AI system could cause to people's health and safety. For high-risk AI systems that are safety components or products covered by various EU regulations (including those on aviation, agricultural vehicles, motorcycles, maritime equipment, railways, vehicles, and aviation safety), additional oversight and requirements apply.

<p>’ <br /> rights, the rights of persons with disabilities, gender equality, intellectual property rights, the right to an effective <br /> remedy and to a fair trial, the right of defence and the presumption of innocence, and the right to good <br /> administration. In addition to those rights, it is important to highlight the fact that children have specific rights as <br /> enshrined in Article 24 of the Charter and in the United Nations Convention on the Rights of the Child, further <br /> developed in the UNCRC General Comment No 25 as regards the digital environment, both of which require <br /> consideration of the children’s vulnerabilities and provision of such protection and care as necessary for their <br /> well-being. The fundamental right to a high level of environmental protection enshrined in the Charter and <br /> implemented in Union policies should also be considered when assessing the severity of the harm that an AI system <br /> can cause, including in relation to the health and safety of persons.<br /> (49)<br /> As regards high-risk AI systems that are safety components of products or systems, or which are themselves <br /> products or systems falling within the scope of Regulation (EC) No 300/2008 of the European Parliament and of the <br /> Council (24), Regulation (EU) No 167/2013 of the European Parliament and of the Council (25), Regulation <br /> (EU) No 168/2013 of the European Parliament and of the Council (26), Directive 2014/90/EU of the European <br /> Parliament and of the Council (27), Directive (EU) 2016/797 of the European Parliament and of the Council (28), <br /> Regulation (EU) 2018/858 of the European Parliament and of the Council (29), Regulation (EU) 2018/1139 of the <br /> OJ L, 12.7.</p>
Show original text

This document references several European Union regulations and directives that establish safety and approval standards across different transportation sectors: Regulation (EC) No 300/2008 sets common security rules for civil aviation; Regulation (EU) No 167/2013 covers approval and monitoring of agricultural and forestry vehicles; Regulation (EU) No 168/2013 addresses approval and monitoring of two- or three-wheel vehicles and quadricycles; Directive 2014/90/EU establishes standards for marine equipment; and Directive (EU) 2016/797 ensures rail systems are compatible across the European Union. These regulations are published in the Official Journal of the European Union (OJ) with specific dates and page references.

<p>of the Council (28), <br /> Regulation (EU) 2018/858 of the European Parliament and of the Council (29), Regulation (EU) 2018/1139 of the <br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 13/144<br /> (24)<br /> Regulation (EC) No 300/2008 of the European Parliament and of the Council of 11 March 2008 on common rules in the field of <br /> civil aviation security and repealing Regulation (EC) No 2320/2002 (OJ L 97, 9.4.2008, p. 72).<br /> (25)<br /> Regulation (EU) No 167/2013 of the European Parliament and of the Council of 5 February 2013 on the approval and market <br /> surveillance of agricultural and forestry vehicles (OJ L 60, 2.3.2013, p. 1).<br /> (26)<br /> Regulation (EU) No 168/2013 of the European Parliament and of the Council of 15 January 2013 on the approval and market <br /> surveillance of two- or three-wheel vehicles and quadricycles (OJ L 60, 2.3.2013, p. 52).<br /> (27)<br /> Directive 2014/90/EU of the European Parliament and of the Council of 23 July 2014 on marine equipment and repealing Council <br /> Directive 96/98/EC (OJ L 257, 28.8.2014, p. 146).<br /> (28)<br /> Directive (EU) 2016/797 of the European Parliament and of the Council of 11 May 2016 on the interoperability of the rail system <br /> within the European Union (OJ L 138, 26.5.</p>
Show original text

This text refers to two EU regulations: (1) Directive 2016/797 from May 11, 2016, which ensures that rail systems work together properly across the European Union, and (2) Regulation 2018/858 from May 30, 2018, which sets rules for approving motor vehicles, trailers, and their parts before they can be sold, and for checking them once they are on the market. The text also states that the European Commission should update these regulations to include safety requirements for high-risk artificial intelligence (AI) systems. These updates should respect each industry's specific technical rules and existing oversight systems. Additionally, AI systems that are safety components of products or are products themselves covered by EU safety laws listed in this regulation should be classified as high-risk if the product must be tested and approved by an independent third-party testing organization.

<p>)<br /> Directive (EU) 2016/797 of the European Parliament and of the Council of 11 May 2016 on the interoperability of the rail system <br /> within the European Union (OJ L 138, 26.5.2016, p. 44).<br /> (29)<br /> Regulation (EU) 2018/858 of the European Parliament and of the Council of 30 May 2018 on the approval and market surveillance <br /> of motor vehicles and their trailers, and of systems, components and separate technical units intended for such vehicles, amending <br /> Regulations (EC) No 715/2007 and (EC) No 595/2009 and repealing Directive 2007/46/EC (OJ L 151, 14.6.2018, p. 1).</p> <p>European Parliament and of the Council (30), and Regulation (EU) 2019/2144 of the European Parliament and of the <br /> Council (31), it is appropriate to amend those acts to ensure that the Commission takes into account, on the basis of <br /> the technical and regulatory specificities of each sector, and without interfering with existing governance, conformity <br /> assessment and enforcement mechanisms and authorities established therein, the mandatory requirements for <br /> high-risk AI systems laid down in this Regulation when adopting any relevant delegated or implementing acts on the <br /> basis of those acts.<br /> (50)<br /> As regards AI systems that are safety components of products, or which are themselves products, falling within the <br /> scope of certain Union harmonisation legislation listed in an annex to this Regulation, it is appropriate to classify <br /> them as high-risk under this Regulation if the product concerned undergoes the conformity assessment procedure <br /> with a third-party conformity assessment body pursuant to that relevant Union harmonisation legislation.</p>
Show original text

AI systems that are safety components of certain regulated products should be classified as high-risk if those products undergo third-party safety testing. These regulated products include machinery, toys, lifts, equipment for explosive atmospheres, radio equipment, pressure equipment, recreational craft, cableway systems, gas appliances, medical devices, diagnostic devices, automotive equipment, and aviation equipment.

However, classifying an AI system as high-risk does not automatically mean the product containing it is high-risk under other EU safety regulations. This is especially true for medical device regulations (EU 2017/745 and EU 2017/746), which have their own risk classifications.

For standalone AI systems that are not part of other products, they should be classified as high-risk if they could cause serious harm to people's health, safety, or fundamental rights. This classification depends on how severe the potential harm is, how likely it is to occur, and whether the AI system is used in specific areas defined by this regulation.

<p>isation legislation listed in an annex to this Regulation, it is appropriate to classify <br /> them as high-risk under this Regulation if the product concerned undergoes the conformity assessment procedure <br /> with a third-party conformity assessment body pursuant to that relevant Union harmonisation legislation. In <br /> particular, such products are machinery, toys, lifts, equipment and protective systems intended for use in potentially <br /> explosive atmospheres, radio equipment, pressure equipment, recreational craft equipment, cableway installations, <br /> appliances burning gaseous fuels, medical devices, in vitro diagnostic medical devices, automotive and aviation.<br /> (51)<br /> The classification of an AI system as high-risk pursuant to this Regulation should not necessarily mean that the <br /> product whose safety component is the AI system, or the AI system itself as a product, is considered to be high-risk <br /> under the criteria established in the relevant Union harmonisation legislation that applies to the product. This is, in <br /> particular, the case for Regulations (EU) 2017/745 and (EU) 2017/746, where a third-party conformity assessment is <br /> provided for medium-risk and high-risk products.<br /> (52)<br /> As regards stand-alone AI systems, namely high-risk AI systems other than those that are safety components of <br /> products, or that are themselves products, it is appropriate to classify them as high-risk if, in light of their intended <br /> purpose, they pose a high risk of harm to the health and safety or the fundamental rights of persons, taking into <br /> account both the severity of the possible harm and its probability of occurrence and they are used in a number of <br /> specifically pre-defined areas specified in this Regulation.</p>
Show original text

High-risk AI systems are those that could harm people's health, safety, or fundamental rights. These systems are identified based on how severe the potential harm is and how likely it is to occur. They are used in specific areas defined by this Regulation. The Commission can update the list of high-risk AI systems through delegated acts to keep up with rapid technological changes and new ways AI systems are being used.

However, some AI systems in these predefined areas may not actually pose significant risks because they don't meaningfully affect decision-making or cause substantial harm. For this Regulation, an AI system that doesn't materially influence decision-making is one that doesn't impact the substance or outcome of decisions, whether made by humans or machines.

AI systems that don't materially influence decision-making include those that perform narrow, limited tasks. Examples include: AI systems that convert unstructured data into structured data, AI systems that sort documents into categories, or AI systems that identify duplicate applications. These narrow tasks carry only limited risks and are not made riskier by being used in high-risk contexts listed in this Regulation's annexes.

<p>risk of harm to the health and safety or the fundamental rights of persons, taking into <br /> account both the severity of the possible harm and its probability of occurrence and they are used in a number of <br /> specifically pre-defined areas specified in this Regulation. The identification of those systems is based on the same <br /> methodology and criteria envisaged also for any future amendments of the list of high-risk AI systems that the <br /> Commission should be empowered to adopt, via delegated acts, to take into account the rapid pace of technological <br /> development, as well as the potential changes in the use of AI systems.<br /> (53)<br /> It is also important to clarify that there may be specific cases in which AI systems referred to in pre-defined areas <br /> specified in this Regulation do not lead to a significant risk of harm to the legal interests protected under those areas <br /> because they do not materially influence the decision-making or do not harm those interests substantially. For the <br /> purposes of this Regulation, an AI system that does not materially influence the outcome of decision-making should <br /> be understood to be an AI system that does not have an impact on the substance, and thereby the outcome, of <br /> decision-making, whether human or automated. An AI system that does not materially influence the outcome of <br /> decision-making could include situations in which one or more of the following conditions are fulfilled. The first <br /> such condition should be that the AI system is intended to perform a narrow procedural task, such as an AI system <br /> that transforms unstructured data into structured data, an AI system that classifies incoming documents into <br /> categories or an AI system that is used to detect duplicates among a large number of applications. Those tasks are of <br /> such narrow and limited nature that they pose only limited risks which are not increased through the use of an AI <br /> system in a context that is listed as a high-risk use in an annex to this Regulation. The second condition should be <br /> EN<br /> OJ L, 12.7.</p>
Show original text

AI systems used in high-risk applications listed in this Regulation's annexes must not increase existing risks beyond limited levels. This requirement is supported by two key EU regulations: (1) Regulation (EU) 2018/1139, which establishes safety rules for civil aviation and created the European Union Aviation Safety Agency, and (2) Regulation (EU) 2019/2144, which sets safety and approval standards for motor vehicles, their components, and systems to protect vehicle occupants and pedestrians.

<p>only limited risks which are not increased through the use of an AI <br /> system in a context that is listed as a high-risk use in an annex to this Regulation. The second condition should be <br /> EN<br /> OJ L, 12.7.2024<br /> 14/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> (30)<br /> Regulation (EU) 2018/1139 of the European Parliament and of the Council of 4 July 2018 on common rules in the field of civil <br /> aviation and establishing a European Union Aviation Safety Agency, and amending Regulations (EC) No 2111/2005, (EC) <br /> No 1008/2008, (EU) No 996/2010, (EU) No 376/2014 and Directives 2014/30/EU and 2014/53/EU of the European Parliament <br /> and of the Council, and repealing Regulations (EC) No 552/2004 and (EC) No 216/2008 of the European Parliament and of the <br /> Council and Council Regulation (EEC) No 3922/91 (OJ L 212, 22.8.2018, p. 1).<br /> (31)<br /> Regulation (EU) 2019/2144 of the European Parliament and of the Council of 27 November 2019 on type-approval requirements <br /> for motor vehicles and their trailers, and systems, components and separate technical units intended for such vehicles, as regards <br /> their general safety and the protection of vehicle occupants and vulnerable road users, amending Regulation (EU) 2018/858 of the <br /> European Parliament and of the Council and repealing Regulations (EC) No 78/2009, (EC) No 79/2009 and (EC) No 661/2009 of <br /> the European Parliament and of the Council</p>
Show original text

This text discusses two main points: First, it references the repeal of multiple European Union regulations from 2009-2015, including regulations numbered 78/2009, 79/2009, 661/2009, and many others (631/2009 through 2015/166), as documented in the Official Journal on December 16, 2019. Second, it explains that certain AI systems are designed to enhance human work that may relate to high-risk activities listed in this Regulation's annex. Because these AI systems only add an extra layer to human decision-making, they present lower risk. Examples include AI that improves the language in already-written documents by adjusting professional tone, academic style, or brand messaging. Additionally, the AI system should be able to identify patterns in decision-making or spot when decisions deviate from established patterns.

<p>the <br /> European Parliament and of the Council and repealing Regulations (EC) No 78/2009, (EC) No 79/2009 and (EC) No 661/2009 of <br /> the European Parliament and of the Council and Commission Regulations (EC) No 631/2009, (EU) No 406/2010, (EU) <br /> No 672/2010, (EU) No 1003/2010, (EU) No 1005/2010, (EU) No 1008/2010, (EU) No 1009/2010, (EU) No 19/2011, (EU) <br /> No 109/2011, (EU) No 458/2011, (EU) No 65/2012, (EU) No 130/2012, (EU) No 347/2012, (EU) No 351/2012, (EU) <br /> No 1230/2012 and (EU) 2015/166 (OJ L 325, 16.12.2019, p. 1).</p> <p>that the task performed by the AI system is intended to improve the result of a previously completed human activity <br /> that may be relevant for the purposes of the high-risk uses listed in an annex to this Regulation. Considering those <br /> characteristics, the AI system provides only an additional layer to a human activity with consequently lowered risk. <br /> That condition would, for example, apply to AI systems that are intended to improve the language used in previously <br /> drafted documents, for example in relation to professional tone, academic style of language or by aligning text to <br /> a certain brand messaging. The third condition should be that the AI system is intended to detect decision-making <br /> patterns or deviations from prior decision-making patterns.</p>
Show original text

AI systems can be considered lower risk in certain situations. First, when they help maintain professional tone, academic language, or brand messaging consistency. Second, when they detect unusual patterns or deviations in decision-making—for example, flagging if a teacher's grades suddenly differ from their usual grading pattern. These systems support human decisions but don't replace human judgment. Third, when AI performs preparatory tasks before a formal assessment, such as organizing files, searching documents, processing text or speech, or translating initial documents. These preparatory tasks have minimal impact on the final assessment. However, AI systems used for high-risk purposes listed in regulations are still considered risky if they involve profiling—meaning they create detailed profiles of individuals based on personal data, as defined in EU data protection laws (Regulations 2016/679, 2016/680, and 2018/1725). Such profiling can pose significant risks to health, safety, and fundamental rights.

<p>, for example in relation to professional tone, academic style of language or by aligning text to <br /> a certain brand messaging. The third condition should be that the AI system is intended to detect decision-making <br /> patterns or deviations from prior decision-making patterns. The risk would be lowered because the use of the AI <br /> system follows a previously completed human assessment which it is not meant to replace or influence, without <br /> proper human review. Such AI systems include for instance those that, given a certain grading pattern of a teacher, <br /> can be used to check ex post whether the teacher may have deviated from the grading pattern so as to flag potential <br /> inconsistencies or anomalies. The fourth condition should be that the AI system is intended to perform a task that is <br /> only preparatory to an assessment relevant for the purposes of the AI systems listed in an annex to this Regulation, <br /> thus making the possible impact of the output of the system very low in terms of representing a risk for the <br /> assessment to follow. That condition covers, inter alia, smart solutions for file handling, which include various <br /> functions from indexing, searching, text and speech processing or linking data to other data sources, or AI systems <br /> used for translation of initial documents. In any case, AI systems used in high-risk use-cases listed in an annex to this <br /> Regulation should be considered to pose significant risks of harm to the health, safety or fundamental rights if the AI <br /> system implies profiling within the meaning of Article 4, point (4) of Regulation (EU) 2016/679 or Article 3, <br /> point (4) of Directive (EU) 2016/680 or Article 3, point (5) of Regulation (EU) 2018/1725.</p>
Show original text

AI providers must document their assessment if they believe their system is not high-risk before selling or using it. They must provide this documentation to national authorities when requested and register the system in the EU database. The European Commission should create guidelines with practical examples to help clarify which AI systems are high-risk and which are not.

Biometric data (like fingerprints or facial recognition) is sensitive personal information, so AI systems using biometric data are classified as high-risk in certain situations. Remote biometric identification systems can produce inaccurate or biased results that may discriminate against people based on age, ethnicity, race, sex, or disabilities. For this reason, remote biometric identification systems must be classified as high-risk.

<p>Regulation (EU) 2016/679 or Article 3, <br /> point (4) of Directive (EU) 2016/680 or Article 3, point (5) of Regulation (EU) 2018/1725. To ensure traceability <br /> and transparency, a provider who considers that an AI system is not high-risk on the basis of the conditions referred <br /> to above should draw up documentation of the assessment before that system is placed on the market or put into <br /> service and should provide that documentation to national competent authorities upon request. Such a provider <br /> should be obliged to register the AI system in the EU database established under this Regulation. With a view to <br /> providing further guidance for the practical implementation of the conditions under which the AI systems listed in <br /> an annex to this Regulation are, on an exceptional basis, non-high-risk, the Commission should, after consulting the <br /> Board, provide guidelines specifying that practical implementation, completed by a comprehensive list of practical <br /> examples of use cases of AI systems that are high-risk and use cases that are not.<br /> (54)<br /> As biometric data constitutes a special category of personal data, it is appropriate to classify as high-risk several <br /> critical-use cases of biometric systems, insofar as their use is permitted under relevant Union and national law. <br /> Technical inaccuracies of AI systems intended for the remote biometric identification of natural persons can lead to <br /> biased results and entail discriminatory effects. The risk of such biased results and discriminatory effects is <br /> particularly relevant with regard to age, ethnicity, race, sex or disabilities. Remote biometric identification systems <br /> should therefore be classified as high-risk in view of the risks that they pose.</p>
Show original text

Remote biometric identification systems that recognize people from a distance should be classified as high-risk AI because they can produce biased results and discriminatory effects, especially regarding age, ethnicity, race, sex, and disabilities. However, this classification does not apply to biometric verification systems used only to confirm someone's identity for purposes like unlocking a device, accessing a service, or entering a secure location. AI systems that categorize people based on sensitive characteristics protected by EU data protection law, as well as emotion recognition systems, should also be classified as high-risk. Biometric systems used only for cybersecurity and data protection purposes are not considered high-risk. Additionally, AI systems used to manage critical infrastructure—such as digital systems, roads, water, gas, heating, and electricity supplies—should be classified as high-risk because if they fail or malfunction, they could endanger many people's lives and cause serious disruptions to society and the economy.

<p>effects. The risk of such biased results and discriminatory effects is <br /> particularly relevant with regard to age, ethnicity, race, sex or disabilities. Remote biometric identification systems <br /> should therefore be classified as high-risk in view of the risks that they pose. Such a classification excludes AI <br /> systems intended to be used for biometric verification, including authentication, the sole purpose of which is to <br /> confirm that a specific natural person is who that person claims to be and to confirm the identity of a natural person <br /> for the sole purpose of having access to a service, unlocking a device or having secure access to premises. In addition, <br /> AI systems intended to be used for biometric categorisation according to sensitive attributes or characteristics <br /> protected under Article 9(1) of Regulation (EU) 2016/679 on the basis of biometric data, in so far as these are not <br /> prohibited under this Regulation, and emotion recognition systems that are not prohibited under this Regulation, <br /> should be classified as high-risk. Biometric systems which are intended to be used solely for the purpose of enabling <br /> cybersecurity and personal data protection measures should not be considered to be high-risk AI systems.<br /> (55)<br /> As regards the management and operation of critical infrastructure, it is appropriate to classify as high-risk the AI <br /> systems intended to be used as safety components in the management and operation of critical digital infrastructure <br /> as listed in point (8) of the Annex to Directive (EU) 2022/2557, road traffic and the supply of water, gas, heating and <br /> electricity, since their failure or malfunctioning may put at risk the life and health of persons at large scale and lead to <br /> appreciable disruptions in the ordinary conduct of social and economic activities.</p>
Show original text

Critical infrastructure like water, gas, heating, and electricity systems are essential because their failure could endanger lives and disrupt society and the economy. Safety components are special systems that protect critical infrastructure or keep people safe, but they are not needed for the main system to work. If these safety components fail, they can directly threaten the physical safety of critical infrastructure and harm people or property. Cybersecurity-only systems are not considered safety components. Examples include water pressure monitoring systems or fire alarms in data centers. AI systems in education are valuable for improving digital learning and helping students and teachers develop digital skills, media literacy, and critical thinking needed for work, society, and democracy. However, AI systems used in education or training that make decisions about admission, placement, grading, determining educational level, or monitoring student behavior during tests should be classified as high-risk because they can significantly affect a person's educational and career future.

<p>the supply of water, gas, heating and <br /> electricity, since their failure or malfunctioning may put at risk the life and health of persons at large scale and lead to <br /> appreciable disruptions in the ordinary conduct of social and economic activities. Safety components of critical <br /> infrastructure, including critical digital infrastructure, are systems used to directly protect the physical integrity of <br /> critical infrastructure or the health and safety of persons and property but which are not necessary in order for the <br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 15/144</p> <p>system to function. The failure or malfunctioning of such components might directly lead to risks to the physical <br /> integrity of critical infrastructure and thus to risks to health and safety of persons and property. Components <br /> intended to be used solely for cybersecurity purposes should not qualify as safety components. Examples of safety <br /> components of such critical infrastructure may include systems for monitoring water pressure or fire alarm <br /> controlling systems in cloud computing centres.<br /> (56)<br /> The deployment of AI systems in education is important to promote high-quality digital education and training and <br /> to allow all learners and teachers to acquire and share the necessary digital skills and competences, including media <br /> literacy, and critical thinking, to take an active part in the economy, society, and in democratic processes. However, <br /> AI systems used in education or vocational training, in particular for determining access or admission, for assigning <br /> persons to educational and vocational training institutions or programmes at all levels, for evaluating learning <br /> outcomes of persons, for assessing the appropriate level of education for an individual and materially influencing the <br /> level of education and training that individuals will receive or will be able to access or for monitoring and detecting <br /> prohibited behaviour of students during tests should be classified as high-risk AI systems, since they may determine <br /> the educational and professional course of a person’s life and therefore may</p>
Show original text

AI systems used to monitor students during tests should be classified as high-risk because they can significantly impact a person's education and career prospects. If poorly designed, these systems may violate educational rights and anti-discrimination protections, and could reinforce existing discrimination against women, certain age groups, people with disabilities, and people of specific racial, ethnic, or sexual orientations.

AI systems used in employment decisions should also be classified as high-risk. This includes systems used for hiring, selecting candidates, determining work conditions, promotions, terminations, task assignments based on behavior or personal traits, and monitoring employee performance. These systems can greatly affect people's careers, livelihoods, and worker rights. Employees and platform workers should be meaningfully involved in these processes. During recruitment and evaluation, such systems may repeat historical discrimination patterns against women, certain age groups, people with disabilities, and people of specific racial, ethnic, or sexual orientations. Additionally, monitoring systems may violate fundamental rights to data protection and privacy.

<p>that individuals will receive or will be able to access or for monitoring and detecting <br /> prohibited behaviour of students during tests should be classified as high-risk AI systems, since they may determine <br /> the educational and professional course of a person’s life and therefore may affect that person’s ability to secure <br /> a livelihood. When improperly designed and used, such systems may be particularly intrusive and may violate the <br /> right to education and training as well as the right not to be discriminated against and perpetuate historical patterns <br /> of discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain <br /> racial or ethnic origins or sexual orientation.<br /> (57)<br /> AI systems used in employment, workers management and access to self-employment, in particular for the <br /> recruitment and selection of persons, for making decisions affecting terms of the work-related relationship, <br /> promotion and termination of work-related contractual relationships, for allocating tasks on the basis of individual <br /> behaviour, personal traits or characteristics and for monitoring or evaluation of persons in work-related contractual <br /> relationships, should also be classified as high-risk, since those systems may have an appreciable impact on future <br /> career prospects, livelihoods of those persons and workers’ rights. Relevant work-related contractual relationships <br /> should, in a meaningful manner, involve employees and persons providing services through platforms as referred to <br /> in the Commission Work Programme 2021. Throughout the recruitment process and in the evaluation, promotion, <br /> or retention of persons in work-related contractual relationships, such systems may perpetuate historical patterns of <br /> discrimination, for example against women, certain age groups, persons with disabilities, or persons of certain racial <br /> or ethnic origins or sexual orientation. AI systems used to monitor the performance and behaviour of such persons <br /> may also undermine their fundamental rights to data protection and privacy.</p>
Show original text

AI systems can unfairly discriminate against women, certain age groups, people with disabilities, and people of specific racial, ethnic, or sexual orientations. When AI monitors these groups, it can also violate their rights to data protection and privacy.

AI systems that help decide who receives essential public services deserve careful attention. These services include healthcare, social security, unemployment benefits, and housing assistance. People applying for these services are often vulnerable and depend on them to live. When AI systems are used to approve, deny, reduce, or cancel these benefits, they can seriously affect people's lives and violate fundamental rights like social protection, fair treatment, dignity, and access to justice. Therefore, such AI systems should be classified as high-risk.

However, this regulation should not prevent government agencies from using safe, compliant AI systems that do not pose high risks.

AI systems that evaluate credit scores and creditworthiness should also be classified as high-risk. These systems control whether people can access money and essential services like housing, electricity, and phone services.

<p>example against women, certain age groups, persons with disabilities, or persons of certain racial <br /> or ethnic origins or sexual orientation. AI systems used to monitor the performance and behaviour of such persons <br /> may also undermine their fundamental rights to data protection and privacy.<br /> (58)<br /> Another area in which the use of AI systems deserves special consideration is the access to and enjoyment of certain <br /> essential private and public services and benefits necessary for people to fully participate in society or to improve <br /> one’s standard of living. In particular, natural persons applying for or receiving essential public assistance benefits <br /> and services from public authorities namely healthcare services, social security benefits, social services providing <br /> protection in cases such as maternity, illness, industrial accidents, dependency or old age and loss of employment <br /> and social and housing assistance, are typically dependent on those benefits and services and in a vulnerable position <br /> in relation to the responsible authorities. If AI systems are used for determining whether such benefits and services <br /> should be granted, denied, reduced, revoked or reclaimed by authorities, including whether beneficiaries are <br /> legitimately entitled to such benefits or services, those systems may have a significant impact on persons’ livelihood <br /> and may infringe their fundamental rights, such as the right to social protection, non-discrimination, human dignity <br /> or an effective remedy and should therefore be classified as high-risk. Nonetheless, this Regulation should not <br /> hamper the development and use of innovative approaches in the public administration, which would stand to <br /> benefit from a wider use of compliant and safe AI systems, provided that those systems do not entail a high risk to <br /> legal and natural persons. In addition, AI systems used to evaluate the credit score or creditworthiness of natural <br /> persons should be classified as high-risk AI systems, since they determine those persons’ access to financial resources <br /> or essential services such as housing, electricity, and telecommunication services.</p>
Show original text

AI systems that assess credit scores or creditworthiness are considered high-risk because they control access to money and essential services like housing, electricity, and telecommunications. These systems can discriminate against people or groups based on race, ethnicity, gender, disability, age, or sexual orientation, and may repeat historical discrimination patterns or create new ones. However, AI systems used by the EU to detect fraud in financial services or calculate capital requirements for banks and insurance companies are not classified as high-risk. AI systems used to assess risk and set prices for health and life insurance are also high-risk because they significantly affect people's financial security and wellbeing. If poorly designed or used, they can violate fundamental rights and cause serious harm, including financial exclusion and discrimination. Additionally, AI systems that evaluate emergency calls, dispatch emergency services (police, firefighters, medical aid), or prioritize emergency healthcare patients are high-risk because they make critical decisions affecting people's lives, health, and property.

<p>AI systems used to evaluate the credit score or creditworthiness of natural <br /> persons should be classified as high-risk AI systems, since they determine those persons’ access to financial resources <br /> or essential services such as housing, electricity, and telecommunication services. AI systems used for those purposes <br /> may lead to discrimination between persons or groups and may perpetuate historical patterns of discrimination, <br /> such as that based on racial or ethnic origins, gender, disabilities, age or sexual orientation, or may create new forms <br /> of discriminatory impacts. However, AI systems provided for by Union law for the purpose of detecting fraud in the <br /> offering of financial services and for prudential purposes to calculate credit institutions’ and insurance undertakings’ <br /> capital requirements should not be considered to be high-risk under this Regulation. Moreover, AI systems intended <br /> EN<br /> OJ L, 12.7.2024<br /> 16/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>to be used for risk assessment and pricing in relation to natural persons for health and life insurance can also have <br /> a significant impact on persons’ livelihood and if not duly designed, developed and used, can infringe their <br /> fundamental rights and can lead to serious consequences for people’s life and health, including financial exclusion <br /> and discrimination. Finally, AI systems used to evaluate and classify emergency calls by natural persons or to <br /> dispatch or establish priority in the dispatching of emergency first response services, including by police, firefighters <br /> and medical aid, as well as of emergency healthcare patient triage systems, should also be classified as high-risk since <br /> they make decisions in very critical situations for the life and health of persons and their property.</p>
Show original text

Emergency services like police, firefighters, and medical teams, along with emergency healthcare triage systems, should be classified as high-risk AI applications because they make critical decisions affecting people's lives, health, and property. Law enforcement use of AI systems creates a significant power imbalance and can lead to surveillance, arrest, loss of freedom, or violations of fundamental rights. If these AI systems are trained on poor-quality data, lack adequate performance standards, or are not properly designed and tested before use, they may unfairly target or incorrectly identify people. Additionally, important legal protections—such as the right to a fair trial, the right to defend oneself, and the presumption of innocence—can be undermined if AI systems lack transparency and clear explanations of how they work. Therefore, AI systems used in law enforcement should be classified as high-risk and must prioritize accuracy, reliability, and transparency to prevent harm, maintain public trust, and ensure accountability and fair remedies for those affected.

<p>services, including by police, firefighters <br /> and medical aid, as well as of emergency healthcare patient triage systems, should also be classified as high-risk since <br /> they make decisions in very critical situations for the life and health of persons and their property.<br /> (59)<br /> Given their role and responsibility, actions by law enforcement authorities involving certain uses of AI systems are <br /> characterised by a significant degree of power imbalance and may lead to surveillance, arrest or deprivation of <br /> a natural person’s liberty as well as other adverse impacts on fundamental rights guaranteed in the Charter. In <br /> particular, if the AI system is not trained with high-quality data, does not meet adequate requirements in terms of its <br /> performance, its accuracy or robustness, or is not properly designed and tested before being put on the market or <br /> otherwise put into service, it may single out people in a discriminatory or otherwise incorrect or unjust manner. <br /> Furthermore, the exercise of important procedural fundamental rights, such as the right to an effective remedy and <br /> to a fair trial as well as the right of defence and the presumption of innocence, could be hampered, in particular, <br /> where such AI systems are not sufficiently transparent, explainable and documented. It is therefore appropriate to <br /> classify as high-risk, insofar as their use is permitted under relevant Union and national law, a number of AI systems <br /> intended to be used in the law enforcement context where accuracy, reliability and transparency is particularly <br /> important to avoid adverse impacts, retain public trust and ensure accountability and effective redress.</p>
Show original text

AI systems used in law enforcement must be accurate, reliable, and transparent to prevent harm, maintain public trust, and ensure accountability. High-risk AI systems include those used by law enforcement or EU institutions to assess whether someone might become a crime victim, evaluate evidence reliability, assess reoffending risk, or create criminal profiles during investigations or prosecutions. However, AI systems used by tax authorities, customs agencies, and financial intelligence units for administrative tasks are not classified as high-risk law enforcement AI. Law enforcement must use AI tools fairly to avoid inequality or exclusion. Additionally, the impact on suspects' rights must be considered, particularly the challenge of understanding how these AI systems work and the difficulty in contesting their results in court.

<p>permitted under relevant Union and national law, a number of AI systems <br /> intended to be used in the law enforcement context where accuracy, reliability and transparency is particularly <br /> important to avoid adverse impacts, retain public trust and ensure accountability and effective redress. In view of the <br /> nature of the activities and the risks relating thereto, those high-risk AI systems should include in particular AI <br /> systems intended to be used by or on behalf of law enforcement authorities or by Union institutions, bodies, offices, <br /> or agencies in support of law enforcement authorities for assessing the risk of a natural person to become a victim of <br /> criminal offences, as polygraphs and similar tools, for the evaluation of the reliability of evidence in in the course of <br /> investigation or prosecution of criminal offences, and, insofar as not prohibited under this Regulation, for assessing <br /> the risk of a natural person offending or reoffending not solely on the basis of the profiling of natural persons or the <br /> assessment of personality traits and characteristics or the past criminal behaviour of natural persons or groups, for <br /> profiling in the course of detection, investigation or prosecution of criminal offences. AI systems specifically <br /> intended to be used for administrative proceedings by tax and customs authorities as well as by financial intelligence <br /> units carrying out administrative tasks analysing information pursuant to Union anti-money laundering law should <br /> not be classified as high-risk AI systems used by law enforcement authorities for the purpose of prevention, <br /> detection, investigation and prosecution of criminal offences. The use of AI tools by law enforcement and other <br /> relevant authorities should not become a factor of inequality, or exclusion. The impact of the use of AI tools on the <br /> defence rights of suspects should not be ignored, in particular the difficulty in obtaining meaningful information on <br /> the functioning of those systems and the resulting difficulty in challenging their results in court, in particular by <br /> natural persons under investigation.</p>
Show original text

AI tools used in law enforcement and border control need careful oversight. There are two main concerns: First, suspects have difficulty understanding how these AI systems work and challenging their results in court. Second, AI systems used in migration, asylum, and border control affect vulnerable people who depend on government decisions. These systems must be accurate, fair, and transparent to protect people's fundamental rights, including freedom of movement, protection from discrimination, privacy, and access to fair treatment. Therefore, the following AI systems should be classified as high-risk and strictly regulated: AI tools that assess risks posed by people entering a country or applying for visas or asylum; AI systems that help authorities review asylum and visa applications; and AI systems that identify or recognize people in migration and border control situations (except for checking travel documents). These AI systems must follow the rules set out in EU Regulation 810/2009.

<p>AI tools on the <br /> defence rights of suspects should not be ignored, in particular the difficulty in obtaining meaningful information on <br /> the functioning of those systems and the resulting difficulty in challenging their results in court, in particular by <br /> natural persons under investigation.<br /> (60)<br /> AI systems used in migration, asylum and border control management affect persons who are often in particularly <br /> vulnerable position and who are dependent on the outcome of the actions of the competent public authorities. The <br /> accuracy, non-discriminatory nature and transparency of the AI systems used in those contexts are therefore <br /> particularly important to guarantee respect for the fundamental rights of the affected persons, in particular their <br /> rights to free movement, non-discrimination, protection of private life and personal data, international protection <br /> and good administration. It is therefore appropriate to classify as high-risk, insofar as their use is permitted under <br /> relevant Union and national law, AI systems intended to be used by or on behalf of competent public authorities or <br /> by Union institutions, bodies, offices or agencies charged with tasks in the fields of migration, asylum and border <br /> control management as polygraphs and similar tools, for assessing certain risks posed by natural persons entering <br /> the territory of a Member State or applying for visa or asylum, for assisting competent public authorities for the <br /> examination, including related assessment of the reliability of evidence, of applications for asylum, visa and residence <br /> permits and associated complaints with regard to the objective to establish the eligibility of the natural persons <br /> applying for a status, for the purpose of detecting, recognising or identifying natural persons in the context of <br /> migration, asylum and border control management, with the exception of verification of travel documents. AI <br /> systems in the area of migration, asylum and border control management covered by this Regulation should comply <br /> with the relevant procedural requirements set by the Regulation (EC) No 810/2009 of the European Parliament and <br /> OJ L, 12.7.</p>
Show original text

AI systems used in migration, asylum, and border control must follow the rules set by EU Regulation 810/2009, Directive 2013/32/EU, and other relevant EU laws. Member States and EU institutions cannot use AI systems in these areas to avoid their obligations under the 1951 UN Refugee Convention (as updated in 1967). AI systems must not violate the principle of non-refoulement, which prevents returning people to places where they face danger, and must not block safe legal entry to the EU or the right to seek international protection. Certain AI systems used in courts and democratic processes are classified as high-risk because they can significantly affect democracy, the rule of law, individual freedoms, and the right to a fair trial. AI systems that help judges research facts and law, or apply law to specific cases, are considered high-risk due to risks of bias, errors, and lack of transparency. AI systems used by alternative dispute resolution bodies (non-court bodies that settle disputes) for these purposes are also high-risk when their decisions have legal consequences for the parties involved.

<p>the area of migration, asylum and border control management covered by this Regulation should comply <br /> with the relevant procedural requirements set by the Regulation (EC) No 810/2009 of the European Parliament and <br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 17/144</p> <p>of the Council (32), the Directive 2013/32/EU of the European Parliament and of the Council (33), and other relevant <br /> Union law. The use of AI systems in migration, asylum and border control management should, in no circumstances, <br /> be used by Member States or Union institutions, bodies, offices or agencies as a means to circumvent their <br /> international obligations under the UN Convention relating to the Status of Refugees done at Geneva on 28 July <br /> 1951 as amended by the Protocol of 31 January 1967. Nor should they be used to in any way infringe on the <br /> principle of non-refoulement, or to deny safe and effective legal avenues into the territory of the Union, including <br /> the right to international protection.<br /> (61)<br /> Certain AI systems intended for the administration of justice and democratic processes should be classified as <br /> high-risk, considering their potentially significant impact on democracy, the rule of law, individual freedoms as well <br /> as the right to an effective remedy and to a fair trial. In particular, to address the risks of potential biases, errors and <br /> opacity, it is appropriate to qualify as high-risk AI systems intended to be used by a judicial authority or on its behalf <br /> to assist judicial authorities in researching and interpreting facts and the law and in applying the law to a concrete set <br /> of facts. AI systems intended to be used by alternative dispute resolution bodies for those purposes should also be <br /> considered to be high-risk when the outcomes of the alternative dispute resolution proceedings produce legal effects <br /> for the parties.</p>
Show original text

AI systems used by dispute resolution bodies that produce legal outcomes for parties should be classified as high-risk. AI can assist judges in decision-making, but humans must make the final decision. However, AI systems used only for administrative support—such as anonymizing court documents or handling internal communications—are not considered high-risk.

AI systems designed to influence election or referendum outcomes, or to affect how people vote, should be classified as high-risk. This does not apply to AI tools that help organize political campaigns behind the scenes, where voters are not directly exposed to the AI's output.

Classifying an AI system as high-risk under this regulation does not mean it is legal under other EU or national laws. Other regulations still apply, including those on data protection, polygraph use, and systems that detect people's emotions.

<p>law to a concrete set <br /> of facts. AI systems intended to be used by alternative dispute resolution bodies for those purposes should also be <br /> considered to be high-risk when the outcomes of the alternative dispute resolution proceedings produce legal effects <br /> for the parties. The use of AI tools can support the decision-making power of judges or judicial independence, but <br /> should not replace it: the final decision-making must remain a human-driven activity. The classification of AI <br /> systems as high-risk should not, however, extend to AI systems intended for purely ancillary administrative activities <br /> that do not affect the actual administration of justice in individual cases, such as anonymisation or <br /> pseudonymisation of judicial decisions, documents or data, communication between personnel, administrative tasks.<br /> (62)<br /> Without prejudice to the rules provided for in Regulation (EU) 2024/900 of the European Parliament and of the <br /> Council (34), and in order to address the risks of undue external interference with the right to vote enshrined in <br /> Article 39 of the Charter, and of adverse effects on democracy and the rule of law, AI systems intended to be used to <br /> influence the outcome of an election or referendum or the voting behaviour of natural persons in the exercise of <br /> their vote in elections or referenda should be classified as high-risk AI systems with the exception of AI systems <br /> whose output natural persons are not directly exposed to, such as tools used to organise, optimise and structure <br /> political campaigns from an administrative and logistical point of view.<br /> (63)<br /> The fact that an AI system is classified as a high-risk AI system under this Regulation should not be interpreted as <br /> indicating that the use of the system is lawful under other acts of Union law or under national law compatible with <br /> Union law, such as on the protection of personal data, on the use of polygraphs and similar tools or other systems to <br /> detect the emotional state of natural persons.</p>
Show original text

This system is legal under other EU laws or national laws that follow EU standards, such as data protection laws and rules about polygraphs or tools that detect emotions. Any use must follow the Charter and all relevant EU and national laws. This Regulation does not create a legal basis for processing personal data, including sensitive personal data, unless it specifically says so.

To reduce risks from high-risk AI systems sold or used in the market and to build trust, providers must follow certain mandatory requirements. These requirements depend on how the AI system will be used and what risks it poses. Providers must follow a risk management system and use current best practices in AI. Their solutions must be proportionate and effective.

Under EU product rules (as explained in the Commission's 2022 Blue Guide), multiple EU laws may apply to one product. A product can only be sold or used when it meets all applicable EU laws. This Regulation covers different safety concerns than existing EU product laws, so it adds to and complements those existing laws.

<p>system is lawful under other acts of Union law or under national law compatible with <br /> Union law, such as on the protection of personal data, on the use of polygraphs and similar tools or other systems to <br /> detect the emotional state of natural persons. Any such use should continue to occur solely in accordance with the <br /> applicable requirements resulting from the Charter and from the applicable acts of secondary Union law and <br /> national law. This Regulation should not be understood as providing for the legal ground for processing of personal <br /> data, including special categories of personal data, where relevant, unless it is specifically otherwise provided for in <br /> this Regulation.<br /> (64)<br /> To mitigate the risks from high-risk AI systems placed on the market or put into service and to ensure a high level of <br /> trustworthiness, certain mandatory requirements should apply to high-risk AI systems, taking into account the <br /> intended purpose and the context of use of the AI system and according to the risk-management system to be <br /> established by the provider. The measures adopted by the providers to comply with the mandatory requirements of <br /> this Regulation should take into account the generally acknowledged state of the art on AI, be proportionate and <br /> effective to meet the objectives of this Regulation. Based on the New Legislative Framework, as clarified in <br /> Commission notice ‘The “Blue Guide” on the implementation of EU product rules 2022’, the general rule is that <br /> more than one legal act of Union harmonisation legislation may be applicable to one product, since the making <br /> available or putting into service can take place only when the product complies with all applicable Union <br /> harmonisation legislation. The hazards of AI systems covered by the requirements of this Regulation concern <br /> different aspects than the existing Union harmonisation legislation and therefore the requirements of this Regulation <br /> would complement the existing body of the Union harmonisation legislation.</p>
Show original text

This regulation addresses AI system hazards that are not covered by existing EU laws. Therefore, it works alongside current EU regulations rather than replacing them. For example, machinery or medical devices that use AI systems may create risks that existing health and safety rules do not address. Since sector-specific laws do not account for AI-specific risks, this regulation must be applied together with other relevant EU laws.

<p>Union <br /> harmonisation legislation. The hazards of AI systems covered by the requirements of this Regulation concern <br /> different aspects than the existing Union harmonisation legislation and therefore the requirements of this Regulation <br /> would complement the existing body of the Union harmonisation legislation. For example, machinery or medical <br /> devices products incorporating an AI system might present risks not addressed by the essential health and safety <br /> EN<br /> OJ L, 12.7.2024<br /> 18/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> (32)<br /> Regulation (EC) No 810/2009 of the European Parliament and of the Council of 13 July 2009 establishing a Community Code on <br /> Visas (Visa Code) (OJ L 243, 15.9.2009, p. 1).<br /> (33)<br /> Directive 2013/32/EU of the European Parliament and of the Council of 26 June 2013 on common procedures for granting and <br /> withdrawing international protection (OJ L 180, 29.6.2013, p. 60).<br /> (34)<br /> Regulation (EU) 2024/900 of the European parliament and of the Council of 13 March 2024 on the transparency and targeting of <br /> political advertising (OJ L, 2024/900, 20.3.2024, ELI: http://data.europa.eu/eli/reg/2024/900/oj).</p> <p>requirements set out in the relevant Union harmonised legislation, as that sectoral law does not deal with risks <br /> specific to AI systems. This calls for a simultaneous and complementary application of the various legislative acts.</p>
Show original text

Providers of products containing high-risk AI systems must follow both this AI Regulation and existing EU product safety laws, as those laws do not address AI-specific risks. To reduce unnecessary costs and administrative burden, providers have flexibility in how they combine compliance processes. For example, they can integrate AI testing and documentation requirements into existing safety documentation procedures. However, this flexibility does not reduce their obligation to meet all applicable requirements. High-risk AI systems require a continuous risk-management process throughout their entire lifecycle. This process must identify and reduce risks to health, safety, and fundamental rights. Providers must regularly review and update their risk-management systems, document all significant decisions, and maintain records of actions taken to comply with this Regulation.

<p>/reg/2024/900/oj).</p> <p>requirements set out in the relevant Union harmonised legislation, as that sectoral law does not deal with risks <br /> specific to AI systems. This calls for a simultaneous and complementary application of the various legislative acts. To <br /> ensure consistency and to avoid an unnecessary administrative burden and unnecessary costs, providers of a product <br /> that contains one or more high-risk AI system, to which the requirements of this Regulation and of the Union <br /> harmonisation legislation based on the New Legislative Framework and listed in an annex to this Regulation apply, <br /> should have flexibility with regard to operational decisions on how to ensure compliance of a product that contains <br /> one or more AI systems with all the applicable requirements of that Union harmonised legislation in an optimal <br /> manner. That flexibility could mean, for example a decision by the provider to integrate a part of the necessary <br /> testing and reporting processes, information and documentation required under this Regulation into already existing <br /> documentation and procedures required under existing Union harmonisation legislation based on the New <br /> Legislative Framework and listed in an annex to this Regulation. This should not, in any way, undermine the <br /> obligation of the provider to comply with all the applicable requirements.<br /> (65)<br /> The risk-management system should consist of a continuous, iterative process that is planned and run throughout <br /> the entire lifecycle of a high-risk AI system. That process should be aimed at identifying and mitigating the relevant <br /> risks of AI systems on health, safety and fundamental rights. The risk-management system should be regularly <br /> reviewed and updated to ensure its continuing effectiveness, as well as justification and documentation of any <br /> significant decisions and actions taken subject to this Regulation.</p>
Show original text

AI providers must have a risk-management system that regularly checks for dangers to health, safety, and basic rights. This system should identify potential problems with the AI and put safeguards in place. Providers must document their decisions and explain why they chose specific safety measures, consulting experts when needed. When planning for misuse, providers should consider how people might use the AI in ways not intended but that are reasonably predictable based on how the system works. Any known or possible risks—whether from proper use or foreseeable misuse—must be clearly explained in the user instructions so that people using the AI understand the dangers. Providers do not need to create special training to prevent foreseeable misuse, but they are encouraged to do so if it helps reduce risks.

<p>risks of AI systems on health, safety and fundamental rights. The risk-management system should be regularly <br /> reviewed and updated to ensure its continuing effectiveness, as well as justification and documentation of any <br /> significant decisions and actions taken subject to this Regulation. This process should ensure that the provider <br /> identifies risks or adverse impacts and implements mitigation measures for the known and reasonably foreseeable <br /> risks of AI systems to the health, safety and fundamental rights in light of their intended purpose and reasonably <br /> foreseeable misuse, including the possible risks arising from the interaction between the AI system and the <br /> environment within which it operates. The risk-management system should adopt the most appropriate <br /> risk-management measures in light of the state of the art in AI. When identifying the most appropriate <br /> risk-management measures, the provider should document and explain the choices made and, when relevant, <br /> involve experts and external stakeholders. In identifying the reasonably foreseeable misuse of high-risk AI systems, <br /> the provider should cover uses of AI systems which, while not directly covered by the intended purpose and <br /> provided for in the instruction for use may nevertheless be reasonably expected to result from readily predictable <br /> human behaviour in the context of the specific characteristics and use of a particular AI system. Any known or <br /> foreseeable circumstances related to the use of the high-risk AI system in accordance with its intended purpose or <br /> under conditions of reasonably foreseeable misuse, which may lead to risks to the health and safety or fundamental <br /> rights should be included in the instructions for use that are provided by the provider. This is to ensure that the <br /> deployer is aware and takes them into account when using the high-risk AI system. Identifying and implementing <br /> risk mitigation measures for foreseeable misuse under this Regulation should not require specific additional training <br /> for the high-risk AI system by the provider to address foreseeable misuse. The providers however are encouraged to <br /> consider such additional training measures to mitigate reasonable foreseeable misuses as necessary and appropriate.</p>
Show original text

Providers of high-risk AI systems are not required to add special training to prevent foreseeable misuse, but they are encouraged to do so when reasonable and appropriate. High-risk AI systems must meet specific requirements in several areas: risk management, data quality, technical documentation, record-keeping, transparency, information sharing with users, human oversight, and security. These requirements are necessary to protect health, safety, and fundamental rights, and they do not unfairly restrict trade since no better alternatives exist. High-quality data is essential for AI systems to work properly and safely, and to prevent discrimination. Training, validation, and testing data must be relevant, representative, accurate, and complete for the system's intended purpose. Data governance practices should follow EU data protection laws, including clear information about why personal data was originally collected.

<p>for foreseeable misuse under this Regulation should not require specific additional training <br /> for the high-risk AI system by the provider to address foreseeable misuse. The providers however are encouraged to <br /> consider such additional training measures to mitigate reasonable foreseeable misuses as necessary and appropriate.<br /> (66)<br /> Requirements should apply to high-risk AI systems as regards risk management, the quality and relevance of data <br /> sets used, technical documentation and record-keeping, transparency and the provision of information to deployers, <br /> human oversight, and robustness, accuracy and cybersecurity. Those requirements are necessary to effectively <br /> mitigate the risks for health, safety and fundamental rights. As no other less trade restrictive measures are reasonably <br /> available those requirements are not unjustified restrictions to trade.<br /> (67)<br /> High-quality data and access to high-quality data plays a vital role in providing structure and in ensuring the <br /> performance of many AI systems, especially when techniques involving the training of models are used, with a view <br /> to ensure that the high-risk AI system performs as intended and safely and it does not become a source of <br /> discrimination prohibited by Union law. High-quality data sets for training, validation and testing require the <br /> implementation of appropriate data governance and management practices. Data sets for training, validation and <br /> testing, including the labels, should be relevant, sufficiently representative, and to the best extent possible free of <br /> errors and complete in view of the intended purpose of the system. In order to facilitate compliance with Union data <br /> protection law, such as Regulation (EU) 2016/679, data governance and management practices should include, in <br /> the case of personal data, transparency about the original purpose of the data collection.</p>
Show original text

To follow EU data protection rules like Regulation (EU) 2016/679, organizations must be transparent about why they originally collected personal data. AI datasets used for high-risk systems must have good statistical quality and represent the groups of people who will use the system. Special care is needed to reduce bias in datasets, which could harm people's health, safety, fundamental rights, or cause illegal discrimination. Bias can come from historical data or develop when systems are used in real situations. AI results influenced by bias can gradually increase and strengthen existing discrimination, especially for vulnerable groups including racial or ethnic minorities. Datasets should be as complete and error-free as possible, but this doesn't prevent using privacy-protecting techniques when developing and testing AI systems. Datasets should include features and characteristics relevant to the specific location, context, behavior, or function where the AI system will operate.

<p>order to facilitate compliance with Union data <br /> protection law, such as Regulation (EU) 2016/679, data governance and management practices should include, in <br /> the case of personal data, transparency about the original purpose of the data collection. The data sets should also <br /> have the appropriate statistical properties, including as regards the persons or groups of persons in relation to whom <br /> the high-risk AI system is intended to be used, with specific attention to the mitigation of possible biases in the data <br /> sets, that are likely to affect the health and safety of persons, have a negative impact on fundamental rights or lead to <br /> discrimination prohibited under Union law, especially where data outputs influence inputs for future operations <br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 19/144</p> <p>(feedback loops). Biases can for example be inherent in underlying data sets, especially when historical data is being <br /> used, or generated when the systems are implemented in real world settings. Results provided by AI systems could be <br /> influenced by such inherent biases that are inclined to gradually increase and thereby perpetuate and amplify existing <br /> discrimination, in particular for persons belonging to certain vulnerable groups, including racial or ethnic groups. <br /> The requirement for the data sets to be to the best extent possible complete and free of errors should not affect the <br /> use of privacy-preserving techniques in the context of the development and testing of AI systems. In particular, data <br /> sets should take into account, to the extent required by their intended purpose, the features, characteristics or <br /> elements that are particular to the specific geographical, contextual, behavioural or functional setting which the AI <br /> system is intended to be used.</p>
Show original text

When creating datasets for AI systems, developers should include features and characteristics that match the specific location, context, behavior, or purpose where the AI will be used. Companies can meet data governance requirements by working with third-party providers who offer certified compliance services, including verification of data quality, dataset integrity, and proper training and testing practices.

To develop and test high-risk AI systems, key actors like AI providers, authorized testing bodies, European Digital Innovation Hubs, and researchers need access to high-quality datasets. The European Commission is establishing common data spaces to help businesses and government share data safely and fairly. For example, the European health data space will allow secure access to health data for training AI systems while protecting privacy and maintaining transparency. Government authorities can also help provide quality data for training and testing AI systems.

Privacy and personal data protection must be maintained throughout the entire life of an AI system. When handling personal data, developers must follow EU data protection rules, including using only necessary data and building privacy protections into the system from the start.

<p>, data <br /> sets should take into account, to the extent required by their intended purpose, the features, characteristics or <br /> elements that are particular to the specific geographical, contextual, behavioural or functional setting which the AI <br /> system is intended to be used. The requirements related to data governance can be complied with by having recourse <br /> to third parties that offer certified compliance services including verification of data governance, data set integrity, <br /> and data training, validation and testing practices, as far as compliance with the data requirements of this Regulation <br /> are ensured.<br /> (68)<br /> For the development and assessment of high-risk AI systems, certain actors, such as providers, notified bodies and <br /> other relevant entities, such as European Digital Innovation Hubs, testing experimentation facilities and researchers, <br /> should be able to access and use high-quality data sets within the fields of activities of those actors which are related <br /> to this Regulation. European common data spaces established by the Commission and the facilitation of data sharing <br /> between businesses and with government in the public interest will be instrumental to provide trustful, accountable <br /> and non-discriminatory access to high-quality data for the training, validation and testing of AI systems. For <br /> example, in health, the European health data space will facilitate non-discriminatory access to health data and the <br /> training of AI algorithms on those data sets, in a privacy-preserving, secure, timely, transparent and trustworthy <br /> manner, and with an appropriate institutional governance. Relevant competent authorities, including sectoral ones, <br /> providing or supporting the access to data may also support the provision of high-quality data for the training, <br /> validation and testing of AI systems.<br /> (69)<br /> The right to privacy and to protection of personal data must be guaranteed throughout the entire lifecycle of the AI <br /> system. In this regard, the principles of data minimisation and data protection by design and by default, as set out in <br /> Union data protection law, are applicable when personal data are processed.</p>
Show original text

Data protection must be maintained throughout the entire life of an AI system. Providers should follow data minimization and protection principles from EU data protection laws when handling personal data. They can use methods like anonymization, encryption, and technology that allows AI training without sharing or copying raw data between parties.

To prevent discrimination caused by bias in AI systems, providers may be allowed to process sensitive personal data in limited cases. This is only permitted when strictly necessary to detect and fix bias in high-risk AI systems, with proper safeguards for people's rights and freedoms, and in compliance with EU data protection regulations.

High-risk AI systems must have clear documentation showing how they were developed and how they perform over time. This information is essential for tracking these systems, ensuring they meet legal requirements, monitoring their operations, and checking their performance after they are released to the market.

<p>must be guaranteed throughout the entire lifecycle of the AI <br /> system. In this regard, the principles of data minimisation and data protection by design and by default, as set out in <br /> Union data protection law, are applicable when personal data are processed. Measures taken by providers to ensure <br /> compliance with those principles may include not only anonymisation and encryption, but also the use of <br /> technology that permits algorithms to be brought to the data and allows training of AI systems without the <br /> transmission between parties or copying of the raw or structured data themselves, without prejudice to the <br /> requirements on data governance provided for in this Regulation.<br /> (70)<br /> In order to protect the right of others from the discrimination that might result from the bias in AI systems, the <br /> providers should, exceptionally, to the extent that it is strictly necessary for the purpose of ensuring bias detection <br /> and correction in relation to the high-risk AI systems, subject to appropriate safeguards for the fundamental rights <br /> and freedoms of natural persons and following the application of all applicable conditions laid down under this <br /> Regulation in addition to the conditions laid down in Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive <br /> (EU) 2016/680, be able to process also special categories of personal data, as a matter of substantial public interest <br /> within the meaning of Article 9(2), point (g) of Regulation (EU) 2016/679 and Article 10(2), point (g) of Regulation <br /> (EU) 2018/1725.<br /> (71)<br /> Having comprehensible information on how high-risk AI systems have been developed and how they perform <br /> throughout their lifetime is essential to enable traceability of those systems, verify compliance with the requirements <br /> under this Regulation, as well as monitoring of their operations and post market monitoring.</p>
Show original text

High-risk AI systems must maintain detailed records and technical documentation throughout their entire lifetime. This documentation is necessary to track the systems, ensure they follow regulations, and monitor their performance after release. The technical documentation should describe the system's characteristics, capabilities, and limitations, including information about its algorithms, data, training, testing, and validation processes. It must also include risk management details and be kept current at all times. Additionally, high-risk AI systems must automatically record events through logs during their entire operational lifetime. Before high-risk AI systems are released to the market or put into service, they must be transparent and easy to understand. These systems should be designed so that users can comprehend how they work, evaluate their performance, and understand their strengths and weaknesses. High-risk AI systems must come with clear instructions for use that explain the system's characteristics, capabilities, and performance limitations.

<p>on how high-risk AI systems have been developed and how they perform <br /> throughout their lifetime is essential to enable traceability of those systems, verify compliance with the requirements <br /> under this Regulation, as well as monitoring of their operations and post market monitoring. This requires keeping <br /> records and the availability of technical documentation, containing information which is necessary to assess the <br /> compliance of the AI system with the relevant requirements and facilitate post market monitoring. Such information <br /> should include the general characteristics, capabilities and limitations of the system, algorithms, data, training, <br /> testing and validation processes used as well as documentation on the relevant risk-management system and drawn <br /> in a clear and comprehensive form. The technical documentation should be kept up to date, appropriately <br /> throughout the lifetime of the AI system. Furthermore, high-risk AI systems should technically allow for the <br /> automatic recording of events, by means of logs, over the duration of the lifetime of the system.<br /> EN<br /> OJ L, 12.7.2024<br /> 20/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>(72)<br /> To address concerns related to opacity and complexity of certain AI systems and help deployers to fulfil their <br /> obligations under this Regulation, transparency should be required for high-risk AI systems before they are placed <br /> on the market or put it into service. High-risk AI systems should be designed in a manner to enable deployers to <br /> understand how the AI system works, evaluate its functionality, and comprehend its strengths and limitations. <br /> High-risk AI systems should be accompanied by appropriate information in the form of instructions of use. Such <br /> information should include the characteristics, capabilities and limitations of performance of the AI system.</p>
Show original text

High-risk AI systems must come with clear instructions for use. These instructions should explain what the AI system can and cannot do, how well it performs, and what risks it might create for health, safety, and people's rights. The instructions should also describe any changes the provider has made and approved, as well as how humans should oversee the system's use. Clear information helps users understand the system better, make smart choices about whether to use it, learn the correct and incorrect ways to use it, and use it properly. To make instructions easier to read and understand, providers should include examples showing what the system can and cannot do. All documentation must be written in simple, clear language that the target users can easily understand. Instructions should be provided in the language of the country where the system will be used.

<p>works, evaluate its functionality, and comprehend its strengths and limitations. <br /> High-risk AI systems should be accompanied by appropriate information in the form of instructions of use. Such <br /> information should include the characteristics, capabilities and limitations of performance of the AI system. Those <br /> would cover information on possible known and foreseeable circumstances related to the use of the high-risk AI <br /> system, including deployer action that may influence system behaviour and performance, under which the AI system <br /> can lead to risks to health, safety, and fundamental rights, on the changes that have been pre-determined and <br /> assessed for conformity by the provider and on the relevant human oversight measures, including the measures to <br /> facilitate the interpretation of the outputs of the AI system by the deployers. Transparency, including the <br /> accompanying instructions for use, should assist deployers in the use of the system and support informed decision <br /> making by them. Deployers should, inter alia, be in a better position to make the correct choice of the system that <br /> they intend to use in light of the obligations applicable to them, be educated about the intended and precluded uses, <br /> and use the AI system correctly and as appropriate. In order to enhance legibility and accessibility of the information <br /> included in the instructions of use, where appropriate, illustrative examples, for instance on the limitations and on <br /> the intended and precluded uses of the AI system, should be included. Providers should ensure that all <br /> documentation, including the instructions for use, contains meaningful, comprehensive, accessible and <br /> understandable information, taking into account the needs and foreseeable knowledge of the target deployers. <br /> Instructions for use should be made available in a language which can be easily understood by target deployers, as <br /> determined by the Member State concerned.</p>
Show original text

Instructions for high-risk AI systems must be provided in a language that target users can easily understand, as decided by each Member State. High-risk AI systems should be designed so that people can monitor and control how they work, ensure they are used correctly, and manage any problems throughout their lifetime. Before selling or using these systems, providers must put in place appropriate human oversight measures. These measures should include: built-in limits that the system cannot override, responsiveness to human operators, and ensuring that people overseeing the system have the right training, skills, and authority. High-risk AI systems should also have features that help people make informed decisions about when and how to intervene to prevent harm or stop the system if it is not working properly. For biometric identification systems (which match faces or fingerprints), there is an extra requirement: at least two different people must separately verify and confirm any identification before the system's user can take action based on that result. These two people can work for the same or different organizations and may include the system operator. This verification should not cause unnecessary delays, and the system can automatically record these confirmations in its logs.

<p>understandable information, taking into account the needs and foreseeable knowledge of the target deployers. <br /> Instructions for use should be made available in a language which can be easily understood by target deployers, as <br /> determined by the Member State concerned.<br /> (73)<br /> High-risk AI systems should be designed and developed in such a way that natural persons can oversee their <br /> functioning, ensure that they are used as intended and that their impacts are addressed over the system’s lifecycle. To <br /> that end, appropriate human oversight measures should be identified by the provider of the system before its placing <br /> on the market or putting into service. In particular, where appropriate, such measures should guarantee that the <br /> system is subject to in-built operational constraints that cannot be overridden by the system itself and is responsive <br /> to the human operator, and that the natural persons to whom human oversight has been assigned have the necessary <br /> competence, training and authority to carry out that role. It is also essential, as appropriate, to ensure that high-risk <br /> AI systems include mechanisms to guide and inform a natural person to whom human oversight has been assigned <br /> to make informed decisions if, when and how to intervene in order to avoid negative consequences or risks, or stop <br /> the system if it does not perform as intended. Considering the significant consequences for persons in the case of an <br /> incorrect match by certain biometric identification systems, it is appropriate to provide for an enhanced human <br /> oversight requirement for those systems so that no action or decision may be taken by the deployer on the basis of <br /> the identification resulting from the system unless this has been separately verified and confirmed by at least two <br /> natural persons. Those persons could be from one or more entities and include the person operating or using the <br /> system. This requirement should not pose unnecessary burden or delays and it could be sufficient that the separate <br /> verifications by the different persons are automatically recorded in the logs generated by the system.</p>
Show original text

High-risk AI systems must work reliably throughout their entire lifespan and meet appropriate standards for accuracy, robustness, and cybersecurity based on their intended use and current best practices. The Commission and relevant organizations should focus on reducing risks and negative impacts of these systems. Providers must clearly state the expected performance levels in their instructions and communicate this information to users in a straightforward, honest way without confusion or misleading claims. EU measurement laws (Directives 2014/31/EU and 2014/32/EU) ensure measurement accuracy and promote transparency in business. The Commission should work with measurement and benchmarking authorities to develop standards and testing methods for AI systems, and should coordinate with international partners on AI measurement standards. For law enforcement, migration, border control, and asylum applications, these verification requirements may not apply if Union or national law determines they would be disproportionate. System logs can automatically record separate verifications by different operators without creating unnecessary burden or delays.

<p>one or more entities and include the person operating or using the <br /> system. This requirement should not pose unnecessary burden or delays and it could be sufficient that the separate <br /> verifications by the different persons are automatically recorded in the logs generated by the system. Given the <br /> specificities of the areas of law enforcement, migration, border control and asylum, this requirement should not <br /> apply where Union or national law considers the application of that requirement to be disproportionate.<br /> (74)<br /> High-risk AI systems should perform consistently throughout their lifecycle and meet an appropriate level of <br /> accuracy, robustness and cybersecurity, in light of their intended purpose and in accordance with the generally <br /> acknowledged state of the art. The Commission and relevant organisations and stakeholders are encouraged to take <br /> due consideration of the mitigation of risks and the negative impacts of the AI system. The expected level of <br /> performance metrics should be declared in the accompanying instructions of use. Providers are urged to <br /> communicate that information to deployers in a clear and easily understandable way, free of misunderstandings or <br /> misleading statements. Union law on legal metrology, including Directives 2014/31/EU (35) and 2014/32/EU (36) of <br /> the European Parliament and of the Council, aims to ensure the accuracy of measurements and to help the <br /> transparency and fairness of commercial transactions. In that context, in cooperation with relevant stakeholders and <br /> organisation, such as metrology and benchmarking authorities, the Commission should encourage, as appropriate, <br /> the development of benchmarks and measurement methodologies for AI systems. In doing so, the Commission <br /> should take note and collaborate with international partners working on metrology and relevant measurement <br /> indicators relating to AI.<br /> OJ L, 12.7.</p>
Show original text

The Commission should develop benchmarks and measurement methods for AI systems, working with international partners on AI measurement standards and indicators. High-risk AI systems must be technically robust and able to handle errors, faults, and unexpected situations. Companies should implement technical and organizational safeguards to ensure these systems work reliably. This includes designing safety features that allow the system to stop operating safely if problems occur or if it operates outside normal limits. Without these protections, AI systems could cause safety problems or harm people's fundamental rights through incorrect decisions or biased results.

<p>the development of benchmarks and measurement methodologies for AI systems. In doing so, the Commission <br /> should take note and collaborate with international partners working on metrology and relevant measurement <br /> indicators relating to AI.<br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 21/144<br /> (35)<br /> Directive 2014/31/EU of the European Parliament and of the Council of 26 February 2014 on the harmonisation of the laws of the <br /> Member States relating to the making available on the market of non-automatic weighing instruments (OJ L 96, 29.3.2014, p. 107).<br /> (36)<br /> Directive 2014/32/EU of the European Parliament and of the Council of 26 February 2014 on the harmonisation of the laws of the <br /> Member States relating to the making available on the market of measuring instruments (OJ L 96, 29.3.2014, p. 149).</p> <p>(75)<br /> Technical robustness is a key requirement for high-risk AI systems. They should be resilient in relation to harmful or <br /> otherwise undesirable behaviour that may result from limitations within the systems or the environment in which <br /> the systems operate (e.g. errors, faults, inconsistencies, unexpected situations). Therefore, technical and <br /> organisational measures should be taken to ensure robustness of high-risk AI systems, for example by designing <br /> and developing appropriate technical solutions to prevent or minimise harmful or otherwise undesirable behaviour. <br /> Those technical solution may include for instance mechanisms enabling the system to safely interrupt its operation <br /> (fail-safe plans) in the presence of certain anomalies or when operation takes place outside certain predetermined <br /> boundaries. Failure to protect against these risks could lead to safety impacts or negatively affect the fundamental <br /> rights, for example due to erroneous decisions or wrong or biased outputs generated by the AI system.</p>
Show original text

AI systems must be protected against risks that could cause safety problems or harm people's rights through wrong decisions or biased outputs. Cybersecurity is essential to keep AI systems safe from attacks by malicious actors who try to change how the system works, steal its data, or exploit weaknesses. Attackers can target AI-specific components like training data (data poisoning) or trained models (adversarial attacks), or they can exploit vulnerabilities in the system's digital infrastructure. Providers of high-risk AI systems must implement appropriate security measures and controls to protect against these threats. High-risk AI systems that meet European cybersecurity standards for digital products can demonstrate compliance with this regulation's cybersecurity requirements by following those standards. When high-risk AI systems meet the essential cybersecurity requirements of European regulations for digital products, they are considered compliant with this regulation's cybersecurity rules, as long as this compliance is documented in the EU declaration of conformity.

<p>certain anomalies or when operation takes place outside certain predetermined <br /> boundaries. Failure to protect against these risks could lead to safety impacts or negatively affect the fundamental <br /> rights, for example due to erroneous decisions or wrong or biased outputs generated by the AI system.<br /> (76)<br /> Cybersecurity plays a crucial role in ensuring that AI systems are resilient against attempts to alter their use, <br /> behaviour, performance or compromise their security properties by malicious third parties exploiting the system’s <br /> vulnerabilities. Cyberattacks against AI systems can leverage AI specific assets, such as training data sets (e.g. data <br /> poisoning) or trained models (e.g. adversarial attacks or membership inference), or exploit vulnerabilities in the AI <br /> system’s digital assets or the underlying ICT infrastructure. To ensure a level of cybersecurity appropriate to the risks, <br /> suitable measures, such as security controls, should therefore be taken by the providers of high-risk AI systems, also <br /> taking into account as appropriate the underlying ICT infrastructure.<br /> (77)<br /> Without prejudice to the requirements related to robustness and accuracy set out in this Regulation, high-risk AI <br /> systems which fall within the scope of a regulation of the European Parliament and of the Council on horizontal <br /> cybersecurity requirements for products with digital elements, in accordance with that regulation may demonstrate <br /> compliance with the cybersecurity requirements of this Regulation by fulfilling the essential cybersecurity <br /> requirements set out in that regulation. When high-risk AI systems fulfil the essential requirements of a regulation of <br /> the European Parliament and of the Council on horizontal cybersecurity requirements for products with digital <br /> elements, they should be deemed compliant with the cybersecurity requirements set out in this Regulation in so far <br /> as the achievement of those requirements is demonstrated in the EU declaration of conformity or parts thereof <br /> issued under that regulation.</p>
Show original text

Products with digital elements that meet cybersecurity requirements under EU regulations can be considered compliant if they demonstrate this compliance in their EU declaration of conformity. When assessing cybersecurity risks for high-risk AI systems, regulators must evaluate threats to the system's resilience, including attempts by unauthorized parties to change how the system works or performs. This includes AI-specific vulnerabilities like data poisoning and adversarial attacks, as well as risks to fundamental rights. High-risk AI systems must follow the conformity assessment procedures outlined in EU cybersecurity regulations. However, for critical products with digital elements, the assessment standards cannot be lowered. Therefore, high-risk AI systems that are also classified as important or critical products must follow the stricter conformity assessment requirements for critical products, particularly regarding their essential cybersecurity requirements.

<p>cybersecurity requirements for products with digital <br /> elements, they should be deemed compliant with the cybersecurity requirements set out in this Regulation in so far <br /> as the achievement of those requirements is demonstrated in the EU declaration of conformity or parts thereof <br /> issued under that regulation. To that end, the assessment of the cybersecurity risks, associated to a product with <br /> digital elements classified as high-risk AI system according to this Regulation, carried out under a regulation of the <br /> European Parliament and of the Council on horizontal cybersecurity requirements for products with digital <br /> elements, should consider risks to the cyber resilience of an AI system as regards attempts by unauthorised third <br /> parties to alter its use, behaviour or performance, including AI specific vulnerabilities such as data poisoning or <br /> adversarial attacks, as well as, as relevant, risks to fundamental rights as required by this Regulation.<br /> (78)<br /> The conformity assessment procedure provided by this Regulation should apply in relation to the essential <br /> cybersecurity requirements of a product with digital elements covered by a regulation of the European Parliament <br /> and of the Council on horizontal cybersecurity requirements for products with digital elements and classified as <br /> a high-risk AI system under this Regulation. However, this rule should not result in reducing the necessary level of <br /> assurance for critical products with digital elements covered by a regulation of the European Parliament and of the <br /> Council on horizontal cybersecurity requirements for products with digital elements. Therefore, by way of <br /> derogation from this rule, high-risk AI systems that fall within the scope of this Regulation and are also qualified as <br /> important and critical products with digital elements pursuant to a regulation of the European Parliament and of the <br /> Council on horizontal cybersecurity requirements for products with digital elements and to which the conformity <br /> assessment procedure based on internal control set out in an annex to this Regulation applies, are subject to the <br /> conformity assessment provisions of a regulation of the European Parliament and of the Council on horizontal <br /> cybersecurity requirements for products with digital elements insofar as the essential cybersecurity requirements of</p>
Show original text

Products with digital elements that meet certain cybersecurity standards must follow the cybersecurity assessment rules from European Parliament and Council regulations. For all other requirements in this regulation, companies must use internal control methods to verify compliance. The European Commission should work with ENISA (the European Union Agency for Cybersecurity) on AI system security issues, using ENISA's expertise from Regulation (EU) 2019/881. A provider—the company or person responsible for selling or using a high-risk AI system—must take responsibility for that system, even if they did not design or develop it themselves.

<p>out in an annex to this Regulation applies, are subject to the <br /> conformity assessment provisions of a regulation of the European Parliament and of the Council on horizontal <br /> cybersecurity requirements for products with digital elements insofar as the essential cybersecurity requirements of <br /> that regulation are concerned. In this case, for all the other aspects covered by this Regulation the respective <br /> provisions on conformity assessment based on internal control set out in an annex to this Regulation should apply. <br /> Building on the knowledge and expertise of ENISA on the cybersecurity policy and tasks assigned to ENISA under <br /> the Regulation (EU) 2019/881 of the European Parliament and of the Council (37), the Commission should cooperate <br /> with ENISA on issues related to cybersecurity of AI systems.<br /> EN<br /> OJ L, 12.7.2024<br /> 22/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> (37)<br /> Regulation (EU) 2019/881 of the European Parliament and of the Council of 17 April 2019 on ENISA (the European Union Agency <br /> for Cybersecurity) and on information and communications technology cybersecurity certification and repealing Regulation <br /> (EU) No 526/2013 (Cybersecurity Act) (OJ L 151, 7.6.2019, p. 15).</p> <p>(79)<br /> It is appropriate that a specific natural or legal person, defined as the provider, takes responsibility for the placing on <br /> the market or the putting into service of a high-risk AI system, regardless of whether that natural or legal person is <br /> the person who designed or developed the system.</p>
Show original text

A provider (any person or organization) is responsible for releasing a high-risk AI system to the market or putting it into service. This responsibility applies regardless of whether the provider designed or developed the system themselves.

The EU and its Member States have signed the United Nations Convention on the Rights of Persons with Disabilities. This legally requires them to protect people with disabilities from discrimination, ensure they have equal access to information and communication technologies, and respect their privacy. As AI systems become more widely used, all new technologies and services should be designed using universal design principles. This ensures everyone, including people with disabilities, can fully and equally access and use AI technologies while respecting their dignity and diversity. Providers must comply with accessibility requirements, including EU Directive 2016/2102 and EU Directive 2019/882. These accessibility requirements should be built into the AI system from the start of its design.

Providers of high-risk AI systems must establish a quality management system, complete required conformity assessments, prepare necessary documentation, and set up a system to monitor the product after it is released to the market. If providers already have quality management systems required by other EU laws in their sector, they can integrate the quality management requirements from this regulation into their existing systems.

<p>natural or legal person, defined as the provider, takes responsibility for the placing on <br /> the market or the putting into service of a high-risk AI system, regardless of whether that natural or legal person is <br /> the person who designed or developed the system.<br /> (80)<br /> As signatories to the United Nations Convention on the Rights of Persons with Disabilities, the Union and the <br /> Member States are legally obliged to protect persons with disabilities from discrimination and promote their equality, <br /> to ensure that persons with disabilities have access, on an equal basis with others, to information and <br /> communications technologies and systems, and to ensure respect for privacy for persons with disabilities. Given the <br /> growing importance and use of AI systems, the application of universal design principles to all new technologies and <br /> services should ensure full and equal access for everyone potentially affected by or using AI technologies, including <br /> persons with disabilities, in a way that takes full account of their inherent dignity and diversity. It is therefore <br /> essential that providers ensure full compliance with accessibility requirements, including Directive (EU) 2016/2102 <br /> of the European Parliament and of the Council (38) and Directive (EU) 2019/882. Providers should ensure <br /> compliance with these requirements by design. Therefore, the necessary measures should be integrated as much as <br /> possible into the design of the high-risk AI system.<br /> (81)<br /> The provider should establish a sound quality management system, ensure the accomplishment of the required <br /> conformity assessment procedure, draw up the relevant documentation and establish a robust post-market <br /> monitoring system. Providers of high-risk AI systems that are subject to obligations regarding quality management <br /> systems under relevant sectoral Union law should have the possibility to include the elements of the quality <br /> management system provided for in this Regulation as part of the existing quality management system provided for <br /> in that other sectoral Union law.</p>
Show original text

Organizations that already have quality management systems under other EU laws can include the quality management requirements from this Regulation into their existing systems. The Commission will consider how this Regulation works alongside other EU laws when developing future standards and guidance. Public authorities using high-risk AI systems for their own purposes can adopt these quality management requirements as part of their national or regional quality systems, taking into account their specific sector needs and organizational structure.

To enforce this Regulation fairly across all operators, someone in the EU must be able to provide authorities with information proving an AI system complies with the rules. Therefore, companies from outside the EU must appoint a representative based in the EU before selling their AI systems in the EU. This representative is responsible for ensuring high-risk AI systems meet compliance requirements and serves as the main contact person in the EU.

Because AI systems involve complex supply chains with many different companies involved, it is important to clarify the roles and responsibilities of all operators in that chain, including importers and distributors who help develop AI systems. This provides legal certainty and makes it easier for everyone to follow this Regulation.

<p>obligations regarding quality management <br /> systems under relevant sectoral Union law should have the possibility to include the elements of the quality <br /> management system provided for in this Regulation as part of the existing quality management system provided for <br /> in that other sectoral Union law. The complementarity between this Regulation and existing sectoral Union law <br /> should also be taken into account in future standardisation activities or guidance adopted by the Commission. Public <br /> authorities which put into service high-risk AI systems for their own use may adopt and implement the rules for the <br /> quality management system as part of the quality management system adopted at a national or regional level, as <br /> appropriate, taking into account the specificities of the sector and the competences and organisation of the public <br /> authority concerned.<br /> (82)<br /> To enable enforcement of this Regulation and create a level playing field for operators, and, taking into account the <br /> different forms of making available of digital products, it is important to ensure that, under all circumstances, <br /> a person established in the Union can provide authorities with all the necessary information on the compliance of an <br /> AI system. Therefore, prior to making their AI systems available in the Union, providers established in third <br /> countries should, by written mandate, appoint an authorised representative established in the Union. This authorised <br /> representative plays a pivotal role in ensuring the compliance of the high-risk AI systems placed on the market or <br /> put into service in the Union by those providers who are not established in the Union and in serving as their contact <br /> person established in the Union.<br /> (83)<br /> In light of the nature and complexity of the value chain for AI systems and in line with the New Legislative <br /> Framework, it is essential to ensure legal certainty and facilitate the compliance with this Regulation. Therefore, it is <br /> necessary to clarify the role and the specific obligations of relevant operators along that value chain, such as <br /> importers and distributors who may contribute to the development of AI systems.</p>
Show original text

To make the rules clear and help companies follow this Regulation, we need to explain what different operators must do. Operators in the AI supply chain—such as importers and distributors—have specific responsibilities. Some operators may have multiple roles at the same time and must follow all the rules for each role. For example, a company could be both a distributor and an importer.

To avoid confusion, certain parties must be treated as providers of high-risk AI systems and follow all related rules. This applies if they: (1) put their name or trademark on a high-risk AI system that is already being sold or used, unless a contract says otherwise; (2) make significant changes to a high-risk AI system that is already on the market or in use, keeping it as high-risk; or (3) change how an AI system is meant to be used—including general-purpose AI systems—in a way that makes it become high-risk, even if it was not classified as high-risk before. These rules apply alongside other specific requirements in EU product legislation.

<p>ensure legal certainty and facilitate the compliance with this Regulation. Therefore, it is <br /> necessary to clarify the role and the specific obligations of relevant operators along that value chain, such as <br /> importers and distributors who may contribute to the development of AI systems. In certain situations those <br /> operators could act in more than one role at the same time and should therefore fulfil cumulatively all relevant <br /> obligations associated with those roles. For example, an operator could act as a distributor and an importer at the <br /> same time.<br /> (84)<br /> To ensure legal certainty, it is necessary to clarify that, under certain specific conditions, any distributor, importer, <br /> deployer or other third-party should be considered to be a provider of a high-risk AI system and therefore assume all <br /> the relevant obligations. This would be the case if that party puts its name or trademark on a high-risk AI system <br /> already placed on the market or put into service, without prejudice to contractual arrangements stipulating that the <br /> obligations are allocated otherwise. This would also be the case if that party makes a substantial modification to <br /> a high-risk AI system that has already been placed on the market or has already been put into service in a way that it <br /> remains a high-risk AI system in accordance with this Regulation, or if it modifies the intended purpose of an AI <br /> system, including a general-purpose AI system, which has not been classified as high-risk and has already been <br /> placed on the market or put into service, in a way that the AI system becomes a high-risk AI system in accordance <br /> with this Regulation. Those provisions should apply without prejudice to more specific provisions established in <br /> certain Union harmonisation legislation based on the New Legislative Framework, together with which this <br /> OJ L, 12.7.</p>
Show original text

High-risk AI systems must follow this Regulation. These rules work alongside other EU laws, particularly those based on the New Legislative Framework. For example, medical device rules from Regulation (EU) 2017/745 continue to apply to high-risk AI systems that are also medical devices. General-purpose AI systems can be used as high-risk systems on their own or as parts of other high-risk systems. Because of this, providers of general-purpose AI systems must work closely with providers of high-risk AI systems to ensure everyone follows the required rules and works with the relevant authorities. This shared responsibility applies throughout the AI supply chain.

<p>-risk AI system in accordance <br /> with this Regulation. Those provisions should apply without prejudice to more specific provisions established in <br /> certain Union harmonisation legislation based on the New Legislative Framework, together with which this <br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 23/144<br /> (38)<br /> Directive (EU) 2016/2102 of the European Parliament and of the Council of 26 October 2016 on the accessibility of the websites <br /> and mobile applications of public sector bodies (OJ L 327, 2.12.2016, p. 1).</p> <p>Regulation should apply. For example, Article 16(2) of Regulation (EU) 2017/745, establishing that certain changes <br /> should not be considered to be modifications of a device that could affect its compliance with the applicable <br /> requirements, should continue to apply to high-risk AI systems that are medical devices within the meaning of that <br /> Regulation.<br /> (85)<br /> General-purpose AI systems may be used as high-risk AI systems by themselves or be components of other high-risk <br /> AI systems. Therefore, due to their particular nature and in order to ensure a fair sharing of responsibilities along the <br /> AI value chain, the providers of such systems should, irrespective of whether they may be used as high-risk AI <br /> systems as such by other providers or as components of high-risk AI systems and unless provided otherwise under <br /> this Regulation, closely cooperate with the providers of the relevant high-risk AI systems to enable their compliance <br /> with the relevant obligations under this Regulation and with the competent authorities established under this <br /> Regulation.</p>
Show original text

Distributors and other parties must work closely with providers of high-risk AI systems to help them follow the rules in this Regulation and cooperate with the relevant authorities. If the original provider of an AI system is no longer considered the provider under this Regulation, they must still cooperate fully. They must share necessary information, provide technical access, and offer assistance needed to meet the Regulation's requirements, especially for safety checks of high-risk AI systems. When a high-risk AI system is a safety component built into a product covered by EU product safety laws, the product manufacturer must follow the provider's obligations in this Regulation. They must ensure the AI system inside the final product meets all required standards. Throughout the AI supply chain, many companies provide AI systems, tools, services, and components that are combined by the provider into the final AI system. These contributions support various stages including training, retraining, testing, evaluation, software integration, and other development activities.

<p>of high-risk AI systems and unless provided otherwise under <br /> this Regulation, closely cooperate with the providers of the relevant high-risk AI systems to enable their compliance <br /> with the relevant obligations under this Regulation and with the competent authorities established under this <br /> Regulation.<br /> (86)<br /> Where, under the conditions laid down in this Regulation, the provider that initially placed the AI system on the <br /> market or put it into service should no longer be considered to be the provider for the purposes of this Regulation, <br /> and when that provider has not expressly excluded the change of the AI system into a high-risk AI system, the <br /> former provider should nonetheless closely cooperate and make available the necessary information and provide the <br /> reasonably expected technical access and other assistance that are required for the fulfilment of the obligations set <br /> out in this Regulation, in particular regarding the compliance with the conformity assessment of high-risk AI <br /> systems.<br /> (87)<br /> In addition, where a high-risk AI system that is a safety component of a product which falls within the scope of <br /> Union harmonisation legislation based on the New Legislative Framework is not placed on the market or put into <br /> service independently from the product, the product manufacturer defined in that legislation should comply with <br /> the obligations of the provider established in this Regulation and should, in particular, ensure that the AI system <br /> embedded in the final product complies with the requirements of this Regulation.<br /> (88)<br /> Along the AI value chain multiple parties often supply AI systems, tools and services but also components or <br /> processes that are incorporated by the provider into the AI system with various objectives, including the model <br /> training, model retraining, model testing and evaluation, integration into software, or other aspects of model <br /> development.</p>
Show original text

Suppliers provide components and services that are built into AI systems for training, testing, development, and integration. These suppliers play a key role in the AI value chain and must provide the main AI provider with necessary information, technical access, and support through written agreements. This helps the provider meet regulatory requirements while protecting the supplier's intellectual property and trade secrets.

Third parties who share free and open-source AI tools, services, or components (other than general-purpose AI models) are not required to follow the same value chain responsibilities as commercial providers. However, these developers are encouraged to use standard documentation practices like model cards and data sheets to share information more easily and promote trustworthy AI systems in the EU.

The European Commission may create and suggest voluntary contract templates between high-risk AI system providers and third-party suppliers. These templates would make cooperation easier along the value chain and could include requirements specific to different industries or business situations.

<p>and services but also components or <br /> processes that are incorporated by the provider into the AI system with various objectives, including the model <br /> training, model retraining, model testing and evaluation, integration into software, or other aspects of model <br /> development. Those parties have an important role to play in the value chain towards the provider of the high-risk <br /> AI system into which their AI systems, tools, services, components or processes are integrated, and should provide <br /> by written agreement this provider with the necessary information, capabilities, technical access and other assistance <br /> based on the generally acknowledged state of the art, in order to enable the provider to fully comply with the <br /> obligations set out in this Regulation, without compromising their own intellectual property rights or trade secrets.<br /> (89)<br /> Third parties making accessible to the public tools, services, processes, or AI components other than <br /> general-purpose AI models, should not be mandated to comply with requirements targeting the responsibilities <br /> along the AI value chain, in particular towards the provider that has used or integrated them, when those tools, <br /> services, processes, or AI components are made accessible under a free and open-source licence. Developers of free <br /> and open-source tools, services, processes, or AI components other than general-purpose AI models should be <br /> encouraged to implement widely adopted documentation practices, such as model cards and data sheets, as a way to <br /> accelerate information sharing along the AI value chain, allowing the promotion of trustworthy AI systems in the <br /> Union.<br /> (90)<br /> The Commission could develop and recommend voluntary model contractual terms between providers of high-risk <br /> AI systems and third parties that supply tools, services, components or processes that are used or integrated in <br /> high-risk AI systems, to facilitate the cooperation along the value chain. When developing voluntary model <br /> contractual terms, the Commission should also take into account possible contractual requirements applicable in <br /> specific sectors or business cases.</p>
Show original text

The Commission should develop voluntary contract templates for high-risk AI systems to improve cooperation across the supply chain, considering specific sector requirements. Companies that deploy high-risk AI systems have specific responsibilities. They must use these systems according to instructions, monitor their performance in real-world conditions, and keep proper records. Deployers must ensure that staff implementing these instructions and providing human oversight have adequate AI knowledge, training, and authority. These obligations do not replace other legal requirements under EU or national law. This regulation does not affect employer obligations to inform or consult workers and their representatives under EU or national law, including Directive 2002/14/EC. Employers must inform workers and their representatives about plans to deploy high-risk AI systems in the workplace, especially when other legal requirements do not apply.

<p>processes that are used or integrated in <br /> high-risk AI systems, to facilitate the cooperation along the value chain. When developing voluntary model <br /> contractual terms, the Commission should also take into account possible contractual requirements applicable in <br /> specific sectors or business cases.<br /> (91)<br /> Given the nature of AI systems and the risks to safety and fundamental rights possibly associated with their use, <br /> including as regards the need to ensure proper monitoring of the performance of an AI system in a real-life setting, it <br /> is appropriate to set specific responsibilities for deployers. Deployers should in particular take appropriate technical <br /> and organisational measures to ensure they use high-risk AI systems in accordance with the instructions of use and <br /> certain other obligations should be provided for with regard to monitoring of the functioning of the AI systems and <br /> with regard to record-keeping, as appropriate. Furthermore, deployers should ensure that the persons assigned to <br /> implement the instructions for use and human oversight as set out in this Regulation have the necessary <br /> EN<br /> OJ L, 12.7.2024<br /> 24/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>competence, in particular an adequate level of AI literacy, training and authority to properly fulfil those tasks. Those <br /> obligations should be without prejudice to other deployer obligations in relation to high-risk AI systems under <br /> Union or national law.<br /> (92)<br /> This Regulation is without prejudice to obligations for employers to inform or to inform and consult workers or <br /> their representatives under Union or national law and practice, including Directive 2002/14/EC of the European <br /> Parliament and of the Council (39), on decisions to put into service or use AI systems. It remains necessary to ensure <br /> information of workers and their representatives on the planned deployment of high-risk AI systems at the <br /> workplace where the conditions for those information or information and consultation obligations in other legal <br /> instruments are not fulfilled.</p>
Show original text

Companies must inform workers and their representatives before using high-risk AI systems in the workplace, unless other laws already require this. This requirement protects workers' fundamental rights.

Risks from AI systems can come from how they are designed or how they are used. Companies that deploy high-risk AI systems have an important responsibility to protect people's rights. They understand the specific workplace context better than the developers and can identify risks that weren't obvious during development. They know who will be affected, including vulnerable groups.

Companies deploying high-risk AI systems must inform people when the AI system will be used to make decisions about them. This information should explain the AI's purpose and what types of decisions it makes. Companies must also tell people about their right to receive an explanation of the AI's decisions. For AI systems used by law enforcement, this requirement must follow EU Directive 2016/680.

<p>AI systems. It remains necessary to ensure <br /> information of workers and their representatives on the planned deployment of high-risk AI systems at the <br /> workplace where the conditions for those information or information and consultation obligations in other legal <br /> instruments are not fulfilled. Moreover, such information right is ancillary and necessary to the objective of <br /> protecting fundamental rights that underlies this Regulation. Therefore, an information requirement to that effect <br /> should be laid down in this Regulation, without affecting any existing rights of workers.<br /> (93)<br /> Whilst risks related to AI systems can result from the way such systems are designed, risks can as well stem from <br /> how such AI systems are used. Deployers of high-risk AI system therefore play a critical role in ensuring that <br /> fundamental rights are protected, complementing the obligations of the provider when developing the AI system. <br /> Deployers are best placed to understand how the high-risk AI system will be used concretely and can therefore <br /> identify potential significant risks that were not foreseen in the development phase, due to a more precise knowledge <br /> of the context of use, the persons or groups of persons likely to be affected, including vulnerable groups. Deployers <br /> of high-risk AI systems listed in an annex to this Regulation also play a critical role in informing natural persons and <br /> should, when they make decisions or assist in making decisions related to natural persons, where applicable, inform <br /> the natural persons that they are subject to the use of the high-risk AI system. This information should include the <br /> intended purpose and the type of decisions it makes. The deployer should also inform the natural persons about <br /> their right to an explanation provided under this Regulation. With regard to high-risk AI systems used for law <br /> enforcement purposes, that obligation should be implemented in accordance with Article 13 of Directive (EU) <br /> 2016/680.</p>
Show original text

People have the right to receive explanations about AI systems under this regulation. For high-risk AI systems used by law enforcement, this requirement follows Article 13 of Directive (EU) 2016/680.

When AI systems use biometric data (like facial recognition) for law enforcement purposes, they must follow Article 10 of Directive (EU) 2016/680. This means such processing is only allowed when absolutely necessary, with proper protections for individuals' rights, and when authorized by EU or national law. These systems must also follow the core principles in Article 4(1) of Directive (EU) 2016/680: lawfulness, fairness, transparency, using data only for stated purposes, accuracy, and limiting how long data is kept.

Post-remote biometric identification systems (systems that identify people from recorded video footage) are intrusive and require strong safeguards under EU law, particularly Regulations (EU) 2016/679 and 2016/680. These systems must be used only when proportionate, legitimate, and strictly necessary. They should target specific individuals in specific locations during specific time periods, and only use legally obtained video footage. Law enforcement cannot use these systems for mass surveillance. The rules for post-remote biometric identification cannot be used to bypass the strict restrictions on real-time remote biometric identification.

<p>persons about <br /> their right to an explanation provided under this Regulation. With regard to high-risk AI systems used for law <br /> enforcement purposes, that obligation should be implemented in accordance with Article 13 of Directive (EU) <br /> 2016/680.<br /> (94)<br /> Any processing of biometric data involved in the use of AI systems for biometric identification for the purpose of <br /> law enforcement needs to comply with Article 10 of Directive (EU) 2016/680, that allows such processing only <br /> where strictly necessary, subject to appropriate safeguards for the rights and freedoms of the data subject, and where <br /> authorised by Union or Member State law. Such use, when authorised, also needs to respect the principles laid down <br /> in Article 4 (1) of Directive (EU) 2016/680 including lawfulness, fairness and transparency, purpose limitation, <br /> accuracy and storage limitation.<br /> (95)<br /> Without prejudice to applicable Union law, in particular Regulation (EU) 2016/679 and Directive (EU) 2016/680, <br /> considering the intrusive nature of post-remote biometric identification systems, the use of post-remote biometric <br /> identification systems should be subject to safeguards. Post-remote biometric identification systems should always be <br /> used in a way that is proportionate, legitimate and strictly necessary, and thus targeted, in terms of the individuals to <br /> be identified, the location, temporal scope and based on a closed data set of legally acquired video footage. In any <br /> case, post-remote biometric identification systems should not be used in the framework of law enforcement to lead <br /> to indiscriminate surveillance. The conditions for post-remote biometric identification should in any case not <br /> provide a basis to circumvent the conditions of the prohibition and strict exceptions for real time remote biometric <br /> identification.</p>
Show original text

Law enforcement must not use biometric identification systems in ways that allow uncontrolled surveillance. Rules that restrict real-time biometric identification should not be bypassed through post-identification methods.

To protect people's rights, organizations that deploy high-risk AI systems must conduct a fundamental rights impact assessment before using them. This requirement applies to government bodies, private companies providing public services, and specific high-risk AI systems in sectors like banking and insurance. Private companies may provide important public services in areas such as education, healthcare, social services, housing, and justice.

The purpose of the impact assessment is to identify specific risks to individuals or groups who may be affected by the AI system, and to determine what measures should be taken if those risks occur. Organizations must complete this assessment before deploying the high-risk AI system and update it whenever relevant factors change.

<p>framework of law enforcement to lead <br /> to indiscriminate surveillance. The conditions for post-remote biometric identification should in any case not <br /> provide a basis to circumvent the conditions of the prohibition and strict exceptions for real time remote biometric <br /> identification.<br /> (96)<br /> In order to efficiently ensure that fundamental rights are protected, deployers of high-risk AI systems that are bodies <br /> governed by public law, or private entities providing public services and deployers of certain high-risk AI systems <br /> listed in an annex to this Regulation, such as banking or insurance entities, should carry out a fundamental rights <br /> impact assessment prior to putting it into use. Services important for individuals that are of public nature may also <br /> be provided by private entities. Private entities providing such public services are linked to tasks in the public interest <br /> such as in the areas of education, healthcare, social services, housing, administration of justice. The aim of the <br /> fundamental rights impact assessment is for the deployer to identify the specific risks to the rights of individuals or <br /> groups of individuals likely to be affected, identify measures to be taken in the case of a materialisation of those risks. <br /> The impact assessment should be performed prior to deploying the high-risk AI system, and should be updated <br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 25/144<br /> (39)<br /> Directive 2002/14/EC of the European Parliament and of the Council of 11 March 2002 establishing a general framework for <br /> informing and consulting employees in the European Community (OJ L 80, 23.3.2002, p. 29).</p> <p>when the deployer considers that any of the relevant factors have changed.</p>
Show original text

Organizations that deploy high-risk AI systems must conduct an impact assessment whenever relevant factors change. This assessment should: (1) identify how and where the AI system will be used according to its intended purpose, including the timeframe and frequency of use; (2) describe which groups of people may be affected; (3) identify specific risks that could harm people's fundamental rights. Deployers should review information provided by the AI system's creator, including user instructions. Based on identified risks, deployers must plan protective measures such as human oversight arrangements, complaint procedures, and dispute resolution processes. After completing the assessment, deployers must inform the relevant market surveillance authority. When appropriate—especially in the public sector—deployers should involve affected communities, independent experts, and civil society organizations to help conduct the assessment and design risk mitigation measures.

<p>2 establishing a general framework for <br /> informing and consulting employees in the European Community (OJ L 80, 23.3.2002, p. 29).</p> <p>when the deployer considers that any of the relevant factors have changed. The impact assessment should identify <br /> the deployer’s relevant processes in which the high-risk AI system will be used in line with its intended purpose, and <br /> should include a description of the period of time and frequency in which the system is intended to be used as well <br /> as of specific categories of natural persons and groups who are likely to be affected in the specific context of use. The <br /> assessment should also include the identification of specific risks of harm likely to have an impact on the <br /> fundamental rights of those persons or groups. While performing this assessment, the deployer should take into <br /> account information relevant to a proper assessment of the impact, including but not limited to the information <br /> given by the provider of the high-risk AI system in the instructions for use. In light of the risks identified, deployers <br /> should determine measures to be taken in the case of a materialisation of those risks, including for example <br /> governance arrangements in that specific context of use, such as arrangements for human oversight according to the <br /> instructions of use or, complaint handling and redress procedures, as they could be instrumental in mitigating risks <br /> to fundamental rights in concrete use-cases. After performing that impact assessment, the deployer should notify the <br /> relevant market surveillance authority. Where appropriate, to collect relevant information necessary to perform the <br /> impact assessment, deployers of high-risk AI system, in particular when AI systems are used in the public sector, <br /> could involve relevant stakeholders, including the representatives of groups of persons likely to be affected by the AI <br /> system, independent experts, and civil society organisations in conducting such impact assessments and designing <br /> measures to be taken in the case of materialisation of the risks.</p>
Show original text

When assessing risks from AI systems, companies should involve affected groups, independent experts, and civil society organizations. The European Artificial Intelligence Office will create a template questionnaire to make compliance easier and reduce paperwork for companies deploying AI.

General-purpose AI models need a clear definition separate from regular AI systems. These models are designed to perform many different tasks well. They are usually trained on large amounts of data using methods like self-supervised or unsupervised learning. Companies can distribute these models through libraries, APIs, downloads, or physical copies. The models can be modified or adjusted to create new versions.

AI models are important parts of AI systems but are not complete systems by themselves. To become a functional AI system, models need additional components like a user interface. Typically, AI models are built into and become part of larger AI systems. This regulation sets specific rules for general-purpose AI models, especially those that pose systemic risks. These rules apply whether the models are standalone or integrated into other AI systems. Providers of general-purpose AI models must follow these obligations once they release their models to the market.

<p>relevant stakeholders, including the representatives of groups of persons likely to be affected by the AI <br /> system, independent experts, and civil society organisations in conducting such impact assessments and designing <br /> measures to be taken in the case of materialisation of the risks. The European Artificial Intelligence Office (AI Office) <br /> should develop a template for a questionnaire in order to facilitate compliance and reduce the administrative burden <br /> for deployers.<br /> (97)<br /> The notion of general-purpose AI models should be clearly defined and set apart from the notion of AI systems to <br /> enable legal certainty. The definition should be based on the key functional characteristics of a general-purpose AI <br /> model, in particular the generality and the capability to competently perform a wide range of distinct tasks. These <br /> models are typically trained on large amounts of data, through various methods, such as self-supervised, <br /> unsupervised or reinforcement learning. General-purpose AI models may be placed on the market in various ways, <br /> including through libraries, application programming interfaces (APIs), as direct download, or as physical copy. <br /> These models may be further modified or fine-tuned into new models. Although AI models are essential <br /> components of AI systems, they do not constitute AI systems on their own. AI models require the addition of further <br /> components, such as for example a user interface, to become AI systems. AI models are typically integrated into and <br /> form part of AI systems. This Regulation provides specific rules for general-purpose AI models and for <br /> general-purpose AI models that pose systemic risks, which should apply also when these models are integrated or <br /> form part of an AI system. It should be understood that the obligations for the providers of general-purpose AI <br /> models should apply once the general-purpose AI models are placed on the market.</p>
Show original text

Providers of general-purpose AI models must follow regulations once their models are released to the market. If a provider uses its own model in an AI system that is sold or deployed, that model is considered released and must follow both model and system regulations. However, regulations do not apply when a model is used only internally for processes that don't affect third parties or their rights. General-purpose AI models that pose systemic risks must always comply with regulations. Models used only for research, development, and testing before market release are exempt, but must comply once released. A model is considered general-purpose if it has at least one billion parameters, is trained on large amounts of data using self-supervision, and can perform many different tasks. Large generative AI models, which can create diverse content like text, audio, images, and video for various purposes, are typical examples of general-purpose AI models.

<p>risks, which should apply also when these models are integrated or <br /> form part of an AI system. It should be understood that the obligations for the providers of general-purpose AI <br /> models should apply once the general-purpose AI models are placed on the market. When the provider of <br /> a general-purpose AI model integrates an own model into its own AI system that is made available on the market or <br /> put into service, that model should be considered to be placed on the market and, therefore, the obligations in this <br /> Regulation for models should continue to apply in addition to those for AI systems. The obligations laid down for <br /> models should in any case not apply when an own model is used for purely internal processes that are not essential <br /> for providing a product or a service to third parties and the rights of natural persons are not affected. Considering <br /> their potential significantly negative effects, the general-purpose AI models with systemic risk should always be <br /> subject to the relevant obligations under this Regulation. The definition should not cover AI models used before their <br /> placing on the market for the sole purpose of research, development and prototyping activities. This is without <br /> prejudice to the obligation to comply with this Regulation when, following such activities, a model is placed on the <br /> market.<br /> (98)<br /> Whereas the generality of a model could, inter alia, also be determined by a number of parameters, models with at <br /> least a billion of parameters and trained with a large amount of data using self-supervision at scale should be <br /> considered to display significant generality and to competently perform a wide range of distinctive tasks.<br /> (99)<br /> Large generative AI models are a typical example for a general-purpose AI model, given that they allow for flexible <br /> generation of content, such as in the form of text, audio, images or video, that can readily accommodate a wide <br /> range of distinctive tasks.</p>
Show original text

AI models are general-purpose tools because they can flexibly create different types of content, including text, audio, images, and video, for many different tasks. When a general-purpose AI model is built into or becomes part of another AI system, that system is considered a general-purpose AI system if it can now serve multiple purposes. General-purpose AI systems can be used directly or integrated into other AI systems.

Companies that create general-purpose AI models have important responsibilities in the AI supply chain. Their models often become the foundation for many other AI systems created by other companies. These downstream companies need to understand the models and their capabilities to use them properly and follow regulations. Therefore, creators of general-purpose AI models must provide clear information about their models, including up-to-date documentation and details about how the models work and what they can do. They must prepare technical documentation and keep it current so they can share it with the AI Office and national authorities when requested. Specific rules about what information must be included in this documentation are listed in the regulation. The European Commission has the power to update these requirements as technology develops.

<p>AI models are a typical example for a general-purpose AI model, given that they allow for flexible <br /> generation of content, such as in the form of text, audio, images or video, that can readily accommodate a wide <br /> range of distinctive tasks.<br /> (100)<br /> When a general-purpose AI model is integrated into or forms part of an AI system, this system should be considered <br /> to be general-purpose AI system when, due to this integration, this system has the capability to serve a variety of <br /> purposes. A general-purpose AI system can be used directly, or it may be integrated into other AI systems.<br /> EN<br /> OJ L, 12.7.2024<br /> 26/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>(101)<br /> Providers of general-purpose AI models have a particular role and responsibility along the AI value chain, as the <br /> models they provide may form the basis for a range of downstream systems, often provided by downstream <br /> providers that necessitate a good understanding of the models and their capabilities, both to enable the integration of <br /> such models into their products, and to fulfil their obligations under this or other regulations. Therefore, <br /> proportionate transparency measures should be laid down, including the drawing up and keeping up to date of <br /> documentation, and the provision of information on the general-purpose AI model for its usage by the downstream <br /> providers. Technical documentation should be prepared and kept up to date by the general-purpose AI model <br /> provider for the purpose of making it available, upon request, to the AI Office and the national competent <br /> authorities. The minimal set of elements to be included in such documentation should be set out in specific annexes <br /> to this Regulation. The Commission should be empowered to amend those annexes by means of delegated acts in <br /> light of evolving technological developments.</p>
Show original text

The Commission should have the power to update the required documentation through delegated acts as technology evolves. Specific annexes to this Regulation should outline the minimum documentation needed.

Free and open-source software and AI models—where users can freely access, use, modify, and share them—support research, innovation, and economic growth in the EU. General-purpose AI models released under free and open-source licenses should be considered transparent and open if they publicly share their parameters (including weights), model architecture information, and usage information. A license qualifies as free and open-source when it allows users to run, copy, distribute, study, change, and improve the software and models, provided they credit the original provider and respect identical or comparable distribution terms.

Free and open-source AI components include software, data, models, tools, services, and processes within an AI system. These can be developed and shared through open repositories. However, AI components that are sold, monetized (such as through paid technical support or services), or use personal data for purposes beyond improving security, compatibility, or interoperability should not receive the same exceptions as free and open-source components. This exception does not apply to transactions between microenterprises.

<p>authorities. The minimal set of elements to be included in such documentation should be set out in specific annexes <br /> to this Regulation. The Commission should be empowered to amend those annexes by means of delegated acts in <br /> light of evolving technological developments.<br /> (102)<br /> Software and data, including models, released under a free and open-source licence that allows them to be openly <br /> shared and where users can freely access, use, modify and redistribute them or modified versions thereof, can <br /> contribute to research and innovation in the market and can provide significant growth opportunities for the Union <br /> economy. General-purpose AI models released under free and open-source licences should be considered to ensure <br /> high levels of transparency and openness if their parameters, including the weights, the information on the model <br /> architecture, and the information on model usage are made publicly available. The licence should be considered to be <br /> free and open-source also when it allows users to run, copy, distribute, study, change and improve software and data, <br /> including models under the condition that the original provider of the model is credited, the identical or comparable <br /> terms of distribution are respected.<br /> (103)<br /> Free and open-source AI components covers the software and data, including models and general-purpose AI <br /> models, tools, services or processes of an AI system. Free and open-source AI components can be provided through <br /> different channels, including their development on open repositories. For the purposes of this Regulation, AI <br /> components that are provided against a price or otherwise monetised, including through the provision of technical <br /> support or other services, including through a software platform, related to the AI component, or the use of <br /> personal data for reasons other than exclusively for improving the security, compatibility or interoperability of the <br /> software, with the exception of transactions between microenterprises, should not benefit from the exceptions <br /> provided to free and open-source AI components.</p>
Show original text

Free and open-source AI components should not be allowed to use personal data for purposes beyond improving software security, compatibility, or interoperability, except in transactions between small businesses. Simply sharing AI components through open repositories should not be considered a form of making money.

Providers of general-purpose AI models released under free and open-source licenses—where the model's parameters, weights, architecture details, and usage information are publicly available—should be exempt from transparency requirements for AI models. However, if a model poses a systemic risk, being open-source and transparent is not enough to avoid compliance with regulations.

Even though open-source AI models don't always reveal details about their training data or copyright compliance, providers must still: (1) provide a summary of the content used to train the model, and (2) establish a policy to follow European copyright law, including respecting creators' rights under EU Directive 2019/790.

General-purpose AI models, especially large generative models that create text, images, and other content, offer significant innovation opportunities. However, they also present challenges for artists, authors, and creators regarding how their creative work is made, shared, used, and consumed.

<p>use of <br /> personal data for reasons other than exclusively for improving the security, compatibility or interoperability of the <br /> software, with the exception of transactions between microenterprises, should not benefit from the exceptions <br /> provided to free and open-source AI components. The fact of making AI components available through open <br /> repositories should not, in itself, constitute a monetisation.<br /> (104)<br /> The providers of general-purpose AI models that are released under a free and open-source licence, and whose <br /> parameters, including the weights, the information on the model architecture, and the information on model usage, <br /> are made publicly available should be subject to exceptions as regards the transparency-related requirements <br /> imposed on general-purpose AI models, unless they can be considered to present a systemic risk, in which case the <br /> circumstance that the model is transparent and accompanied by an open-source license should not be considered to <br /> be a sufficient reason to exclude compliance with the obligations under this Regulation. In any case, given that the <br /> release of general-purpose AI models under free and open-source licence does not necessarily reveal substantial <br /> information on the data set used for the training or fine-tuning of the model and on how compliance of copyright <br /> law was thereby ensured, the exception provided for general-purpose AI models from compliance with the <br /> transparency-related requirements should not concern the obligation to produce a summary about the content used <br /> for model training and the obligation to put in place a policy to comply with Union copyright law, in particular to <br /> identify and comply with the reservation of rights pursuant to Article 4(3) of Directive (EU) 2019/790 of the <br /> European Parliament and of the Council (40).<br /> (105)<br /> General-purpose AI models, in particular large generative AI models, capable of generating text, images, and other <br /> content, present unique innovation opportunities but also challenges to artists, authors, and other creators and the <br /> way their creative content is created, distributed, used and consumed.</p>
Show original text

Large generative AI models that create text, images, and other content offer new opportunities for innovation but also create challenges for artists, authors, and creators. These models need to be trained on huge amounts of data including text, images, and videos. To gather and analyze this data, companies use text and data mining techniques. However, much of this content is protected by copyright law. Under EU Directive 2019/790, companies can use copyrighted material for text and data mining in certain situations. Copyright owners have the right to opt out and prevent their work from being used for this purpose, except when the mining is for scientific research. If a copyright owner formally reserves this right, AI model developers must get permission from the copyright owner before they can use text and data mining on that work.

<p>particular large generative AI models, capable of generating text, images, and other <br /> content, present unique innovation opportunities but also challenges to artists, authors, and other creators and the <br /> way their creative content is created, distributed, used and consumed. The development and training of such models <br /> require access to vast amounts of text, images, videos and other data. Text and data mining techniques may be used <br /> extensively in this context for the retrieval and analysis of such content, which may be protected by copyright and <br /> related rights. Any use of copyright protected content requires the authorisation of the rightsholder concerned <br /> unless relevant copyright exceptions and limitations apply. Directive (EU) 2019/790 introduced exceptions and <br /> limitations allowing reproductions and extractions of works or other subject matter, for the purpose of text and data <br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 27/144<br /> (40)<br /> Directive (EU) 2019/790 of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the <br /> Digital Single Market and amending Directives 96/9/EC and 2001/29/EC (OJ L 130, 17.5.2019, p. 92).</p> <p>mining, under certain conditions. Under these rules, rightsholders may choose to reserve their rights over their <br /> works or other subject matter to prevent text and data mining, unless this is done for the purposes of scientific <br /> research. Where the rights to opt out has been expressly reserved in an appropriate manner, providers of <br /> general-purpose AI models need to obtain an authorisation from rightsholders if they want to carry out text and <br /> data mining over such works.</p>
Show original text

Providers of general-purpose AI models must obtain permission from copyright holders before using text and data mining on protected works, unless the copyright holder has not reserved this right. All providers selling general-purpose AI models in the EU must follow EU copyright laws. They must create a policy to identify and respect when copyright holders opt out of having their work used for AI training, as stated in EU Directive 2019/790. This requirement applies to all providers in the EU market, regardless of where the AI training actually takes place. This ensures all providers follow the same copyright standards and no one gains an unfair advantage by using lower standards. To be transparent about their training data, providers must publish a detailed summary of the content used to train their AI models, including copyrighted material. While protecting trade secrets and confidential business information, this summary should be comprehensive enough to help copyright holders and other interested parties understand and enforce their rights. The summary should list major databases and data sources used for training, along with explanations of other data sources.

<p>research. Where the rights to opt out has been expressly reserved in an appropriate manner, providers of <br /> general-purpose AI models need to obtain an authorisation from rightsholders if they want to carry out text and <br /> data mining over such works.<br /> (106)<br /> Providers that place general-purpose AI models on the Union market should ensure compliance with the relevant <br /> obligations in this Regulation. To that end, providers of general-purpose AI models should put in place a policy to <br /> comply with Union law on copyright and related rights, in particular to identify and comply with the reservation of <br /> rights expressed by rightsholders pursuant to Article 4(3) of Directive (EU) 2019/790. Any provider placing <br /> a general-purpose AI model on the Union market should comply with this obligation, regardless of the jurisdiction <br /> in which the copyright-relevant acts underpinning the training of those general-purpose AI models take place. This <br /> is necessary to ensure a level playing field among providers of general-purpose AI models where no provider should <br /> be able to gain a competitive advantage in the Union market by applying lower copyright standards than those <br /> provided in the Union.<br /> (107)<br /> In order to increase transparency on the data that is used in the pre-training and training of general-purpose AI <br /> models, including text and data protected by copyright law, it is adequate that providers of such models draw up and <br /> make publicly available a sufficiently detailed summary of the content used for training the general-purpose AI <br /> model. While taking into due account the need to protect trade secrets and confidential business information, this <br /> summary should be generally comprehensive in its scope instead of technically detailed to facilitate parties with <br /> legitimate interests, including copyright holders, to exercise and enforce their rights under Union law, for example by <br /> listing the main data collections or sets that went into training the model, such as large private or public databases or <br /> data archives, and by providing a narrative explanation about other data sources used.</p>
Show original text

Providers of general-purpose AI models must follow Union copyright laws and publicly share a summary of the data used to train their models. This summary should list major data sources like large databases and data archives, plus explain other data sources used. The AI Office will provide a simple template to help providers create these summaries in narrative form.

The AI Office will check whether providers meet these requirements but will not examine individual pieces of training data for copyright compliance. This regulation does not change how copyright laws are enforced under Union law.

Compliance requirements should be reasonable and match the provider's type. People developing or using models for non-professional or scientific research do not have to comply, though they are encouraged to do so. Compliance should consider the provider's size and offer simplified options for small and medium-sized businesses (SMEs) and start-ups that do not create excessive costs. When a model is modified or fine-tuned, providers only need to update their documentation with information about the changes and any new training data sources.

<p>and enforce their rights under Union law, for example by <br /> listing the main data collections or sets that went into training the model, such as large private or public databases or <br /> data archives, and by providing a narrative explanation about other data sources used. It is appropriate for the AI <br /> Office to provide a template for the summary, which should be simple, effective, and allow the provider to provide <br /> the required summary in narrative form.<br /> (108)<br /> With regard to the obligations imposed on providers of general-purpose AI models to put in place a policy to <br /> comply with Union copyright law and make publicly available a summary of the content used for the training, the AI <br /> Office should monitor whether the provider has fulfilled those obligations without verifying or proceeding to <br /> a work-by-work assessment of the training data in terms of copyright compliance. This Regulation does not affect <br /> the enforcement of copyright rules as provided for under Union law.<br /> (109)<br /> Compliance with the obligations applicable to the providers of general-purpose AI models should be commensurate <br /> and proportionate to the type of model provider, excluding the need for compliance for persons who develop or use <br /> models for non-professional or scientific research purposes, who should nevertheless be encouraged to voluntarily <br /> comply with these requirements. Without prejudice to Union copyright law, compliance with those obligations <br /> should take due account of the size of the provider and allow simplified ways of compliance for SMEs, including <br /> start-ups, that should not represent an excessive cost and not discourage the use of such models. In the case of <br /> a modification or fine-tuning of a model, the obligations for providers of general-purpose AI models should be <br /> limited to that modification or fine-tuning, for example by complementing the already existing technical <br /> documentation with information on the modifications, including new training data sources, as a means to comply <br /> with the value chain obligations provided in this Regulation.</p>
Show original text

General-purpose AI models can create serious risks that affect society as a whole. These systemic risks include: accidents and disruptions to critical services that harm public health and safety; damage to democratic processes and economic security; and the spread of illegal, false, or discriminatory content. These risks grow as AI models become more powerful and widely used. They can occur at any stage of the model's development and use, and are affected by factors such as how reliable and fair the model is, how secure it is, how much it can act independently, what tools it can access, and how it is released and distributed. International experts have identified specific concerns including: intentional misuse or loss of control over the model's alignment with human values; risks related to chemical, biological, radiological, and nuclear weapons development; risks of enabling cyber attacks; risks from the model controlling physical systems or damaging critical infrastructure; risks from models copying or replicating themselves; and risks from models creating harmful bias. Companies must document any modifications or improvements they make to these models, including information about new training data sources, to comply with the requirements in this Regulation.

<p>be <br /> limited to that modification or fine-tuning, for example by complementing the already existing technical <br /> documentation with information on the modifications, including new training data sources, as a means to comply <br /> with the value chain obligations provided in this Regulation.<br /> (110)<br /> General-purpose AI models could pose systemic risks which include, but are not limited to, any actual or reasonably <br /> foreseeable negative effects in relation to major accidents, disruptions of critical sectors and serious consequences to <br /> public health and safety; any actual or reasonably foreseeable negative effects on democratic processes, public and <br /> economic security; the dissemination of illegal, false, or discriminatory content. Systemic risks should be understood <br /> to increase with model capabilities and model reach, can arise along the entire lifecycle of the model, and are <br /> influenced by conditions of misuse, model reliability, model fairness and model security, the level of autonomy of <br /> EN<br /> OJ L, 12.7.2024<br /> 28/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>the model, its access to tools, novel or combined modalities, release and distribution strategies, the potential to <br /> remove guardrails and other factors. In particular, international approaches have so far identified the need to pay <br /> attention to risks from potential intentional misuse or unintended issues of control relating to alignment with <br /> human intent; chemical, biological, radiological, and nuclear risks, such as the ways in which barriers to entry can be <br /> lowered, including for weapons development, design acquisition, or use; offensive cyber capabilities, such as the <br /> ways in vulnerability discovery, exploitation, or operational use can be enabled; the effects of interaction and tool <br /> use, including for example the capacity to control physical systems and interfere with critical infrastructure; risks <br /> from models of making copies of themselves or ‘self-replicating’ or training other models; the ways in which models <br /> can give rise to harmful bias</p>
Show original text

AI models pose several serious risks: they can control physical systems and damage critical infrastructure; they can copy themselves or create other models; they can cause harmful bias and discrimination affecting people and communities; they can spread false information and violate privacy, threatening democracy and human rights; and a single problem could trigger a chain reaction causing widespread damage to entire cities, industries, or communities. To address these risks, we need a system to identify which general-purpose AI models are dangerous. A general-purpose AI model should be considered high-risk if it has advanced capabilities that match the most powerful AI models available, or if it could significantly impact the market due to its widespread use. The true capabilities of a model may only become clear after it is released or when people use it. Currently, the amount of computing power used to train a model (measured in floating point operations) is a good way to estimate its capabilities. This includes all computing used to improve the model before release, such as initial training, creating artificial data, and fine-tuning. Therefore, a minimum computing threshold should be set. If a general-purpose AI model meets or exceeds this threshold, it should be presumed to be a high-risk model with systemic risks.

<p>, including for example the capacity to control physical systems and interfere with critical infrastructure; risks <br /> from models of making copies of themselves or ‘self-replicating’ or training other models; the ways in which models <br /> can give rise to harmful bias and discrimination with risks to individuals, communities or societies; the facilitation of <br /> disinformation or harming privacy with threats to democratic values and human rights; risk that a particular event <br /> could lead to a chain reaction with considerable negative effects that could affect up to an entire city, an entire <br /> domain activity or an entire community.<br /> (111)<br /> It is appropriate to establish a methodology for the classification of general-purpose AI models as general-purpose <br /> AI model with systemic risks. Since systemic risks result from particularly high capabilities, a general-purpose AI <br /> model should be considered to present systemic risks if it has high-impact capabilities, evaluated on the basis of <br /> appropriate technical tools and methodologies, or significant impact on the internal market due to its reach. <br /> High-impact capabilities in general-purpose AI models means capabilities that match or exceed the capabilities <br /> recorded in the most advanced general-purpose AI models. The full range of capabilities in a model could be better <br /> understood after its placing on the market or when deployers interact with the model. According to the state of the <br /> art at the time of entry into force of this Regulation, the cumulative amount of computation used for the training of <br /> the general-purpose AI model measured in floating point operations is one of the relevant approximations for model <br /> capabilities. The cumulative amount of computation used for training includes the computation used across the <br /> activities and methods that are intended to enhance the capabilities of the model prior to deployment, such as <br /> pre-training, synthetic data generation and fine-tuning. Therefore, an initial threshold of floating point operations <br /> should be set, which, if met by a general-purpose AI model, leads to a presumption that the model is <br /> a general-purpose AI model with systemic risks.</p>
Show original text

To identify general-purpose AI models that pose systemic risks, a threshold based on computing power (floating point operations) should be established. If an AI model meets this threshold, it is presumed to be a high-risk general-purpose AI model. This threshold must be updated regularly to account for technological advances like better algorithms and more efficient hardware. The AI Office should work with scientists, industry experts, and civil society to develop benchmarks and tools that accurately measure whether a model is general-purpose and what risks it poses. These tools should consider how the model will be distributed and how many users it might affect. Additionally, the Commission should have the authority to individually designate an AI model as high-risk if it demonstrates capabilities or impact equivalent to the established threshold, even if it doesn't technically meet it. This decision should be based on factors such as training data quality and size, number of users, input/output capabilities, level of autonomy, scalability, and available tools. Providers can request the Commission to reconsider a high-risk designation if circumstances have changed. A clear procedure for classifying high-risk general-purpose AI models is therefore necessary.

<p>data generation and fine-tuning. Therefore, an initial threshold of floating point operations <br /> should be set, which, if met by a general-purpose AI model, leads to a presumption that the model is <br /> a general-purpose AI model with systemic risks. This threshold should be adjusted over time to reflect technological <br /> and industrial changes, such as algorithmic improvements or increased hardware efficiency, and should be <br /> supplemented with benchmarks and indicators for model capability. To inform this, the AI Office should engage <br /> with the scientific community, industry, civil society and other experts. Thresholds, as well as tools and benchmarks <br /> for the assessment of high-impact capabilities, should be strong predictors of generality, its capabilities and <br /> associated systemic risk of general-purpose AI models, and could take into account the way the model will be placed <br /> on the market or the number of users it may affect. To complement this system, there should be a possibility for the <br /> Commission to take individual decisions designating a general-purpose AI model as a general-purpose AI model <br /> with systemic risk if it is found that such model has capabilities or an impact equivalent to those captured by the set <br /> threshold. That decision should be taken on the basis of an overall assessment of the criteria for the designation of <br /> a general-purpose AI model with systemic risk set out in an annex to this Regulation, such as quality or size of the <br /> training data set, number of business and end users, its input and output modalities, its level of autonomy and <br /> scalability, or the tools it has access to. Upon a reasoned request of a provider whose model has been designated as <br /> a general-purpose AI model with systemic risk, the Commission should take the request into account and may <br /> decide to reassess whether the general-purpose AI model can still be considered to present systemic risks.<br /> (112)<br /> It is also necessary to clarify a procedure for the classification of a general-purpose AI model with systemic risks.</p>
Show original text

A general-purpose AI model that meets the threshold for high-impact capabilities is automatically considered to have systemic risks. The provider must notify the AI Office within two weeks of meeting these requirements or learning that their model will meet them. This is particularly important for the floating point operations threshold, since providers plan AI model training in advance and know beforehand if their model will meet the threshold. Providers can demonstrate to the AI Office that their specific model does not actually present systemic risks and should not be classified as such. Early notification helps the AI Office prepare for general-purpose AI models with systemic risks entering the market. This is especially critical for open-source models, since compliance measures become harder to enforce after public release.

<p>request into account and may <br /> decide to reassess whether the general-purpose AI model can still be considered to present systemic risks.<br /> (112)<br /> It is also necessary to clarify a procedure for the classification of a general-purpose AI model with systemic risks. <br /> A general-purpose AI model that meets the applicable threshold for high-impact capabilities should be presumed to <br /> be a general-purpose AI models with systemic risk. The provider should notify the AI Office at the latest two weeks <br /> after the requirements are met or it becomes known that a general-purpose AI model will meet the requirements <br /> that lead to the presumption. This is especially relevant in relation to the threshold of floating point operations <br /> because training of general-purpose AI models takes considerable planning which includes the upfront allocation of <br /> compute resources and, therefore, providers of general-purpose AI models are able to know if their model would <br /> meet the threshold before the training is completed. In the context of that notification, the provider should be able to <br /> demonstrate that, because of its specific characteristics, a general-purpose AI model exceptionally does not present <br /> systemic risks, and that it thus should not be classified as a general-purpose AI model with systemic risks. That <br /> information is valuable for the AI Office to anticipate the placing on the market of general-purpose AI models with <br /> systemic risks and the providers can start to engage with the AI Office early on. That information is especially <br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 29/144</p> <p>important with regard to general-purpose AI models that are planned to be released as open-source, given that, after <br /> the open-source model release, necessary measures to ensure compliance with the obligations under this Regulation <br /> may be more difficult to implement.</p>
Show original text

Open-source general-purpose AI models require special attention because it becomes harder to enforce compliance rules after the model is released publicly. If the Commission discovers that a general-purpose AI model poses systemic risks but was not previously identified or reported by the provider, the Commission can officially classify it as such. A qualified alert system will help the AI Office learn from a scientific panel about models that may need to be classified as systemic risk models, in addition to the AI Office's own monitoring. Providers of general-purpose AI models with systemic risks must follow additional requirements beyond standard general-purpose AI obligations. These include identifying and reducing risks and maintaining strong cybersecurity, whether the model is standalone or embedded in another AI system or product. Providers must evaluate their models before release, including conducting and documenting adversarial testing through internal or external testing. They must also continuously assess and reduce systemic risks by implementing risk management policies, governance processes, post-market monitoring, taking appropriate measures throughout the model's lifecycle, and working with other actors in the AI industry.

<p>144</p> <p>important with regard to general-purpose AI models that are planned to be released as open-source, given that, after <br /> the open-source model release, necessary measures to ensure compliance with the obligations under this Regulation <br /> may be more difficult to implement.<br /> (113)<br /> If the Commission becomes aware of the fact that a general-purpose AI model meets the requirements to classify as <br /> a general-purpose AI model with systemic risk, which previously had either not been known or of which the <br /> relevant provider has failed to notify the Commission, the Commission should be empowered to designate it so. <br /> A system of qualified alerts should ensure that the AI Office is made aware by the scientific panel of general-purpose <br /> AI models that should possibly be classified as general-purpose AI models with systemic risk, in addition to the <br /> monitoring activities of the AI Office.<br /> (114)<br /> The providers of general-purpose AI models presenting systemic risks should be subject, in addition to the <br /> obligations provided for providers of general-purpose AI models, to obligations aimed at identifying and mitigating <br /> those risks and ensuring an adequate level of cybersecurity protection, regardless of whether it is provided as <br /> a standalone model or embedded in an AI system or a product. To achieve those objectives, this Regulation should <br /> require providers to perform the necessary model evaluations, in particular prior to its first placing on the market, <br /> including conducting and documenting adversarial testing of models, also, as appropriate, through internal or <br /> independent external testing. In addition, providers of general-purpose AI models with systemic risks should <br /> continuously assess and mitigate systemic risks, including for example by putting in place risk-management policies, <br /> such as accountability and governance processes, implementing post-market monitoring, taking appropriate <br /> measures along the entire model’s lifecycle and cooperating with relevant actors along the AI value chain.<br /> (115)<br /> Providers of general-purpose AI models with systemic risks should assess and mitigate possible systemic risks.</p>
Show original text

Providers of general-purpose AI models that pose systemic risks must identify and reduce these risks. If a serious incident occurs despite these efforts, providers must immediately document it and report relevant details and corrective actions to the Commission and national authorities. Providers must also maintain strong cybersecurity protections throughout the model's entire lifecycle, including safeguards against data leaks, unauthorized access, safety measure bypasses, and cyberattacks. This can be achieved by securing model weights, algorithms, servers, and datasets through security measures, cybersecurity policies, technical solutions, and access controls appropriate to the specific risks. The AI Office should develop and update industry codes of practice in collaboration with national authorities, civil society organizations, experts, and the Scientific Panel. These codes should reflect current best practices and diverse perspectives. All general-purpose AI model providers should be invited to participate in creating these codes, which will establish obligations for providers of general-purpose AI models, especially those presenting systemic risks.

<p>governance processes, implementing post-market monitoring, taking appropriate <br /> measures along the entire model’s lifecycle and cooperating with relevant actors along the AI value chain.<br /> (115)<br /> Providers of general-purpose AI models with systemic risks should assess and mitigate possible systemic risks. If, <br /> despite efforts to identify and prevent risks related to a general-purpose AI model that may present systemic risks, <br /> the development or use of the model causes a serious incident, the general-purpose AI model provider should <br /> without undue delay keep track of the incident and report any relevant information and possible corrective measures <br /> to the Commission and national competent authorities. Furthermore, providers should ensure an adequate level of <br /> cybersecurity protection for the model and its physical infrastructure, if appropriate, along the entire model lifecycle. <br /> Cybersecurity protection related to systemic risks associated with malicious use or attacks should duly consider <br /> accidental model leakage, unauthorised releases, circumvention of safety measures, and defence against cyberattacks, <br /> unauthorised access or model theft. That protection could be facilitated by securing model weights, algorithms, <br /> servers, and data sets, such as through operational security measures for information security, specific cybersecurity <br /> policies, adequate technical and established solutions, and cyber and physical access controls, appropriate to the <br /> relevant circumstances and the risks involved.<br /> (116)<br /> The AI Office should encourage and facilitate the drawing up, review and adaptation of codes of practice, taking into <br /> account international approaches. All providers of general-purpose AI models could be invited to participate. To <br /> ensure that the codes of practice reflect the state of the art and duly take into account a diverse set of perspectives, <br /> the AI Office should collaborate with relevant national competent authorities, and could, where appropriate, consult <br /> with civil society organisations and other relevant stakeholders and experts, including the Scientific Panel, for the <br /> drawing up of such codes. Codes of practice should cover obligations for providers of general-purpose AI models <br /> and of general-purpose AI models presenting systemic risks.</p>
Show original text

Codes of practice should be developed with input from civil society organizations, stakeholders, experts, and the Scientific Panel. These codes must set requirements for providers of general-purpose AI models, especially those with systemic risks. They should also create a framework to identify and categorize systemic risks across the EU and their causes, while focusing on how to assess and reduce these risks. Codes of practice are a key tool for ensuring providers follow the rules in this Regulation. Providers can use codes of practice to prove they comply with requirements. The Commission can approve codes of practice through implementing acts, giving them validity across the EU, or create common implementation rules if a code cannot be finished or is inadequate by the time this Regulation takes effect. Once the AI Office approves a European harmonized standard as suitable, providers following it will be presumed compliant. If codes of practice or harmonized standards are not available, or providers choose not to use them, they can demonstrate compliance through other adequate methods.

<p>with civil society organisations and other relevant stakeholders and experts, including the Scientific Panel, for the <br /> drawing up of such codes. Codes of practice should cover obligations for providers of general-purpose AI models <br /> and of general-purpose AI models presenting systemic risks. In addition, as regards systemic risks, codes of practice <br /> should help to establish a risk taxonomy of the type and nature of the systemic risks at Union level, including their <br /> sources. Codes of practice should also be focused on specific risk assessment and mitigation measures.<br /> (117)<br /> The codes of practice should represent a central tool for the proper compliance with the obligations provided for <br /> under this Regulation for providers of general-purpose AI models. Providers should be able to rely on codes of <br /> practice to demonstrate compliance with the obligations. By means of implementing acts, the Commission may <br /> decide to approve a code of practice and give it a general validity within the Union, or, alternatively, to provide <br /> common rules for the implementation of the relevant obligations, if, by the time this Regulation becomes applicable, <br /> a code of practice cannot be finalised or is not deemed adequate by the AI Office. Once a harmonised standard is <br /> EN<br /> OJ L, 12.7.2024<br /> 30/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>published and assessed as suitable to cover the relevant obligations by the AI Office, compliance with a European <br /> harmonised standard should grant providers the presumption of conformity. Providers of general-purpose AI <br /> models should furthermore be able to demonstrate compliance using alternative adequate means, if codes of practice <br /> or harmonised standards are not available, or they choose not to rely on those.</p>
Show original text

AI providers can show they follow the rules using codes of practice, harmonised standards, or other acceptable methods. This regulation controls AI systems and models sold or used in the EU. It works alongside rules for online platforms and search engines (Regulation EU 2022/2065). When AI is built into very large online platforms or search engines, those platforms must assess and manage systemic risks from how the AI works and how people might misuse it, while protecting fundamental rights. The regulation assumes these obligations are met unless new serious risks appear that the other regulation doesn't cover. Because technology changes quickly, AI systems covered by this regulation may be offered as online services or parts of services, which should be understood in a technology-neutral way.

<p>standard should grant providers the presumption of conformity. Providers of general-purpose AI <br /> models should furthermore be able to demonstrate compliance using alternative adequate means, if codes of practice <br /> or harmonised standards are not available, or they choose not to rely on those.<br /> (118)<br /> This Regulation regulates AI systems and AI models by imposing certain requirements and obligations for relevant <br /> market actors that are placing them on the market, putting into service or use in the Union, thereby complementing <br /> obligations for providers of intermediary services that embed such systems or models into their services regulated by <br /> Regulation (EU) 2022/2065. To the extent that such systems or models are embedded into designated very large <br /> online platforms or very large online search engines, they are subject to the risk-management framework provided <br /> for in Regulation (EU) 2022/2065. Consequently, the corresponding obligations of this Regulation should be <br /> presumed to be fulfilled, unless significant systemic risks not covered by Regulation (EU) 2022/2065 emerge and are <br /> identified in such models. Within this framework, providers of very large online platforms and very large online <br /> search engines are obliged to assess potential systemic risks stemming from the design, functioning and use of their <br /> services, including how the design of algorithmic systems used in the service may contribute to such risks, as well as <br /> systemic risks stemming from potential misuses. Those providers are also obliged to take appropriate mitigating <br /> measures in observance of fundamental rights.<br /> (119)<br /> Considering the quick pace of innovation and the technological evolution of digital services in scope of different <br /> instruments of Union law in particular having in mind the usage and the perception of their recipients, the AI <br /> systems subject to this Regulation may be provided as intermediary services or parts thereof within the meaning of <br /> Regulation (EU) 2022/2065, which should be interpreted in a technology-neutral manner.</p>
Show original text

AI systems covered by this Regulation can be used as intermediary services or parts of intermediary services under EU Regulation 2022/2065. For example, AI systems power online search engines. An AI chatbot might search multiple websites, combine the results with its existing knowledge, and create a single output that brings together information from different sources.

Providers and deployers of certain AI systems must be able to detect and disclose when outputs are artificially created or altered. This is important for enforcing EU Regulation 2022/2065, especially for large online platforms and search engines. These platforms must identify and reduce risks from artificially generated or manipulated content, particularly risks to democracy, public discussion, and elections, including the spread of false information.

Standardization is essential for helping providers follow this Regulation using current technology. It also encourages innovation and business growth in Europe. Providers can demonstrate they meet the Regulation's requirements by following harmonized standards defined in EU Regulation 1025/2012, which reflect the latest technology and best practices.

<p>the perception of their recipients, the AI <br /> systems subject to this Regulation may be provided as intermediary services or parts thereof within the meaning of <br /> Regulation (EU) 2022/2065, which should be interpreted in a technology-neutral manner. For example, AI systems <br /> may be used to provide online search engines, in particular, to the extent that an AI system such as an online chatbot <br /> performs searches of, in principle, all websites, then incorporates the results into its existing knowledge and uses the <br /> updated knowledge to generate a single output that combines different sources of information.<br /> (120)<br /> Furthermore, obligations placed on providers and deployers of certain AI systems in this Regulation to enable the <br /> detection and disclosure that the outputs of those systems are artificially generated or manipulated are particularly <br /> relevant to facilitate the effective implementation of Regulation (EU) 2022/2065. This applies in particular as regards <br /> the obligations of providers of very large online platforms or very large online search engines to identify and <br /> mitigate systemic risks that may arise from the dissemination of content that has been artificially generated or <br /> manipulated, in particular risk of the actual or foreseeable negative effects on democratic processes, civic discourse <br /> and electoral processes, including through disinformation.<br /> (121)<br /> Standardisation should play a key role to provide technical solutions to providers to ensure compliance with this <br /> Regulation, in line with the state of the art, to promote innovation as well as competitiveness and growth in the <br /> single market. Compliance with harmonised standards as defined in Article 2, point (1)(c), of Regulation (EU) <br /> No 1025/2012 of the European Parliament and of the Council (41), which are normally expected to reflect the state <br /> of the art, should be a means for providers to demonstrate conformity with the requirements of this Regulation.</p>
Show original text

According to EU Regulation 1025/2012, providers can use harmonised standards to show they meet this Regulation's requirements. These standards should represent the latest technology and include input from all stakeholders—including small businesses, consumer groups, and environmental and social organisations—as outlined in Articles 5 and 6 of Regulation 1025/2012. To help providers comply, the Commission should request new standards quickly without unnecessary delays. Before making these requests, the Commission should consult its advisory forum and Board for expert advice. If no suitable harmonised standards exist, the Commission can create common specifications through implementing acts after consulting the advisory forum. Common specifications are a backup option to help providers meet requirements when: European standardisation organisations reject the request, existing standards don't adequately address fundamental rights, standards don't meet the request requirements, or there are delays in adopting appropriate standards. Technical complexity can cause these delays in standard adoption.

<p>EU) <br /> No 1025/2012 of the European Parliament and of the Council (41), which are normally expected to reflect the state <br /> of the art, should be a means for providers to demonstrate conformity with the requirements of this Regulation. <br /> A balanced representation of interests involving all relevant stakeholders in the development of standards, in <br /> particular SMEs, consumer organisations and environmental and social stakeholders in accordance with Articles 5 <br /> and 6 of Regulation (EU) No 1025/2012 should therefore be encouraged. In order to facilitate compliance, the <br /> standardisation requests should be issued by the Commission without undue delay. When preparing the <br /> standardisation request, the Commission should consult the advisory forum and the Board in order to collect <br /> relevant expertise. However, in the absence of relevant references to harmonised standards, the Commission should <br /> be able to establish, via implementing acts, and after consultation of the advisory forum, common specifications for <br /> certain requirements under this Regulation. The common specification should be an exceptional fall back solution to <br /> facilitate the provider’s obligation to comply with the requirements of this Regulation, when the standardisation <br /> request has not been accepted by any of the European standardisation organisations, or when the relevant <br /> harmonised standards insufficiently address fundamental rights concerns, or when the harmonised standards do not <br /> comply with the request, or when there are delays in the adoption of an appropriate harmonised standard. Where <br /> such a delay in the adoption of a harmonised standard is due to the technical complexity of that standard, this should <br /> OJ L, 12.7.</p>
Show original text

When there are delays in adopting a harmonised standard, especially due to technical complexity, the Commission should consider this before creating common specifications. The Commission is encouraged to work with international partners and standardisation bodies when developing these specifications. Providers of high-risk AI systems that have been trained and tested using data specific to their intended geographical location, user behaviour, context, or function are considered to meet the data governance requirements of this Regulation, without affecting the use of harmonised standards and common specifications.

<p>or when there are delays in the adoption of an appropriate harmonised standard. Where <br /> such a delay in the adoption of a harmonised standard is due to the technical complexity of that standard, this should <br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 31/144<br /> (41)<br /> Regulation (EU) No 1025/2012 of the European Parliament and of the Council of 25 October 2012 on European standardisation, <br /> amending Council Directives 89/686/EEC and 93/15/EEC and Directives 94/9/EC, 94/25/EC, 95/16/EC, 97/23/EC, 98/34/EC, <br /> 2004/22/EC, 2007/23/EC, 2009/23/EC and 2009/105/EC of the European Parliament and of the Council and repealing Council <br /> Decision 87/95/EEC and Decision No 1673/2006/EC of the European Parliament and of the Council (OJ L 316, 14.11.2012, p. 12).</p> <p>be considered by the Commission before contemplating the establishment of common specifications. When <br /> developing common specifications, the Commission is encouraged to cooperate with international partners and <br /> international standardisation bodies.<br /> (122)<br /> It is appropriate that, without prejudice to the use of harmonised standards and common specifications, providers of <br /> a high-risk AI system that has been trained and tested on data reflecting the specific geographical, behavioural, <br /> contextual or functional setting within which the AI system is intended to be used, should be presumed to comply <br /> with the relevant measure provided for under the requirement on data governance set out in this Regulation.</p>
Show original text

AI systems that use data specific to their intended location, user behavior, context, or function are considered to meet data governance requirements. High-risk AI systems that have received cybersecurity certification or approval under EU Regulation 2019/881 and are listed in the Official Journal of the European Union are presumed to meet this Regulation's cybersecurity requirements, as long as their certification covers those requirements. This does not affect the voluntary nature of the cybersecurity scheme. To ensure high-risk AI systems are trustworthy, they must pass a conformity assessment before being sold or used. For high-risk AI systems in products already covered by existing EU laws, their compliance should be checked as part of the existing assessment process to reduce burden and avoid duplication. The requirements of this Regulation should not change how conformity assessment works under current EU laws. Because high-risk AI systems are complex and carry significant risks, a proper conformity assessment procedure involving independent third-party bodies is necessary.

<p>on data reflecting the specific geographical, behavioural, <br /> contextual or functional setting within which the AI system is intended to be used, should be presumed to comply <br /> with the relevant measure provided for under the requirement on data governance set out in this Regulation. <br /> Without prejudice to the requirements related to robustness and accuracy set out in this Regulation, in accordance <br /> with Article 54(3) of Regulation (EU) 2019/881, high-risk AI systems that have been certified or for which <br /> a statement of conformity has been issued under a cybersecurity scheme pursuant to that Regulation and the <br /> references of which have been published in the Official Journal of the European Union should be presumed to comply <br /> with the cybersecurity requirement of this Regulation in so far as the cybersecurity certificate or statement of <br /> conformity or parts thereof cover the cybersecurity requirement of this Regulation. This remains without prejudice <br /> to the voluntary nature of that cybersecurity scheme.<br /> (123)<br /> In order to ensure a high level of trustworthiness of high-risk AI systems, those systems should be subject to <br /> a conformity assessment prior to their placing on the market or putting into service.<br /> (124)<br /> It is appropriate that, in order to minimise the burden on operators and avoid any possible duplication, for high-risk <br /> AI systems related to products which are covered by existing Union harmonisation legislation based on the New <br /> Legislative Framework, the compliance of those AI systems with the requirements of this Regulation should be <br /> assessed as part of the conformity assessment already provided for in that law. The applicability of the requirements <br /> of this Regulation should thus not affect the specific logic, methodology or general structure of conformity <br /> assessment under the relevant Union harmonisation legislation.<br /> (125)<br /> Given the complexity of high-risk AI systems and the risks that are associated with them, it is important to develop <br /> an adequate conformity assessment procedure for high-risk AI systems involving notified bodies, so-called third <br /> party conformity assessment.</p>
Show original text

High-risk AI systems are complex and potentially dangerous, so they need proper safety checks. These checks should involve independent third-party organizations (called notified bodies) to verify that systems meet safety standards. However, for now, most high-risk AI systems should be checked by the companies that make them, not by third parties. The only exception is AI systems used for biometric identification (like facial recognition), which must be checked by third parties.

Notified bodies must be approved by national authorities and meet strict requirements, including being independent, having the right expertise, avoiding conflicts of interest, and having strong cybersecurity measures. When approved, these bodies must be registered with the European Commission and other EU member states using an electronic system.

To support international trade, the EU should recognize safety checks done by qualified organizations in other countries, as long as those organizations meet EU standards and the EU has signed an agreement with that country. The Commission should work toward creating mutual recognition agreements with other countries to make this easier.

<p>legislation.<br /> (125)<br /> Given the complexity of high-risk AI systems and the risks that are associated with them, it is important to develop <br /> an adequate conformity assessment procedure for high-risk AI systems involving notified bodies, so-called third <br /> party conformity assessment. However, given the current experience of professional pre-market certifiers in the field <br /> of product safety and the different nature of risks involved, it is appropriate to limit, at least in an initial phase of <br /> application of this Regulation, the scope of application of third-party conformity assessment for high-risk AI <br /> systems other than those related to products. Therefore, the conformity assessment of such systems should be <br /> carried out as a general rule by the provider under its own responsibility, with the only exception of AI systems <br /> intended to be used for biometrics.<br /> (126)<br /> In order to carry out third-party conformity assessments when so required, notified bodies should be notified under <br /> this Regulation by the national competent authorities, provided that they comply with a set of requirements, in <br /> particular on independence, competence, absence of conflicts of interests and suitable cybersecurity requirements. <br /> Notification of those bodies should be sent by national competent authorities to the Commission and the other <br /> Member States by means of the electronic notification tool developed and managed by the Commission pursuant to <br /> Article R23 of Annex I to Decision No 768/2008/EC.<br /> (127)<br /> In line with Union commitments under the World Trade Organization Agreement on Technical Barriers to Trade, it is <br /> adequate to facilitate the mutual recognition of conformity assessment results produced by competent conformity <br /> assessment bodies, independent of the territory in which they are established, provided that those conformity <br /> assessment bodies established under the law of a third country meet the applicable requirements of this Regulation <br /> and the Union has concluded an agreement to that extent. In this context, the Commission should actively explore <br /> possible international instruments for that purpose and in particular pursue the conclusion of mutual recognition <br /> agreements with third countries.</p>
Show original text

The Commission should actively work to create international agreements recognizing compliance with this Regulation, particularly through mutual recognition agreements with other countries.

When a high-risk AI system undergoes significant changes—such as a new operating system, software architecture, or intended purpose—it must be treated as a new system and go through a new compliance check. However, automatic updates to the algorithm and performance of AI systems that learn after being sold or deployed do not count as significant changes, as long as the provider planned and assessed these changes during the initial compliance review.

High-risk AI systems must display a CE marking to show they meet this Regulation's requirements and can be sold freely across the EU. For high-risk AI systems built into physical products, a physical CE marking must be attached and can be accompanied by a digital marking. For high-risk AI systems available only in digital form, a digital CE marking should be used. EU member states cannot unfairly block the sale or use of high-risk AI systems that comply with this Regulation and display the CE marking.

<p>the applicable requirements of this Regulation <br /> and the Union has concluded an agreement to that extent. In this context, the Commission should actively explore <br /> possible international instruments for that purpose and in particular pursue the conclusion of mutual recognition <br /> agreements with third countries.<br /> (128)<br /> In line with the commonly established notion of substantial modification for products regulated by Union <br /> harmonisation legislation, it is appropriate that whenever a change occurs which may affect the compliance of <br /> a high-risk AI system with this Regulation (e.g. change of operating system or software architecture), or when the <br /> intended purpose of the system changes, that AI system should be considered to be a new AI system which should <br /> undergo a new conformity assessment. However, changes occurring to the algorithm and the performance of AI <br /> systems which continue to ‘learn’ after being placed on the market or put into service, namely automatically <br /> adapting how functions are carried out, should not constitute a substantial modification, provided that those <br /> changes have been pre-determined by the provider and assessed at the moment of the conformity assessment.<br /> EN<br /> OJ L, 12.7.2024<br /> 32/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>(129)<br /> High-risk AI systems should bear the CE marking to indicate their conformity with this Regulation so that they can <br /> move freely within the internal market. For high-risk AI systems embedded in a product, a physical CE marking <br /> should be affixed, and may be complemented by a digital CE marking. For high-risk AI systems only provided <br /> digitally, a digital CE marking should be used. Member States should not create unjustified obstacles to the placing <br /> on the market or the putting into service of high-risk AI systems that comply with the requirements laid down in <br /> this Regulation and bear the CE marking.</p>
Show original text

High-risk AI systems that meet the requirements of this Regulation and display a CE marking should be allowed to enter the market without unnecessary obstacles from Member States. In emergency situations involving public security, health and safety, environmental protection, or protection of critical infrastructure, market surveillance authorities can permit high-risk AI systems to be used even without completing the standard conformity assessment. Law enforcement and civil protection authorities may also deploy specific high-risk AI systems in urgent situations without prior approval, though they must request authorization during or immediately after use. To improve transparency and help the Commission and Member States oversee AI development, providers of high-risk AI systems must register themselves and their systems in an EU database managed by the Commission. This applies to providers of high-risk systems not covered by existing EU product regulations, and to those claiming their system should not be classified as high-risk. Public authorities, agencies, and government bodies that plan to use high-risk AI systems must also register in this database before deployment and select the specific system they intend to use.

<p>a digital CE marking should be used. Member States should not create unjustified obstacles to the placing <br /> on the market or the putting into service of high-risk AI systems that comply with the requirements laid down in <br /> this Regulation and bear the CE marking.<br /> (130)<br /> Under certain conditions, rapid availability of innovative technologies may be crucial for health and safety of <br /> persons, the protection of the environment and climate change and for society as a whole. It is thus appropriate that <br /> under exceptional reasons of public security or protection of life and health of natural persons, environmental <br /> protection and the protection of key industrial and infrastructural assets, market surveillance authorities could <br /> authorise the placing on the market or the putting into service of AI systems which have not undergone <br /> a conformity assessment. In duly justified situations, as provided for in this Regulation, law enforcement authorities <br /> or civil protection authorities may put a specific high-risk AI system into service without the authorisation of the <br /> market surveillance authority, provided that such authorisation is requested during or after the use without undue <br /> delay.<br /> (131)<br /> In order to facilitate the work of the Commission and the Member States in the AI field as well as to increase the <br /> transparency towards the public, providers of high-risk AI systems other than those related to products falling within <br /> the scope of relevant existing Union harmonisation legislation, as well as providers who consider that an AI system <br /> listed in the high-risk use cases in an annex to this Regulation is not high-risk on the basis of a derogation, should be <br /> required to register themselves and information about their AI system in an EU database, to be established and <br /> managed by the Commission. Before using an AI system listed in the high-risk use cases in an annex to this <br /> Regulation, deployers of high-risk AI systems that are public authorities, agencies or bodies, should register <br /> themselves in such database and select the system that they envisage to use.</p>
Show original text

High-risk AI systems must be registered in an EU database. Government agencies and public bodies are required to register their high-risk AI systems, while other organizations can do so voluntarily. The database should be publicly accessible, free, easy to navigate, and searchable by keywords so the public can find information about high-risk AI systems and their uses. When high-risk AI systems are significantly changed, these updates must also be registered. However, high-risk AI systems used in law enforcement, migration, asylum, and border control are registered in a secure, non-public section of the database. Only the European Commission and national market surveillance authorities can access their respective sections. High-risk AI systems in critical infrastructure are registered only at the national level, not in the EU database. The Commission manages the EU database and must follow data protection rules. To ensure the database works properly, the Commission will develop detailed technical specifications and conduct an independent security audit. The Commission must also consider cybersecurity risks when managing the database.

<p>the high-risk use cases in an annex to this <br /> Regulation, deployers of high-risk AI systems that are public authorities, agencies or bodies, should register <br /> themselves in such database and select the system that they envisage to use. Other deployers should be entitled to do <br /> so voluntarily. This section of the EU database should be publicly accessible, free of charge, the information should <br /> be easily navigable, understandable and machine-readable. The EU database should also be user-friendly, for example <br /> by providing search functionalities, including through keywords, allowing the general public to find relevant <br /> information to be submitted upon the registration of high-risk AI systems and on the use case of high-risk AI <br /> systems, set out in an annex to this Regulation, to which the high-risk AI systems correspond. Any substantial <br /> modification of high-risk AI systems should also be registered in the EU database. For high-risk AI systems in the <br /> area of law enforcement, migration, asylum and border control management, the registration obligations should be <br /> fulfilled in a secure non-public section of the EU database. Access to the secure non-public section should be strictly <br /> limited to the Commission as well as to market surveillance authorities with regard to their national section of that <br /> database. High-risk AI systems in the area of critical infrastructure should only be registered at national level. The <br /> Commission should be the controller of the EU database, in accordance with Regulation (EU) 2018/1725. In order <br /> to ensure the full functionality of the EU database, when deployed, the procedure for setting the database should <br /> include the development of functional specifications by the Commission and an independent audit report. The <br /> Commission should take into account cybersecurity risks when carrying out its tasks as data controller on the EU <br /> database.</p>
Show original text

When the EU database is set up, the Commission must create detailed specifications and get an independent audit. The Commission needs to consider cybersecurity risks when managing the database. The database should follow accessibility rules under EU Directive 2019/882 so the public can easily use it.

Some AI systems that interact with people or create content can trick or impersonate users, even if they are not classified as high-risk. These systems must follow special transparency rules. People should be told when they are talking to an AI system, unless it is obvious from context. This is especially important for vulnerable groups like children or people with disabilities. People should also be informed when AI systems use their biometric data (like facial recognition) to identify their emotions, intentions, or assign them to categories based on characteristics like age, ethnicity, appearance, or interests. All notifications must be provided in accessible formats for people with disabilities.

<p>, when deployed, the procedure for setting the database should <br /> include the development of functional specifications by the Commission and an independent audit report. The <br /> Commission should take into account cybersecurity risks when carrying out its tasks as data controller on the EU <br /> database. In order to maximise the availability and use of the EU database by the public, the EU database, including <br /> the information made available through it, should comply with requirements under the Directive (EU) 2019/882.<br /> (132)<br /> Certain AI systems intended to interact with natural persons or to generate content may pose specific risks of <br /> impersonation or deception irrespective of whether they qualify as high-risk or not. In certain circumstances, the use <br /> of these systems should therefore be subject to specific transparency obligations without prejudice to the <br /> requirements and obligations for high-risk AI systems and subject to targeted exceptions to take into account the <br /> special need of law enforcement. In particular, natural persons should be notified that they are interacting with an AI <br /> system, unless this is obvious from the point of view of a natural person who is reasonably well-informed, observant <br /> and circumspect taking into account the circumstances and the context of use. When implementing that obligation, <br /> the characteristics of natural persons belonging to vulnerable groups due to their age or disability should be taken <br /> into account to the extent the AI system is intended to interact with those groups as well. Moreover, natural persons <br /> should be notified when they are exposed to AI systems that, by processing their biometric data, can identify or infer <br /> the emotions or intentions of those persons or assign them to specific categories. Such specific categories can relate <br /> to aspects such as sex, age, hair colour, eye colour, tattoos, personal traits, ethnic origin, personal preferences and <br /> interests. Such information and notifications should be provided in accessible formats for persons with disabilities.<br /> OJ L, 12.7.</p>
Show original text

AI systems can now create large amounts of fake content that looks real to humans. This technology is becoming more powerful and widely available, which creates serious problems. It increases the risk of spreading false information, manipulating people, committing fraud, impersonating others, and deceiving consumers. To address these risks, companies that make these AI systems must add technical tools that mark and identify AI-generated or AI-edited content in a way that computers can read. These tools should be reliable, work together well, effective, and strong whenever technically possible. Examples include watermarks, metadata tags, cryptographic methods to verify content origin, logging systems, fingerprints, or similar techniques. Companies should consider the different types of content they work with and follow current industry standards when adding these tools. These marking and detection methods can be built into the AI system itself or the AI model, including general-purpose models that create content. This helps downstream providers of AI systems meet this requirement.

<p>as sex, age, hair colour, eye colour, tattoos, personal traits, ethnic origin, personal preferences and <br /> interests. Such information and notifications should be provided in accessible formats for persons with disabilities.<br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 33/144</p> <p>(133)<br /> A variety of AI systems can generate large quantities of synthetic content that becomes increasingly hard for humans <br /> to distinguish from human-generated and authentic content. The wide availability and increasing capabilities of <br /> those systems have a significant impact on the integrity and trust in the information ecosystem, raising new risks of <br /> misinformation and manipulation at scale, fraud, impersonation and consumer deception. In light of those impacts, <br /> the fast technological pace and the need for new methods and techniques to trace origin of information, it is <br /> appropriate to require providers of those systems to embed technical solutions that enable marking in a machine <br /> readable format and detection that the output has been generated or manipulated by an AI system and not a human. <br /> Such techniques and methods should be sufficiently reliable, interoperable, effective and robust as far as this is <br /> technically feasible, taking into account available techniques or a combination of such techniques, such as <br /> watermarks, metadata identifications, cryptographic methods for proving provenance and authenticity of content, <br /> logging methods, fingerprints or other techniques, as may be appropriate. When implementing this obligation, <br /> providers should also take into account the specificities and the limitations of the different types of content and the <br /> relevant technological and market developments in the field, as reflected in the generally acknowledged state of the <br /> art. Such techniques and methods can be implemented at the level of the AI system or at the level of the AI model, <br /> including general-purpose AI models generating content, thereby facilitating fulfilment of this obligation by the <br /> downstream provider of the AI system.</p>
Show original text

AI providers and users must mark content created or changed by AI systems to show it's artificial. This requirement applies to both the AI system itself and the AI model level. However, simple editing tools and systems that don't significantly change the original data don't need this marking. When AI is used to create fake images, audio, or video that look real (deepfakes), users must clearly label the content as artificially created or manipulated. This transparency rule doesn't prevent people from using AI for creative, satirical, artistic, or fictional works. In these cases, creators only need to disclose that AI-generated or manipulated content exists, in a way that doesn't interfere with how people view or enjoy the work. This approach protects both freedom of expression and the rights of others.

<p>Such techniques and methods can be implemented at the level of the AI system or at the level of the AI model, <br /> including general-purpose AI models generating content, thereby facilitating fulfilment of this obligation by the <br /> downstream provider of the AI system. To remain proportionate, it is appropriate to envisage that this marking <br /> obligation should not cover AI systems performing primarily an assistive function for standard editing or AI systems <br /> not substantially altering the input data provided by the deployer or the semantics thereof.<br /> (134)<br /> Further to the technical solutions employed by the providers of the AI system, deployers who use an AI system to <br /> generate or manipulate image, audio or video content that appreciably resembles existing persons, objects, places, <br /> entities or events and would falsely appear to a person to be authentic or truthful (deep fakes), should also clearly <br /> and distinguishably disclose that the content has been artificially created or manipulated by labelling the AI output <br /> accordingly and disclosing its artificial origin. Compliance with this transparency obligation should not be <br /> interpreted as indicating that the use of the AI system or its output impedes the right to freedom of expression and <br /> the right to freedom of the arts and sciences guaranteed in the Charter, in particular where the content is part of an <br /> evidently creative, satirical, artistic, fictional or analogous work or programme, subject to appropriate safeguards for <br /> the rights and freedoms of third parties. In those cases, the transparency obligation for deep fakes set out in this <br /> Regulation is limited to disclosure of the existence of such generated or manipulated content in an appropriate <br /> manner that does not hamper the display or enjoyment of the work, including its normal exploitation and use, while <br /> maintaining the utility and quality of the work.</p>
Show original text

Platforms must disclose when content is AI-generated or manipulated in a way that doesn't interfere with how people view or use the work. This same disclosure requirement applies to AI-generated text published to inform the public about important issues, unless the content has been reviewed and approved by a human editor who takes responsibility for it. The European Commission can support the creation of industry guidelines to help implement these requirements effectively, including making detection tools available and encouraging cooperation between different organizations involved in sharing or verifying content. This helps the public identify AI-generated material. AI system providers and operators must enable detection and disclosure of artificially generated or manipulated outputs. This is especially important for very large online platforms and search engines, which must identify and reduce risks from AI-generated or manipulated content. These risks include potential harm to democracy, public discussion, elections, and the spread of false information.

<p>limited to disclosure of the existence of such generated or manipulated content in an appropriate <br /> manner that does not hamper the display or enjoyment of the work, including its normal exploitation and use, while <br /> maintaining the utility and quality of the work. In addition, it is also appropriate to envisage a similar disclosure <br /> obligation in relation to AI-generated or manipulated text to the extent it is published with the purpose of informing <br /> the public on matters of public interest unless the AI-generated content has undergone a process of human review or <br /> editorial control and a natural or legal person holds editorial responsibility for the publication of the content.<br /> (135)<br /> Without prejudice to the mandatory nature and full applicability of the transparency obligations, the Commission <br /> may also encourage and facilitate the drawing up of codes of practice at Union level to facilitate the effective <br /> implementation of the obligations regarding the detection and labelling of artificially generated or manipulated <br /> content, including to support practical arrangements for making, as appropriate, the detection mechanisms <br /> accessible and facilitating cooperation with other actors along the value chain, disseminating content or checking its <br /> authenticity and provenance to enable the public to effectively distinguish AI-generated content.<br /> (136)<br /> The obligations placed on providers and deployers of certain AI systems in this Regulation to enable the detection <br /> and disclosure that the outputs of those systems are artificially generated or manipulated are particularly relevant to <br /> facilitate the effective implementation of Regulation (EU) 2022/2065. This applies in particular as regards the <br /> obligations of providers of very large online platforms or very large online search engines to identify and mitigate <br /> systemic risks that may arise from the dissemination of content that has been artificially generated or manipulated, <br /> in particular the risk of the actual or foreseeable negative effects on democratic processes, civic discourse and <br /> electoral processes, including through disinformation.</p>
Show original text

The EU requires AI systems to label artificially generated or manipulated content to prevent risks to democracy, public discussion, and elections, including false information. This labeling requirement does not override existing rules about illegal content or change how content is assessed for legality. Hosting service providers must still follow their legal obligations regarding illegal content reports.

Following transparency rules for AI systems does not mean the AI or its output is legal under EU or national law. Companies using AI systems must also follow other transparency laws that apply to them.

AI technology is developing quickly and needs government oversight. To support innovation while protecting the public, EU countries must create at least one AI regulatory sandbox at the national level. These sandboxes allow companies to develop and test new AI systems under careful government supervision before releasing them to the market.

<p>mitigate <br /> systemic risks that may arise from the dissemination of content that has been artificially generated or manipulated, <br /> in particular the risk of the actual or foreseeable negative effects on democratic processes, civic discourse and <br /> electoral processes, including through disinformation. The requirement to label content generated by AI systems <br /> under this Regulation is without prejudice to the obligation in Article 16(6) of Regulation (EU) 2022/2065 for <br /> providers of hosting services to process notices on illegal content received pursuant to Article 16(1) of that <br /> Regulation and should not influence the assessment and the decision on the illegality of the specific content. That <br /> assessment should be performed solely with reference to the rules governing the legality of the content.<br /> EN<br /> OJ L, 12.7.2024<br /> 34/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>(137)<br /> Compliance with the transparency obligations for the AI systems covered by this Regulation should not be <br /> interpreted as indicating that the use of the AI system or its output is lawful under this Regulation or other Union <br /> and Member State law and should be without prejudice to other transparency obligations for deployers of AI systems <br /> laid down in Union or national law.<br /> (138)<br /> AI is a rapidly developing family of technologies that requires regulatory oversight and a safe and controlled space <br /> for experimentation, while ensuring responsible innovation and integration of appropriate safeguards and risk <br /> mitigation measures. To ensure a legal framework that promotes innovation, is future-proof and resilient to <br /> disruption, Member States should ensure that their national competent authorities establish at least one AI <br /> regulatory sandbox at national level to facilitate the development and testing of innovative AI systems under strict <br /> regulatory oversight before these systems are placed on the market or otherwise put into service.</p>
Show original text

Each country must set up at least one AI regulatory sandbox—a controlled testing environment where companies can develop and test new AI systems under government supervision before releasing them to the public. Countries can meet this requirement by joining an existing sandbox or creating one together with other countries, as long as it provides adequate coverage. These sandboxes can be physical, digital, or hybrid spaces that test both physical and digital products. The organizations running these sandboxes must provide sufficient funding and staff.

AI regulatory sandboxes serve several important purposes: they encourage innovation by providing a safe space to test new AI systems before market launch while ensuring compliance with regulations; they give companies and regulators more confidence and clarity about AI development; they help authorities understand AI's benefits, risks, and impacts; they allow regulators and businesses to learn from real-world testing to improve future laws; they promote cooperation and knowledge-sharing among involved authorities; and they help new companies and small businesses enter the market faster. These sandboxes should be easily accessible across all EU countries, with special attention to making them available to small and medium-sized businesses and startups. Companies should be able to participate when they face legal uncertainty about developing and testing AI innovations in the EU.

<p>States should ensure that their national competent authorities establish at least one AI <br /> regulatory sandbox at national level to facilitate the development and testing of innovative AI systems under strict <br /> regulatory oversight before these systems are placed on the market or otherwise put into service. Member States <br /> could also fulfil this obligation through participating in already existing regulatory sandboxes or establishing jointly <br /> a sandbox with one or more Member States’ competent authorities, insofar as this participation provides equivalent <br /> level of national coverage for the participating Member States. AI regulatory sandboxes could be established in <br /> physical, digital or hybrid form and may accommodate physical as well as digital products. Establishing authorities <br /> should also ensure that the AI regulatory sandboxes have the adequate resources for their functioning, including <br /> financial and human resources.<br /> (139)<br /> The objectives of the AI regulatory sandboxes should be to foster AI innovation by establishing a controlled <br /> experimentation and testing environment in the development and pre-marketing phase with a view to ensuring <br /> compliance of the innovative AI systems with this Regulation and other relevant Union and national law. Moreover, <br /> the AI regulatory sandboxes should aim to enhance legal certainty for innovators and the competent authorities’ <br /> oversight and understanding of the opportunities, emerging risks and the impacts of AI use, to facilitate regulatory <br /> learning for authorities and undertakings, including with a view to future adaptions of the legal framework, to <br /> support cooperation and the sharing of best practices with the authorities involved in the AI regulatory sandbox, <br /> and to accelerate access to markets, including by removing barriers for SMEs, including start-ups. AI regulatory <br /> sandboxes should be widely available throughout the Union, and particular attention should be given to their <br /> accessibility for SMEs, including start-ups. The participation in the AI regulatory sandbox should focus on issues that <br /> raise legal uncertainty for providers and prospective providers to innovate, experiment with AI in the Union and <br /> contribute to evidence-based regulatory learning.</p>
Show original text

AI regulatory sandboxes should be accessible to small and medium-sized businesses and startups. These sandboxes allow companies to develop, test, and validate AI systems in a controlled environment before releasing them to the market. This helps address legal uncertainties and supports evidence-based regulation. Supervisors must monitor the entire development process and identify any significant risks. If serious problems are found, development must stop until they are fixed. National authorities can partner with other organizations like standards bodies, research labs, and civil society groups to oversee these sandboxes. To ensure consistency across the EU and reduce costs, common implementation rules and cooperation frameworks should be established. These AI sandboxes work alongside other regulatory sandboxes for different laws. When appropriate, authorities managing other sandboxes should also use them to ensure AI systems comply with AI regulations. With agreement from authorities and participants, real-world testing can be conducted within the sandbox framework.

<p>accessibility for SMEs, including start-ups. The participation in the AI regulatory sandbox should focus on issues that <br /> raise legal uncertainty for providers and prospective providers to innovate, experiment with AI in the Union and <br /> contribute to evidence-based regulatory learning. The supervision of the AI systems in the AI regulatory sandbox <br /> should therefore cover their development, training, testing and validation before the systems are placed on the <br /> market or put into service, as well as the notion and occurrence of substantial modification that may require a new <br /> conformity assessment procedure. Any significant risks identified during the development and testing of such AI <br /> systems should result in adequate mitigation and, failing that, in the suspension of the development and testing <br /> process. Where appropriate, national competent authorities establishing AI regulatory sandboxes should cooperate <br /> with other relevant authorities, including those supervising the protection of fundamental rights, and could allow for <br /> the involvement of other actors within the AI ecosystem such as national or European standardisation organisations, <br /> notified bodies, testing and experimentation facilities, research and experimentation labs, European Digital <br /> Innovation Hubs and relevant stakeholder and civil society organisations. To ensure uniform implementation across <br /> the Union and economies of scale, it is appropriate to establish common rules for the AI regulatory sandboxes’ <br /> implementation and a framework for cooperation between the relevant authorities involved in the supervision of the <br /> sandboxes. AI regulatory sandboxes established under this Regulation should be without prejudice to other law <br /> allowing for the establishment of other sandboxes aiming to ensure compliance with law other than this Regulation. <br /> Where appropriate, relevant competent authorities in charge of those other regulatory sandboxes should consider <br /> the benefits of using those sandboxes also for the purpose of ensuring compliance of AI systems with this <br /> Regulation. Upon agreement between the national competent authorities and the participants in the AI regulatory <br /> sandbox, testing in real world conditions may also be operated and supervised in the framework of the AI regulatory <br /> sandbox.</p>
Show original text

AI companies and potential providers in the regulatory sandbox can test their systems in real-world conditions with approval from national authorities. They are allowed to use personal data that was collected for other purposes to develop AI systems that serve the public interest, but only under specific conditions set by EU data protection laws (Regulations 2016/679 and 2018/1725, and Directive 2016/680). All other data protection rules still apply. Companies in the sandbox must put safeguards in place and work closely with authorities to identify and reduce any serious risks to safety, health, or people's rights during development and testing.

<p>of ensuring compliance of AI systems with this <br /> Regulation. Upon agreement between the national competent authorities and the participants in the AI regulatory <br /> sandbox, testing in real world conditions may also be operated and supervised in the framework of the AI regulatory <br /> sandbox.<br /> (140)<br /> This Regulation should provide the legal basis for the providers and prospective providers in the AI regulatory <br /> sandbox to use personal data collected for other purposes for developing certain AI systems in the public interest <br /> within the AI regulatory sandbox, only under specified conditions, in accordance with Article 6(4) and Article 9(2), <br /> point (g), of Regulation (EU) 2016/679, and Articles 5, 6 and 10 of Regulation (EU) 2018/1725, and without <br /> prejudice to Article 4(2) and Article 10 of Directive (EU) 2016/680. All other obligations of data controllers and <br /> rights of data subjects under Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680 remain <br /> applicable. In particular, this Regulation should not provide a legal basis in the meaning of Article 22(2), point (b) of <br /> Regulation (EU) 2016/679 and Article 24(2), point (b) of Regulation (EU) 2018/1725. Providers and prospective <br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 35/144</p> <p>providers in the AI regulatory sandbox should ensure appropriate safeguards and cooperate with the competent <br /> authorities, including by following their guidance and acting expeditiously and in good faith to adequately mitigate <br /> any identified significant risks to safety, health, and fundamental rights that may arise during the development, <br /> testing and experimentation in that sandbox.</p>
Show original text

To help high-risk AI systems reach the market faster, companies can test them in real-world conditions without joining an AI regulatory sandbox. However, safety protections must be in place. Companies must get informed consent from people participating in tests, except in law enforcement cases where asking for consent would interfere with testing. This consent is separate from data protection consent. To reduce risks and allow oversight, companies must: submit a testing plan to authorities, register the testing in an EU database (with some exceptions), set time limits on testing, add extra protections for vulnerable groups, and have a written agreement between the company and those deploying the system. Authorities must be involved throughout to ensure safety, health, and fundamental rights are protected.

<p>competent <br /> authorities, including by following their guidance and acting expeditiously and in good faith to adequately mitigate <br /> any identified significant risks to safety, health, and fundamental rights that may arise during the development, <br /> testing and experimentation in that sandbox.<br /> (141)<br /> In order to accelerate the process of development and the placing on the market of the high-risk AI systems listed in <br /> an annex to this Regulation, it is important that providers or prospective providers of such systems may also benefit <br /> from a specific regime for testing those systems in real world conditions, without participating in an AI regulatory <br /> sandbox. However, in such cases, taking into account the possible consequences of such testing on individuals, it <br /> should be ensured that appropriate and sufficient guarantees and conditions are introduced by this Regulation for <br /> providers or prospective providers. Such guarantees should include, inter alia, requesting informed consent of <br /> natural persons to participate in testing in real world conditions, with the exception of law enforcement where the <br /> seeking of informed consent would prevent the AI system from being tested. Consent of subjects to participate in <br /> such testing under this Regulation is distinct from, and without prejudice to, consent of data subjects for the <br /> processing of their personal data under the relevant data protection law. It is also important to minimise the risks <br /> and enable oversight by competent authorities and therefore require prospective providers to have a real-world <br /> testing plan submitted to competent market surveillance authority, register the testing in dedicated sections in the EU <br /> database subject to some limited exceptions, set limitations on the period for which the testing can be done and <br /> require additional safeguards for persons belonging to certain vulnerable groups, as well as a written agreement <br /> defining the roles and responsibilities of prospective providers and deployers and effective oversight by competent <br /> personnel involved in the real world testing.</p>
Show original text

Real-world AI testing requires additional protections for vulnerable groups, a written agreement outlining the roles and responsibilities of providers and deployers, and proper oversight by qualified personnel. AI systems must allow their predictions, recommendations, and decisions to be reversed or ignored. Personal data must be protected and deleted when participants withdraw consent, in accordance with EU data protection laws. When transferring testing data to other countries, appropriate safeguards under EU law must be applied. For personal data, this includes following EU data protection transfer rules. For non-personal data, safeguards must comply with EU Regulations 2022/868 and 2023/2854. To promote positive social and environmental outcomes, EU Member States should fund and support AI research and development projects that benefit society and the environment. Examples include AI solutions that improve accessibility for people with disabilities, reduce inequality, or help meet environmental goals. These projects should involve collaboration between AI developers, experts in inequality and non-discrimination, accessibility specialists, consumer advocates, environmental experts, digital rights specialists, and academics.

<p>can be done and <br /> require additional safeguards for persons belonging to certain vulnerable groups, as well as a written agreement <br /> defining the roles and responsibilities of prospective providers and deployers and effective oversight by competent <br /> personnel involved in the real world testing. Furthermore, it is appropriate to envisage additional safeguards to <br /> ensure that the predictions, recommendations or decisions of the AI system can be effectively reversed and <br /> disregarded and that personal data is protected and is deleted when the subjects have withdrawn their consent to <br /> participate in the testing without prejudice to their rights as data subjects under the Union data protection law. As <br /> regards transfer of data, it is also appropriate to envisage that data collected and processed for the purpose of testing <br /> in real-world conditions should be transferred to third countries only where appropriate and applicable safeguards <br /> under Union law are implemented, in particular in accordance with bases for transfer of personal data under Union <br /> law on data protection, while for non-personal data appropriate safeguards are put in place in accordance with <br /> Union law, such as Regulations (EU) 2022/868 (42) and (EU) 2023/2854 (43) of the European Parliament and of the <br /> Council.<br /> (142)<br /> To ensure that AI leads to socially and environmentally beneficial outcomes, Member States are encouraged to <br /> support and promote research and development of AI solutions in support of socially and environmentally <br /> beneficial outcomes, such as AI-based solutions to increase accessibility for persons with disabilities, tackle <br /> socio-economic inequalities, or meet environmental targets, by allocating sufficient resources, including public and <br /> Union funding, and, where appropriate and provided that the eligibility and selection criteria are fulfilled, <br /> considering in particular projects which pursue such objectives. Such projects should be based on the principle of <br /> interdisciplinary cooperation between AI developers, experts on inequality and non-discrimination, accessibility, <br /> consumer, environmental, and digital rights, as well as academics.</p>
Show original text

AI development projects should bring together different experts including AI developers, inequality specialists, accessibility experts, consumer advocates, environmental specialists, digital rights experts, and academics working as a team.

To support innovation, Member States should focus on helping small and medium-sized businesses (SMEs) and startups that create or use AI systems. They should do this by raising awareness and sharing information about AI regulations.

Member States should give SMEs and startups located in the EU priority access to AI regulatory sandboxes, as long as they meet the requirements. Other companies can also access these sandboxes if they meet the same conditions.

Member States should create communication channels to help SMEs, startups, and other innovators understand and follow AI regulations. These channels should work together to give consistent guidance. Member States should also help SMEs participate in developing AI standards and should pay special attention to the unique needs of SME providers.

<p>ing in particular projects which pursue such objectives. Such projects should be based on the principle of <br /> interdisciplinary cooperation between AI developers, experts on inequality and non-discrimination, accessibility, <br /> consumer, environmental, and digital rights, as well as academics.<br /> (143)<br /> In order to promote and protect innovation, it is important that the interests of SMEs, including start-ups, that are <br /> providers or deployers of AI systems are taken into particular account. To that end, Member States should develop <br /> initiatives, which are targeted at those operators, including on awareness raising and information communication. <br /> Member States should provide SMEs, including start-ups, that have a registered office or a branch in the Union, with <br /> priority access to the AI regulatory sandboxes provided that they fulfil the eligibility conditions and selection criteria <br /> and without precluding other providers and prospective providers to access the sandboxes provided the same <br /> conditions and criteria are fulfilled. Member States should utilise existing channels and where appropriate, establish <br /> new dedicated channels for communication with SMEs, including start-ups, deployers, other innovators and, as <br /> appropriate, local public authorities, to support SMEs throughout their development path by providing guidance <br /> and responding to queries about the implementation of this Regulation. Where appropriate, these channels should <br /> work together to create synergies and ensure homogeneity in their guidance to SMEs, including start-ups, and <br /> deployers. Additionally, Member States should facilitate the participation of SMEs and other relevant stakeholders in <br /> the standardisation development processes. Moreover, the specific interests and needs of providers that are SMEs, <br /> EN<br /> OJ L, 12.7.</p>
Show original text

Member States should help small and medium-sized enterprises (SMEs) and other stakeholders participate in developing standards. When notified bodies set fees for conformity assessments, they should consider the specific needs of SMEs and start-ups. The Commission should regularly review certification and compliance costs for SMEs and start-ups through open consultations and work with Member States to reduce these costs. Translation costs for required documents and communication with authorities can be a major expense, especially for smaller companies. Member States should consider allowing one of their accepted languages for provider documentation and communication to be a language widely understood by companies operating across borders. These requirements are based on the Data Governance Act (Regulation EU 2022/868) and the Data Act (Regulation EU 2023/2854).

<p>. Additionally, Member States should facilitate the participation of SMEs and other relevant stakeholders in <br /> the standardisation development processes. Moreover, the specific interests and needs of providers that are SMEs, <br /> EN<br /> OJ L, 12.7.2024<br /> 36/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> (42)<br /> Regulation (EU) 2022/868 of the European Parliament and of the Council of 30 May 2022 on European data governance and <br /> amending Regulation (EU) 2018/1724 (Data Governance Act) (OJ L 152, 3.6.2022, p. 1).<br /> (43)<br /> Regulation (EU) 2023/2854 of the European Parliament and of the Council of 13 December 2023 on harmonised rules on fair <br /> access to and use of data and amending Regulation (EU) 2017/2394 and Directive (EU) 2020/1828 (Data Act) (OJ L, 2023/2854, <br /> 22.12.2023, ELI: http://data.europa.eu/eli/reg/2023/2854/oj).</p> <p>including start-ups, should be taken into account when notified bodies set conformity assessment fees. The <br /> Commission should regularly assess the certification and compliance costs for SMEs, including start-ups, through <br /> transparent consultations and should work with Member States to lower such costs. For example, translation costs <br /> related to mandatory documentation and communication with authorities may constitute a significant cost for <br /> providers and other operators, in particular those of a smaller scale. Member States should possibly ensure that one <br /> of the languages determined and accepted by them for relevant providers’ documentation and for communication <br /> with operators is one which is broadly understood by the largest possible number of cross-border deployers.</p>
Show original text

Member States should choose at least one widely understood language for AI provider documentation and operator communication to help cross-border businesses. To support small and medium-sized enterprises (SMEs) and startups, the Commission will provide standardized templates and create a single information platform with clear guidance on this Regulation. The Commission will also run awareness campaigns and help improve AI procurement practices across the EU. Medium-sized companies that recently became too large to qualify as small enterprises should still receive these support measures, as they may lack the legal resources and training needed to comply with this Regulation. To encourage innovation, EU funding programs like Digital Europe and Horizon Europe should help achieve this Regulation's goals. To reduce implementation risks and help providers, SMEs, startups, and testing organizations comply, the AI-on-demand platform, European Digital Innovation Hubs, and testing facilities established by the Commission and Member States should provide support and guidance.

<p>those of a smaller scale. Member States should possibly ensure that one <br /> of the languages determined and accepted by them for relevant providers’ documentation and for communication <br /> with operators is one which is broadly understood by the largest possible number of cross-border deployers. In order <br /> to address the specific needs of SMEs, including start-ups, the Commission should provide standardised templates for <br /> the areas covered by this Regulation, upon request of the Board. Additionally, the Commission should complement <br /> Member States’ efforts by providing a single information platform with easy-to-use information with regards to this <br /> Regulation for all providers and deployers, by organising appropriate communication campaigns to raise awareness <br /> about the obligations arising from this Regulation, and by evaluating and promoting the convergence of best <br /> practices in public procurement procedures in relation to AI systems. Medium-sized enterprises which until recently <br /> qualified as small enterprises within the meaning of the Annex to Commission Recommendation 2003/361/EC (44) <br /> should have access to those support measures, as those new medium-sized enterprises may sometimes lack the legal <br /> resources and training necessary to ensure proper understanding of, and compliance with, this Regulation.<br /> (144)<br /> In order to promote and protect innovation, the AI-on-demand platform, all relevant Union funding programmes <br /> and projects, such as Digital Europe Programme, Horizon Europe, implemented by the Commission and the Member <br /> States at Union or national level should, as appropriate, contribute to the achievement of the objectives of this <br /> Regulation.<br /> (145)<br /> In order to minimise the risks to implementation resulting from lack of knowledge and expertise in the market as <br /> well as to facilitate compliance of providers, in particular SMEs, including start-ups, and notified bodies with their <br /> obligations under this Regulation, the AI-on-demand platform, the European Digital Innovation Hubs and the <br /> testing and experimentation facilities established by the Commission and the Member States at Union or national <br /> level should contribute to the implementation of this Regulation.</p>
Show original text

To help implement this AI regulation, several organizations should provide support: the AI-on-demand platform, the European Digital Innovation Hubs, and testing facilities created by the Commission and Member States. These organizations can offer technical and scientific assistance to AI providers and authorized inspection bodies. To make compliance easier and less expensive for small companies, microenterprises should be allowed to set up quality management systems in a simplified way. This reduces their costs and paperwork while still maintaining safety standards for high-risk AI systems. The Commission will create guidelines explaining which parts microenterprises can simplify. The Commission should also help testing facilities work with authorized laboratories and expert groups that already assess medical devices and other products under EU law. This includes expert panels and reference laboratories for medical devices. Finally, this regulation needs a governance structure that coordinates implementation across EU countries, builds AI capabilities at the EU level, and involves all relevant AI stakeholders.

<p>obligations under this Regulation, the AI-on-demand platform, the European Digital Innovation Hubs and the <br /> testing and experimentation facilities established by the Commission and the Member States at Union or national <br /> level should contribute to the implementation of this Regulation. Within their respective mission and fields of <br /> competence, the AI-on-demand platform, the European Digital Innovation Hubs and the testing and <br /> experimentation Facilities are able to provide in particular technical and scientific support to providers and <br /> notified bodies.<br /> (146)<br /> Moreover, in light of the very small size of some operators and in order to ensure proportionality regarding costs of <br /> innovation, it is appropriate to allow microenterprises to fulfil one of the most costly obligations, namely to <br /> establish a quality management system, in a simplified manner which would reduce the administrative burden and <br /> the costs for those enterprises without affecting the level of protection and the need for compliance with the <br /> requirements for high-risk AI systems. The Commission should develop guidelines to specify the elements of the <br /> quality management system to be fulfilled in this simplified manner by microenterprises.<br /> (147)<br /> It is appropriate that the Commission facilitates, to the extent possible, access to testing and experimentation <br /> facilities to bodies, groups or laboratories established or accredited pursuant to any relevant Union harmonisation <br /> legislation and which fulfil tasks in the context of conformity assessment of products or devices covered by that <br /> Union harmonisation legislation. This is, in particular, the case as regards expert panels, expert laboratories and <br /> reference laboratories in the field of medical devices pursuant to Regulations (EU) 2017/745 and (EU) 2017/746.<br /> (148)<br /> This Regulation should establish a governance framework that both allows to coordinate and support the <br /> application of this Regulation at national level, as well as build capabilities at Union level and integrate stakeholders <br /> in the field of AI.</p>
Show original text

This regulation creates a governance system to coordinate and support AI implementation across EU member states while building expertise at the EU level and involving stakeholders. The AI Office, established by Commission Decision, leads this effort by developing EU expertise in AI and implementing EU AI laws. Member States must support the AI Office to strengthen EU capabilities and the digital single market. The regulation also establishes three bodies: a Board with Member State representatives, a scientific panel to include scientists, and an advisory forum for stakeholder input at both EU and national levels. The EU will build expertise by using existing resources and partnering with related initiatives, including the EuroHPC Joint Undertaking and AI testing facilities under the Digital Europe Programme.

<p>2017/746.<br /> (148)<br /> This Regulation should establish a governance framework that both allows to coordinate and support the <br /> application of this Regulation at national level, as well as build capabilities at Union level and integrate stakeholders <br /> in the field of AI. The effective implementation and enforcement of this Regulation require a governance framework <br /> that allows to coordinate and build up central expertise at Union level. The AI Office was established by Commission <br /> Decision (45) and has as its mission to develop Union expertise and capabilities in the field of AI and to contribute to <br /> the implementation of Union law on AI. Member States should facilitate the tasks of the AI Office with a view to <br /> support the development of Union expertise and capabilities at Union level and to strengthen the functioning of the <br /> digital single market. Furthermore, a Board composed of representatives of the Member States, a scientific panel to <br /> integrate the scientific community and an advisory forum to contribute stakeholder input to the implementation of <br /> this Regulation, at Union and national level, should be established. The development of Union expertise and <br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 37/144<br /> (44)<br /> Commission Recommendation of 6 May 2003 concerning the definition of micro, small and medium-sized enterprises (OJ L 124, <br /> 20.5.2003, p. 36).<br /> (45)<br /> Commission Decision of 24.1.2024 establishing the European Artificial Intelligence Office C(2024) 390.</p> <p>capabilities should also include making use of existing resources and expertise, in particular through synergies with <br /> structures built up in the context of the Union level enforcement of other law and synergies with related initiatives at <br /> Union level, such as the EuroHPC Joint Undertaking and the AI testing and experimentation facilities under the <br /> Digital Europe Programme.</p>
Show original text

To ensure this Regulation is implemented smoothly and consistently across the EU, a Board should be created. The Board will include representatives from each Member State and represent different parts of the AI industry. Its main responsibilities are to provide advice and guidance on implementing this Regulation, including enforcement, technical standards, and AI-related questions. Member States can choose any qualified public official to represent them on the Board. The Board will create two permanent sub-groups: one for market surveillance authorities and one for notified bodies. These sub-groups will share information and coordinate their work. The market surveillance sub-group will also serve as the administrative cooperation group under EU Regulation 2019/1020. The Commission will support the market surveillance sub-group by conducting market studies to identify areas needing urgent coordination. The Board can also create additional temporary sub-groups as needed to address specific issues. These structures will work together with other EU initiatives like the EuroHPC Joint Undertaking and AI testing facilities under the Digital Europe Programme.

<p>with <br /> structures built up in the context of the Union level enforcement of other law and synergies with related initiatives at <br /> Union level, such as the EuroHPC Joint Undertaking and the AI testing and experimentation facilities under the <br /> Digital Europe Programme.<br /> (149)<br /> In order to facilitate a smooth, effective and harmonised implementation of this Regulation a Board should be <br /> established. The Board should reflect the various interests of the AI eco-system and be composed of representatives <br /> of the Member States. The Board should be responsible for a number of advisory tasks, including issuing opinions, <br /> recommendations, advice or contributing to guidance on matters related to the implementation of this Regulation, <br /> including on enforcement matters, technical specifications or existing standards regarding the requirements <br /> established in this Regulation and providing advice to the Commission and the Member States and their national <br /> competent authorities on specific questions related to AI. In order to give some flexibility to Member States in the <br /> designation of their representatives in the Board, such representatives may be any persons belonging to public <br /> entities who should have the relevant competences and powers to facilitate coordination at national level and <br /> contribute to the achievement of the Board’s tasks. The Board should establish two standing sub-groups to provide <br /> a platform for cooperation and exchange among market surveillance authorities and notifying authorities on issues <br /> related, respectively, to market surveillance and notified bodies. The standing subgroup for market surveillance <br /> should act as the administrative cooperation group (ADCO) for this Regulation within the meaning of Article 30 of <br /> Regulation (EU) 2019/1020. In accordance with Article 33 of that Regulation, the Commission should support the <br /> activities of the standing subgroup for market surveillance by undertaking market evaluations or studies, in <br /> particular with a view to identifying aspects of this Regulation requiring specific and urgent coordination among <br /> market surveillance authorities. The Board may establish other standing or temporary sub-groups as appropriate for <br /> the purpose of examining specific issues.</p>
Show original text

The Board will conduct market studies to identify areas of this Regulation that need urgent coordination among market surveillance authorities. It can create additional working groups as needed to examine specific issues. The Board will also work with relevant EU organizations, expert groups, and networks involved in EU laws on data, digital products, and services.

To involve stakeholders in implementing this Regulation, an advisory forum will be established to provide advice and technical expertise to the Board and the Commission. This forum will include a balanced mix of commercial and non-commercial representatives, including industry, start-ups, small and medium-sized enterprises (SMEs), universities, civil society organizations, labor representatives, the Fundamental Rights Agency, ENISA, and European standardization bodies (CEN, CENELEC, and ETSI).

To help enforce this Regulation and monitor general-purpose AI models, an independent scientific panel of AI experts will be created. These experts will be selected based on their current scientific and technical knowledge of AI and must work impartially and objectively while keeping information confidential. Member States can request support from these experts to strengthen their own enforcement capabilities.

<p>market evaluations or studies, in <br /> particular with a view to identifying aspects of this Regulation requiring specific and urgent coordination among <br /> market surveillance authorities. The Board may establish other standing or temporary sub-groups as appropriate for <br /> the purpose of examining specific issues. The Board should also cooperate, as appropriate, with relevant Union <br /> bodies, experts groups and networks active in the context of relevant Union law, including in particular those active <br /> under relevant Union law on data, digital products and services.<br /> (150)<br /> With a view to ensuring the involvement of stakeholders in the implementation and application of this Regulation, <br /> an advisory forum should be established to advise and provide technical expertise to the Board and the Commission. <br /> To ensure a varied and balanced stakeholder representation between commercial and non-commercial interest and, <br /> within the category of commercial interests, with regards to SMEs and other undertakings, the advisory forum <br /> should comprise inter alia industry, start-ups, SMEs, academia, civil society, including the social partners, as well as <br /> the Fundamental Rights Agency, ENISA, the European Committee for Standardization (CEN), the European <br /> Committee for Electrotechnical Standardization (CENELEC) and the European Telecommunications Standards <br /> Institute (ETSI).<br /> (151)<br /> To support the implementation and enforcement of this Regulation, in particular the monitoring activities of the AI <br /> Office as regards general-purpose AI models, a scientific panel of independent experts should be established. The <br /> independent experts constituting the scientific panel should be selected on the basis of up-to-date scientific or <br /> technical expertise in the field of AI and should perform their tasks with impartiality, objectivity and ensure the <br /> confidentiality of information and data obtained in carrying out their tasks and activities. To allow the reinforcement <br /> of national capacities necessary for the effective enforcement of this Regulation, Member States should be able to <br /> request support from the pool of experts constituting the scientific panel for their enforcement activities.</p>
Show original text

Member States play a crucial role in enforcing this AI Regulation. Each Member State must appoint at least one notifying authority and one market surveillance authority to oversee compliance. Member States can choose any public entity for these roles based on their national needs. To improve efficiency and create a single point of contact for the public and other organizations, each Member State should designate one market surveillance authority as the main contact point. These national authorities must act independently, fairly, and without bias to ensure the Regulation is properly applied. Their staff must avoid conflicts of interest and follow confidentiality rules. To strengthen enforcement capabilities, Member States can request help from a panel of scientific experts. The EU should also establish AI testing support structures to assist Member States in enforcing this Regulation effectively.

<p>obtained in carrying out their tasks and activities. To allow the reinforcement <br /> of national capacities necessary for the effective enforcement of this Regulation, Member States should be able to <br /> request support from the pool of experts constituting the scientific panel for their enforcement activities.<br /> (152)<br /> In order to support adequate enforcement as regards AI systems and reinforce the capacities of the Member States, <br /> Union AI testing support structures should be established and made available to the Member States.<br /> (153)<br /> Member States hold a key role in the application and enforcement of this Regulation. In that respect, each Member <br /> State should designate at least one notifying authority and at least one market surveillance authority as national <br /> competent authorities for the purpose of supervising the application and implementation of this Regulation. <br /> Member States may decide to appoint any kind of public entity to perform the tasks of the national competent <br /> authorities within the meaning of this Regulation, in accordance with their specific national organisational <br /> characteristics and needs. In order to increase organisation efficiency on the side of Member States and to set a single <br /> point of contact vis-à-vis the public and other counterparts at Member State and Union levels, each Member State <br /> should designate a market surveillance authority to act as a single point of contact.<br /> EN<br /> OJ L, 12.7.2024<br /> 38/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>(154)<br /> The national competent authorities should exercise their powers independently, impartially and without bias, so as <br /> to safeguard the principles of objectivity of their activities and tasks and to ensure the application and <br /> implementation of this Regulation. The members of these authorities should refrain from any action incompatible <br /> with their duties and should be subject to confidentiality rules under this Regulation.</p>
Show original text

Authorities must maintain objectivity and follow confidentiality rules when implementing this Regulation. Their members must avoid actions that conflict with their duties.

Providers of high-risk AI systems must have a system to monitor their products after they are sold or deployed. This allows them to learn from real-world use, improve their systems, and take corrective action quickly. Post-market monitoring should include how the AI system works with other AI systems, devices, and software. However, it should not include sensitive operational data from law enforcement authorities. This monitoring is especially important because some AI systems continue to learn after being released to the market. Providers must also report serious incidents to relevant authorities. Serious incidents include: deaths or serious health damage, major disruption to critical infrastructure operations, violations of EU laws protecting fundamental rights, or serious damage to property or the environment.

To enforce this Regulation effectively, the market surveillance and compliance system established by EU Regulation 2019/1020 applies fully. Market surveillance authorities designated under this Regulation have all enforcement powers from both this Regulation and Regulation 2019/1020. These authorities must exercise their powers independently, fairly, and without bias.

<p>to safeguard the principles of objectivity of their activities and tasks and to ensure the application and <br /> implementation of this Regulation. The members of these authorities should refrain from any action incompatible <br /> with their duties and should be subject to confidentiality rules under this Regulation.<br /> (155)<br /> In order to ensure that providers of high-risk AI systems can take into account the experience on the use of high-risk <br /> AI systems for improving their systems and the design and development process or can take any possible corrective <br /> action in a timely manner, all providers should have a post-market monitoring system in place. Where relevant, <br /> post-market monitoring should include an analysis of the interaction with other AI systems including other devices <br /> and software. Post-market monitoring should not cover sensitive operational data of deployers which are law <br /> enforcement authorities. This system is also key to ensure that the possible risks emerging from AI systems which <br /> continue to ‘learn’ after being placed on the market or put into service can be more efficiently and timely addressed. <br /> In this context, providers should also be required to have a system in place to report to the relevant authorities any <br /> serious incidents resulting from the use of their AI systems, meaning incident or malfunctioning leading to death or <br /> serious damage to health, serious and irreversible disruption of the management and operation of critical <br /> infrastructure, infringements of obligations under Union law intended to protect fundamental rights or serious <br /> damage to property or the environment.<br /> (156)<br /> In order to ensure an appropriate and effective enforcement of the requirements and obligations set out by this <br /> Regulation, which is Union harmonisation legislation, the system of market surveillance and compliance of products <br /> established by Regulation (EU) 2019/1020 should apply in its entirety. Market surveillance authorities designated <br /> pursuant to this Regulation should have all enforcement powers laid down in this Regulation and in Regulation (EU) <br /> 2019/1020 and should exercise their powers and carry out their duties independently, impartially and without bias.</p>
Show original text

Market surveillance authorities designated under this Regulation must have all enforcement powers from this Regulation and Regulation (EU) 2019/1020. They must exercise these powers independently, fairly, and without bias. While most AI systems are not subject to specific requirements under this Regulation, surveillance authorities can take action against any AI system that poses a risk. For EU institutions, agencies, and bodies covered by this Regulation, the European Data Protection Supervisor should be designated as the market surveillance authority, without preventing Member States from designating their own national authorities. Surveillance activities must not interfere with the independence of supervised entities when required by EU law. This Regulation does not affect the powers and independence of national authorities that enforce EU laws protecting fundamental rights, such as equality bodies and data protection authorities. These national authorities should have access to documentation created under this Regulation when needed for their work. A special safeguard procedure must be established to quickly enforce action against AI systems that risk health, safety, or fundamental rights. This procedure applies to high-risk AI systems presenting a risk, prohibited AI systems illegally placed on the market or put into service, and AI systems made available in violation of transparency requirements that present a risk.

<p>surveillance authorities designated <br /> pursuant to this Regulation should have all enforcement powers laid down in this Regulation and in Regulation (EU) <br /> 2019/1020 and should exercise their powers and carry out their duties independently, impartially and without bias. <br /> Although the majority of AI systems are not subject to specific requirements and obligations under this Regulation, <br /> market surveillance authorities may take measures in relation to all AI systems when they present a risk in <br /> accordance with this Regulation. Due to the specific nature of Union institutions, agencies and bodies falling within <br /> the scope of this Regulation, it is appropriate to designate the European Data Protection Supervisor as a competent <br /> market surveillance authority for them. This should be without prejudice to the designation of national competent <br /> authorities by the Member States. Market surveillance activities should not affect the ability of the supervised entities <br /> to carry out their tasks independently, when such independence is required by Union law.<br /> (157)<br /> This Regulation is without prejudice to the competences, tasks, powers and independence of relevant national public <br /> authorities or bodies which supervise the application of Union law protecting fundamental rights, including equality <br /> bodies and data protection authorities. Where necessary for their mandate, those national public authorities or <br /> bodies should also have access to any documentation created under this Regulation. A specific safeguard procedure <br /> should be set for ensuring adequate and timely enforcement against AI systems presenting a risk to health, safety and <br /> fundamental rights. The procedure for such AI systems presenting a risk should be applied to high-risk AI systems <br /> presenting a risk, prohibited systems which have been placed on the market, put into service or used in violation of <br /> the prohibited practices laid down in this Regulation and AI systems which have been made available in violation of <br /> the transparency requirements laid down in this Regulation and present a risk.</p>
Show original text

AI systems that are sold, put into use, or used in ways that break the rules in this Regulation are prohibited. The same applies to AI systems that are made available without meeting the required transparency standards and that pose a risk. Financial institutions in the EU must follow internal governance and risk management rules when using AI systems. To ensure these rules are applied consistently, the authorities that supervise financial institutions—including those defined in EU Regulation 575/2013 and Directives 2008/48/EC and 2009/138/EC—must enforce both this Regulation and the relevant EU financial services laws.

<p>which have been placed on the market, put into service or used in violation of <br /> the prohibited practices laid down in this Regulation and AI systems which have been made available in violation of <br /> the transparency requirements laid down in this Regulation and present a risk.<br /> (158)<br /> Union financial services law includes internal governance and risk-management rules and requirements which are <br /> applicable to regulated financial institutions in the course of provision of those services, including when they make <br /> use of AI systems. In order to ensure coherent application and enforcement of the obligations under this Regulation <br /> and relevant rules and requirements of the Union financial services legal acts, the competent authorities for the <br /> supervision and enforcement of those legal acts, in particular competent authorities as defined in Regulation (EU) <br /> No 575/2013 of the European Parliament and of the Council (46) and Directives 2008/48/EC (47), 2009/138/EC (48), <br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 39/144<br /> (46)<br /> Regulation (EU) No 575/2013 of the European Parliament and of the Council of 26 June 2013 on prudential requirements for credit <br /> institutions and investment firms and amending Regulation (EU) No 648/2012 (OJ L 176, 27.6.2013, p. 1).<br /> (47)<br /> Directive 2008/48/EC of the European Parliament and of the Council of 23 April 2008 on credit agreements for consumers and <br /> repealing Council Directive 87/102/EEC (OJ L 133, 22.5.2008, p. 66).</p>
Show original text

Financial regulators designated under EU Directives 2013/36/EU, 2014/17/EU, and 2016/97 should supervise artificial intelligence (AI) systems used by banks and financial institutions. These authorities must enforce all requirements of this Regulation and have powers to conduct market surveillance activities, which can be integrated into their existing supervisory procedures. For credit institutions participating in the Single Supervisory Mechanism (established by EU Regulation 1024/2013), national supervisory authorities must promptly report to the European Central Bank any information discovered during their market surveillance that may be relevant to the Central Bank's work. This applies to AI systems provided or used by regulated financial institutions, unless Member States choose to assign these surveillance tasks to a different authority.

<p>and of the Council of 23 April 2008 on credit agreements for consumers and <br /> repealing Council Directive 87/102/EEC (OJ L 133, 22.5.2008, p. 66).<br /> (48)<br /> Directive 2009/138/EC of the European Parliament and of the Council of 25 November 2009 on the taking-up and pursuit of the <br /> business of Insurance and Reinsurance (Solvency II) (OJ L 335, 17.12.2009, p. 1).</p> <p>2013/36/EU (49), 2014/17/EU (50) and (EU) 2016/97 (51) of the European Parliament and of the Council, should be <br /> designated, within their respective competences, as competent authorities for the purpose of supervising the <br /> implementation of this Regulation, including for market surveillance activities, as regards AI systems provided or <br /> used by regulated and supervised financial institutions unless Member States decide to designate another authority to <br /> fulfil these market surveillance tasks. Those competent authorities should have all powers under this Regulation and <br /> Regulation (EU) 2019/1020 to enforce the requirements and obligations of this Regulation, including powers to <br /> carry our ex post market surveillance activities that can be integrated, as appropriate, into their existing supervisory <br /> mechanisms and procedures under the relevant Union financial services law. It is appropriate to envisage that, when <br /> acting as market surveillance authorities under this Regulation, the national authorities responsible for the <br /> supervision of credit institutions regulated under Directive 2013/36/EU, which are participating in the Single <br /> Supervisory Mechanism established by Council Regulation (EU) No 1024/2013 (52), should report, without delay, to <br /> the European Central Bank any information identified in the course of their market surveillance activities that may <br /> be of potential interest for the European Central Bank’s</p>
Show original text

Financial regulators must report important information to the European Central Bank without delay if it relates to the Central Bank's supervisory duties under Regulation (EU) No 1024/2013. To align this regulation with rules for banks under Directive 2013/36/EU, certain provider obligations regarding risk management, post-market monitoring, and documentation should be integrated into existing banking rules. To prevent duplication, some exceptions are allowed for quality management systems and monitoring of high-risk AI systems when they apply to regulated banks. The same approach applies to insurance companies, reinsurance firms, insurance holding companies under Directive 2009/138/EC, insurance intermediaries under Directive (EU) 2016/97, and other financial institutions with internal governance requirements under EU financial services law, ensuring fair and consistent treatment across the financial sector. Market surveillance authorities overseeing high-risk AI systems used in biometrics, law enforcement, migration, asylum, border control, justice administration, and democratic processes must have strong investigative and corrective powers. These authorities must be able to access all personal data being processed and obtain all necessary information to perform their duties, and they must exercise these powers independently.

<p>Regulation (EU) No 1024/2013 (52), should report, without delay, to <br /> the European Central Bank any information identified in the course of their market surveillance activities that may <br /> be of potential interest for the European Central Bank’s prudential supervisory tasks as specified in that Regulation. <br /> To further enhance the consistency between this Regulation and the rules applicable to credit institutions regulated <br /> under Directive 2013/36/EU, it is also appropriate to integrate some of the providers’ procedural obligations in <br /> relation to risk management, post marketing monitoring and documentation into the existing obligations and <br /> procedures under Directive 2013/36/EU. In order to avoid overlaps, limited derogations should also be envisaged in <br /> relation to the quality management system of providers and the monitoring obligation placed on deployers of <br /> high-risk AI systems to the extent that these apply to credit institutions regulated by Directive 2013/36/EU. The <br /> same regime should apply to insurance and re-insurance undertakings and insurance holding companies under <br /> Directive 2009/138/EC and the insurance intermediaries under Directive (EU) 2016/97 and other types of financial <br /> institutions subject to requirements regarding internal governance, arrangements or processes established pursuant <br /> to the relevant Union financial services law to ensure consistency and equal treatment in the financial sector.<br /> (159)<br /> Each market surveillance authority for high-risk AI systems in the area of biometrics, as listed in an annex to this <br /> Regulation insofar as those systems are used for the purposes of law enforcement, migration, asylum and border <br /> control management, or the administration of justice and democratic processes, should have effective investigative <br /> and corrective powers, including at least the power to obtain access to all personal data that are being processed and <br /> to all information necessary for the performance of its tasks. The market surveillance authorities should be able to <br /> exercise their powers by acting with complete independence.</p>
Show original text

Market surveillance authorities need full access to personal data and information required to perform their duties. They must work independently without restrictions. These authorities and the Commission can work together on joint activities, including investigations, to ensure compliance with AI regulations. These joint efforts focus on high-risk AI systems that pose serious risks across multiple Member States. The AI Office will support these joint investigations. To prevent overlapping responsibilities, when an AI system uses a general-purpose AI model provided by the same company, supervision should be clearly defined at both the EU and national levels.

<p>corrective powers, including at least the power to obtain access to all personal data that are being processed and <br /> to all information necessary for the performance of its tasks. The market surveillance authorities should be able to <br /> exercise their powers by acting with complete independence. Any limitations of their access to sensitive operational <br /> data under this Regulation should be without prejudice to the powers conferred to them by Directive <br /> (EU) 2016/680. No exclusion on disclosing data to national data protection authorities under this Regulation should <br /> affect the current or future powers of those authorities beyond the scope of this Regulation.<br /> (160)<br /> The market surveillance authorities and the Commission should be able to propose joint activities, including joint <br /> investigations, to be conducted by market surveillance authorities or market surveillance authorities jointly with the <br /> Commission, that have the aim of promoting compliance, identifying non-compliance, raising awareness and <br /> providing guidance in relation to this Regulation with respect to specific categories of high-risk AI systems that are <br /> found to present a serious risk across two or more Member States. Joint activities to promote compliance should be <br /> carried out in accordance with Article 9 of Regulation (EU) 2019/1020. The AI Office should provide coordination <br /> support for joint investigations.<br /> (161)<br /> It is necessary to clarify the responsibilities and competences at Union and national level as regards AI systems that <br /> are built on general-purpose AI models. To avoid overlapping competences, where an AI system is based on <br /> a general-purpose AI model and the model and system are provided by the same provider, the supervision should <br /> EN<br /> OJ L, 12.7.</p>
Show original text

To prevent duplicate oversight, when an AI system uses a general-purpose AI model and both are provided by the same company, supervision should be coordinated. Several EU directives govern financial services: Directive 2013/36/EU regulates credit institutions and investment firms; Directive 2014/17/EU covers consumer credit for home purchases; Directive 2016/97/EU addresses insurance distribution; and Regulation 1024/2013 gives the European Central Bank responsibility for supervising credit institutions.

<p>-purpose AI models. To avoid overlapping competences, where an AI system is based on <br /> a general-purpose AI model and the model and system are provided by the same provider, the supervision should <br /> EN<br /> OJ L, 12.7.2024<br /> 40/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> (49)<br /> Directive 2013/36/EU of the European Parliament and of the Council of 26 June 2013 on access to the activity of credit institutions <br /> and the prudential supervision of credit institutions and investment firms, amending Directive 2002/87/EC and repealing <br /> Directives 2006/48/EC and 2006/49/EC (OJ L 176, 27.6.2013, p. 338).<br /> (50)<br /> Directive 2014/17/EU of the European Parliament and of the Council of 4 February 2014 on credit agreements for consumers <br /> relating to residential immovable property and amending Directives 2008/48/EC and 2013/36/EU and Regulation (EU) <br /> No 1093/2010 (OJ L 60, 28.2.2014, p. 34).<br /> (51)<br /> Directive (EU) 2016/97 of the European Parliament and of the Council of 20 January 2016 on insurance distribution (OJ L 26, <br /> 2.2.2016, p. 19).<br /> (52)<br /> Council Regulation (EU) No 1024/2013 of 15 October 2013 conferring specific tasks on the European Central Bank concerning <br /> policies relating to the prudential supervision of credit institutions (OJ L 287, 29.10.2013, p. 63).</p>
Show original text

The European Central Bank received specific tasks on October 15, 2013 to oversee credit institutions. AI system supervision happens at the European Union level through the AI Office, which acts as a market surveillance authority under EU Regulation 2019/1020. National authorities handle supervision of most AI systems. However, when general-purpose AI systems can be used for high-risk purposes, market surveillance authorities must work with the AI Office to check compliance and report findings to the Board and other authorities. If a national authority cannot complete an investigation into a high-risk AI system because it cannot access information about the underlying general-purpose AI model, it can request help from the AI Office. In these cases, the cross-border assistance procedures from EU Regulation 2019/1020 apply. To use expertise and resources efficiently across the EU, the Commission has the power to supervise and enforce rules for general-purpose AI model providers. The AI Office can investigate violations of these rules on its own, based on monitoring results, or when asked by market surveillance authorities.

<p>of 15 October 2013 conferring specific tasks on the European Central Bank concerning <br /> policies relating to the prudential supervision of credit institutions (OJ L 287, 29.10.2013, p. 63).</p> <p>take place at Union level through the AI Office, which should have the powers of a market surveillance authority <br /> within the meaning of Regulation (EU) 2019/1020 for this purpose. In all other cases, national market surveillance <br /> authorities remain responsible for the supervision of AI systems. However, for general-purpose AI systems that can <br /> be used directly by deployers for at least one purpose that is classified as high-risk, market surveillance authorities <br /> should cooperate with the AI Office to carry out evaluations of compliance and inform the Board and other market <br /> surveillance authorities accordingly. Furthermore, market surveillance authorities should be able to request <br /> assistance from the AI Office where the market surveillance authority is unable to conclude an investigation on <br /> a high-risk AI system because of its inability to access certain information related to the general-purpose AI model <br /> on which the high-risk AI system is built. In such cases, the procedure regarding mutual assistance in cross-border <br /> cases in Chapter VI of Regulation (EU) 2019/1020 should apply mutatis mutandis.<br /> (162)<br /> To make best use of the centralised Union expertise and synergies at Union level, the powers of supervision and <br /> enforcement of the obligations on providers of general-purpose AI models should be a competence of the <br /> Commission. The AI Office should be able to carry out all necessary actions to monitor the effective implementation <br /> of this Regulation as regards general-purpose AI models. It should be able to investigate possible infringements of <br /> the rules on providers of general-purpose AI models both on its own initiative, following the results of its <br /> monitoring activities, or upon request from market surveillance authorities in line with the conditions set out in this <br /> Regulation.</p>
Show original text

The AI Office will monitor whether providers of general-purpose AI models follow the rules. It can start investigations on its own, based on its monitoring activities, or when market surveillance authorities request it. Downstream providers can also file complaints about rule violations. A scientific panel will help the AI Office by monitoring general-purpose AI models and alerting the AI Office when it suspects a model poses a serious risk to the EU or meets criteria for classification as a high-risk systemic AI model. The scientific panel can request the Commission to obtain necessary documentation and information from providers. The AI Office has the authority to ensure providers comply with their obligations. It can investigate violations by requesting documents and information, conducting evaluations, and requiring providers to take corrective actions. When evaluating models, the AI Office may hire independent experts to conduct these evaluations on its behalf.

<p>possible infringements of <br /> the rules on providers of general-purpose AI models both on its own initiative, following the results of its <br /> monitoring activities, or upon request from market surveillance authorities in line with the conditions set out in this <br /> Regulation. To support effective monitoring of the AI Office, it should provide for the possibility that downstream <br /> providers lodge complaints about possible infringements of the rules on providers of general-purpose AI models and <br /> systems.<br /> (163)<br /> With a view to complementing the governance systems for general-purpose AI models, the scientific panel should <br /> support the monitoring activities of the AI Office and may, in certain cases, provide qualified alerts to the AI Office <br /> which trigger follow-ups, such as investigations. This should be the case where the scientific panel has reason to <br /> suspect that a general-purpose AI model poses a concrete and identifiable risk at Union level. Furthermore, this <br /> should be the case where the scientific panel has reason to suspect that a general-purpose AI model meets the criteria <br /> that would lead to a classification as general-purpose AI model with systemic risk. To equip the scientific panel with <br /> the information necessary for the performance of those tasks, there should be a mechanism whereby the scientific <br /> panel can request the Commission to require documentation or information from a provider.<br /> (164)<br /> The AI Office should be able to take the necessary actions to monitor the effective implementation of and <br /> compliance with the obligations for providers of general-purpose AI models laid down in this Regulation. The AI <br /> Office should be able to investigate possible infringements in accordance with the powers provided for in this <br /> Regulation, including by requesting documentation and information, by conducting evaluations, as well as by <br /> requesting measures from providers of general-purpose AI models. When conducting evaluations, in order to make <br /> use of independent expertise, the AI Office should be able to involve independent experts to carry out the <br /> evaluations on its behalf.</p>
Show original text

The AI Office should be able to hire independent experts to help evaluate general-purpose AI models. Providers must follow the rules, which can be enforced by requiring them to take corrective actions, reduce risks, or remove models from the market if needed. Providers of general-purpose AI models have the same procedural rights as outlined in EU Regulation 2019/1020, unless this Regulation provides more specific rights. To encourage ethical and trustworthy AI in the EU, providers of non-high-risk AI systems should be encouraged to create codes of conduct that voluntarily apply some or all of the requirements that apply to high-risk systems. These codes should be adjusted based on the system's purpose, lower risk level, available technical solutions, and industry best practices like model and data cards. All AI providers and deployers, whether their systems are high-risk or not, should also be encouraged to voluntarily adopt additional requirements based on the EU's Ethics Guidelines for Trustworthy AI.

<p>well as by <br /> requesting measures from providers of general-purpose AI models. When conducting evaluations, in order to make <br /> use of independent expertise, the AI Office should be able to involve independent experts to carry out the <br /> evaluations on its behalf. Compliance with the obligations should be enforceable, inter alia, through requests to take <br /> appropriate measures, including risk mitigation measures in the case of identified systemic risks as well as restricting <br /> the making available on the market, withdrawing or recalling the model. As a safeguard, where needed beyond the <br /> procedural rights provided for in this Regulation, providers of general-purpose AI models should have the <br /> procedural rights provided for in Article 18 of Regulation (EU) 2019/1020, which should apply mutatis mutandis, <br /> without prejudice to more specific procedural rights provided for by this Regulation.<br /> (165)<br /> The development of AI systems other than high-risk AI systems in accordance with the requirements of this <br /> Regulation may lead to a larger uptake of ethical and trustworthy AI in the Union. Providers of AI systems that are <br /> not high-risk should be encouraged to create codes of conduct, including related governance mechanisms, intended <br /> to foster the voluntary application of some or all of the mandatory requirements applicable to high-risk AI systems, <br /> adapted in light of the intended purpose of the systems and the lower risk involved and taking into account the <br /> available technical solutions and industry best practices such as model and data cards. Providers and, as appropriate, <br /> deployers of all AI systems, high-risk or not, and AI models should also be encouraged to apply on a voluntary basis <br /> additional requirements related, for example, to the elements of the Union’s Ethics Guidelines for Trustworthy AI, <br /> OJ L, 12.7.</p>
Show original text

AI companies are encouraged to voluntarily adopt additional standards beyond legal requirements. These standards should include ethical practices from the EU's Trustworthy AI Guidelines, environmental responsibility, AI education programs, inclusive design that considers vulnerable groups and people with disabilities, involvement of stakeholders (businesses, civil society, universities, research organizations, unions, and consumer groups), and diverse development teams with gender balance. To make these voluntary codes work effectively, they need clear goals and measurable performance indicators. They should be developed with input from relevant stakeholders. The EU Commission may create initiatives to remove technical barriers that prevent data sharing across borders for AI development, including improving data access and compatibility between different data types. Non-high-risk AI systems are not required to meet the strict requirements for high-risk systems, but they must still be safe when sold or used. EU Regulation 2023/988 serves as a safety requirement for these lower-risk AI systems.

<p>high-risk or not, and AI models should also be encouraged to apply on a voluntary basis <br /> additional requirements related, for example, to the elements of the Union’s Ethics Guidelines for Trustworthy AI, <br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 41/144</p> <p>environmental sustainability, AI literacy measures, inclusive and diverse design and development of AI systems, <br /> including attention to vulnerable persons and accessibility to persons with disability, stakeholders’ participation with <br /> the involvement, as appropriate, of relevant stakeholders such as business and civil society organisations, academia, <br /> research organisations, trade unions and consumer protection organisations in the design and development of AI <br /> systems, and diversity of the development teams, including gender balance. To ensure that the voluntary codes of <br /> conduct are effective, they should be based on clear objectives and key performance indicators to measure the <br /> achievement of those objectives. They should also be developed in an inclusive way, as appropriate, with the <br /> involvement of relevant stakeholders such as business and civil society organisations, academia, research <br /> organisations, trade unions and consumer protection organisation. The Commission may develop initiatives, <br /> including of a sectoral nature, to facilitate the lowering of technical barriers hindering cross-border exchange of data <br /> for AI development, including on data access infrastructure, semantic and technical interoperability of different types <br /> of data.<br /> (166)<br /> It is important that AI systems related to products that are not high-risk in accordance with this Regulation and thus <br /> are not required to comply with the requirements set out for high-risk AI systems are nevertheless safe when placed <br /> on the market or put into service. To contribute to this objective, Regulation (EU) 2023/988 of the European <br /> Parliament and of the Council (53) would apply as a safety net.</p>
Show original text

Systems are safe when released to the market or put into use. Regulation (EU) 2023/988 from the European Parliament and Council provides additional safety protection to support this goal.

To ensure good cooperation between authorities at the EU and national levels, everyone involved in applying this Regulation must keep information and data confidential, following EU or national laws. They must protect intellectual property rights, business secrets, the proper implementation of this Regulation, public and national security, and the integrity of legal proceedings and classified information.

This Regulation must be enforced through penalties and other measures. Member States must ensure the Regulation is followed by setting effective, fair, and strong penalties for violations, while respecting the principle that no one can be punished twice for the same offense. To create consistent penalties across the EU, maximum fines for specific violations are set. When deciding on fine amounts, Member States must consider all relevant facts, including the type, seriousness, and length of the violation, its impact, and the size of the company involved (especially small businesses and startups). The European Data Protection Supervisor has the power to impose fines on EU institutions, agencies, and organizations covered by this Regulation.

<p>systems are nevertheless safe when placed <br /> on the market or put into service. To contribute to this objective, Regulation (EU) 2023/988 of the European <br /> Parliament and of the Council (53) would apply as a safety net.<br /> (167)<br /> In order to ensure trustful and constructive cooperation of competent authorities on Union and national level, all <br /> parties involved in the application of this Regulation should respect the confidentiality of information and data <br /> obtained in carrying out their tasks, in accordance with Union or national law. They should carry out their tasks and <br /> activities in such a manner as to protect, in particular, intellectual property rights, confidential business information <br /> and trade secrets, the effective implementation of this Regulation, public and national security interests, the integrity <br /> of criminal and administrative proceedings, and the integrity of classified information.<br /> (168)<br /> Compliance with this Regulation should be enforceable by means of the imposition of penalties and other <br /> enforcement measures. Member States should take all necessary measures to ensure that the provisions of this <br /> Regulation are implemented, including by laying down effective, proportionate and dissuasive penalties for their <br /> infringement, and to respect the ne bis in idem principle. In order to strengthen and harmonise administrative <br /> penalties for infringement of this Regulation, the upper limits for setting the administrative fines for certain specific <br /> infringements should be laid down. When assessing the amount of the fines, Member States should, in each <br /> individual case, take into account all relevant circumstances of the specific situation, with due regard in particular to <br /> the nature, gravity and duration of the infringement and of its consequences and to the size of the provider, in <br /> particular if the provider is an SME, including a start-up. The European Data Protection Supervisor should have the <br /> power to impose fines on Union institutions, agencies and bodies falling within the scope of this Regulation.</p>
Show original text

The European Data Protection Supervisor can fine EU institutions, agencies, and bodies that break this regulation. Companies providing general-purpose AI models must follow certain rules, and fines can be imposed if they don't comply or ignore Commission requests. These fines must be fair and follow time limits. The Court of Justice of the European Union can review all Commission decisions and has full power to change penalties. People and organizations whose rights are harmed by AI systems can already use existing EU and national laws for help. Additionally, anyone who believes this regulation has been broken can file a complaint with the market surveillance authority. People have the right to get an explanation when a decision made by an AI system causes them legal problems or significantly harms them in other ways.

<p>the size of the provider, in <br /> particular if the provider is an SME, including a start-up. The European Data Protection Supervisor should have the <br /> power to impose fines on Union institutions, agencies and bodies falling within the scope of this Regulation.<br /> (169)<br /> Compliance with the obligations on providers of general-purpose AI models imposed under this Regulation should <br /> be enforceable, inter alia, by means of fines. To that end, appropriate levels of fines should also be laid down for <br /> infringement of those obligations, including the failure to comply with measures requested by the Commission in <br /> accordance with this Regulation, subject to appropriate limitation periods in accordance with the principle of <br /> proportionality. All decisions taken by the Commission under this Regulation are subject to review by the Court of <br /> Justice of the European Union in accordance with the TFEU, including the unlimited jurisdiction of the Court of <br /> Justice with regard to penalties pursuant to Article 261 TFEU.<br /> (170)<br /> Union and national law already provide effective remedies to natural and legal persons whose rights and freedoms <br /> are adversely affected by the use of AI systems. Without prejudice to those remedies, any natural or legal person that <br /> has grounds to consider that there has been an infringement of this Regulation should be entitled to lodge <br /> a complaint to the relevant market surveillance authority.<br /> (171)<br /> Affected persons should have the right to obtain an explanation where a deployer’s decision is based mainly upon <br /> the output from certain high-risk AI systems that fall within the scope of this Regulation and where that decision <br /> produces legal effects or similarly significantly affects those persons in a way that they consider to have an adverse <br /> EN<br /> OJ L, 12.7.</p>
Show original text

High-risk AI systems covered by this Regulation must provide clear explanations to people when AI decisions significantly affect their health, safety, or fundamental rights. These explanations should be understandable and help people exercise their legal rights. However, this requirement does not apply to AI systems that are already exempt under EU or national law, or where similar protections already exist under other EU laws. People who report violations of this Regulation should be protected as whistleblowers under EU Directive 2019/1937.

<p>certain high-risk AI systems that fall within the scope of this Regulation and where that decision <br /> produces legal effects or similarly significantly affects those persons in a way that they consider to have an adverse <br /> EN<br /> OJ L, 12.7.2024<br /> 42/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> (53)<br /> Regulation (EU) 2023/988 of the European Parliament and of the Council of 10 May 2023 on general product safety, amending <br /> Regulation (EU) No 1025/2012 of the European Parliament and of the Council and Directive (EU) 2020/1828 of the European <br /> Parliament and the Council, and repealing Directive 2001/95/EC of the European Parliament and of the Council and Council <br /> Directive 87/357/EEC (OJ L 135, 23.5.2023, p. 1).</p> <p>impact on their health, safety or fundamental rights. That explanation should be clear and meaningful and should <br /> provide a basis on which the affected persons are able to exercise their rights. The right to obtain an explanation <br /> should not apply to the use of AI systems for which exceptions or restrictions follow from Union or national law <br /> and should apply only to the extent this right is not already provided for under Union law.<br /> (172)<br /> Persons acting as whistleblowers on the infringements of this Regulation should be protected under the Union law. <br /> Directive (EU) 2019/1937 of the European Parliament and of the Council (54) should therefore apply to the reporting <br /> of infringements of this Regulation and the protection of persons reporting such infringements.</p>
Show original text

EU Directive 2019/1937 applies to reporting violations of this Regulation and protecting people who report them. The European Commission has the power to update the rules as needed, including: what makes an AI system high-risk, the list of high-risk AI systems, technical documentation requirements, EU conformity declarations, conformity assessment procedures, quality management standards, thresholds and benchmarks for classifying general-purpose AI models with systemic risk, criteria for identifying high-risk general-purpose AI models, and transparency requirements for AI providers. When preparing these updates, the Commission must consult with experts following the Better Law-Making Agreement of April 13, 2016. The European Parliament and Council must receive all documents at the same time as Member States' experts, and their representatives must have equal access to Commission expert group meetings.

<p>under the Union law. <br /> Directive (EU) 2019/1937 of the European Parliament and of the Council (54) should therefore apply to the reporting <br /> of infringements of this Regulation and the protection of persons reporting such infringements.<br /> (173)<br /> In order to ensure that the regulatory framework can be adapted where necessary, the power to adopt acts in <br /> accordance with Article 290 TFEU should be delegated to the Commission to amend the conditions under which an <br /> AI system is not to be considered to be high-risk, the list of high-risk AI systems, the provisions regarding technical <br /> documentation, the content of the EU declaration of conformity the provisions regarding the conformity assessment <br /> procedures, the provisions establishing the high-risk AI systems to which the conformity assessment procedure <br /> based on assessment of the quality management system and assessment of the technical documentation should <br /> apply, the threshold, benchmarks and indicators, including by supplementing those benchmarks and indicators, in <br /> the rules for the classification of general-purpose AI models with systemic risk, the criteria for the designation of <br /> general-purpose AI models with systemic risk, the technical documentation for providers of general-purpose AI <br /> models and the transparency information for providers of general-purpose AI models. It is of particular importance <br /> that the Commission carry out appropriate consultations during its preparatory work, including at expert level, and <br /> that those consultations be conducted in accordance with the principles laid down in the Interinstitutional <br /> Agreement of 13 April 2016 on Better Law-Making (55). In particular, to ensure equal participation in the <br /> preparation of delegated acts, the European Parliament and the Council receive all documents at the same time as <br /> Member States’ experts, and their experts systematically have access to meetings of Commission expert groups <br /> dealing with the preparation of delegated acts.</p>
Show original text

The European Parliament and Council receive all documents about delegated acts at the same time as Member States' experts. Their experts can attend Commission expert group meetings that prepare these delegated acts.

Because technology changes quickly and this Regulation requires technical expertise, the Commission must review it by August 2, 2029, and every four years after that, reporting to the European Parliament and Council. The Commission must also assess annually whether to update the list of high-risk AI systems and prohibited practices.

By August 2, 2028, and every four years thereafter, the Commission must evaluate and report on: the list of high-risk areas in the Regulation's annex, AI systems covered by transparency rules, how well the supervision and governance system works, and progress on energy-efficient standards for general-purpose AI models.

By August 2, 2028, and every three years thereafter, the Commission must assess how well voluntary codes of conduct work for non-high-risk AI systems and whether additional requirements are needed.

To implement this Regulation consistently, the Commission receives implementing powers under Regulation (EU) No 182/2011 of the European Parliament and Council.

<p>the <br /> preparation of delegated acts, the European Parliament and the Council receive all documents at the same time as <br /> Member States’ experts, and their experts systematically have access to meetings of Commission expert groups <br /> dealing with the preparation of delegated acts.<br /> (174)<br /> Given the rapid technological developments and the technical expertise required to effectively apply this Regulation, <br /> the Commission should evaluate and review this Regulation by 2 August 2029 and every four years thereafter and <br /> report to the European Parliament and the Council. In addition, taking into account the implications for the scope of <br /> this Regulation, the Commission should carry out an assessment of the need to amend the list of high-risk AI <br /> systems and the list of prohibited practices once a year. Moreover, by 2 August 2028 and every four years thereafter, <br /> the Commission should evaluate and report to the European Parliament and to the Council on the need to amend <br /> the list of high-risk areas headings in the annex to this Regulation, the AI systems within the scope of the <br /> transparency obligations, the effectiveness of the supervision and governance system and the progress on the <br /> development of standardisation deliverables on energy efficient development of general-purpose AI models, <br /> including the need for further measures or actions. Finally, by 2 August 2028 and every three years thereafter, the <br /> Commission should evaluate the impact and effectiveness of voluntary codes of conduct to foster the application of <br /> the requirements provided for high-risk AI systems in the case of AI systems other than high-risk AI systems and <br /> possibly other additional requirements for such AI systems.<br /> (175)<br /> In order to ensure uniform conditions for the implementation of this Regulation, implementing powers should be <br /> conferred on the Commission. Those powers should be exercised in accordance with Regulation (EU) No 182/2011 <br /> of the European Parliament and of the Council (56).</p>
Show original text

The Commission should be given the power to create uniform rules for implementing this Regulation, following the procedures set out in EU Regulation 182/2011. This Regulation aims to improve the internal market and encourage the development of trustworthy, human-centered AI systems while protecting health, safety, fundamental rights (including democracy and the rule of law), and the environment from harmful AI effects. It also supports innovation. Since Member States alone cannot achieve these goals effectively, and these issues are better handled at the EU level due to their scale and impact, the EU may adopt measures following the principle of subsidiarity as outlined in Article 5 of the Treaty on European Union.

<p>uniform conditions for the implementation of this Regulation, implementing powers should be <br /> conferred on the Commission. Those powers should be exercised in accordance with Regulation (EU) No 182/2011 <br /> of the European Parliament and of the Council (56).<br /> (176)<br /> Since the objective of this Regulation, namely to improve the functioning of the internal market and to promote the <br /> uptake of human centric and trustworthy AI, while ensuring a high level of protection of health, safety, fundamental <br /> rights enshrined in the Charter, including democracy, the rule of law and environmental protection against harmful <br /> effects of AI systems in the Union and supporting innovation, cannot be sufficiently achieved by the Member States <br /> and can rather, by reason of the scale or effects of the action, be better achieved at Union level, the Union may adopt <br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 43/144<br /> (54)<br /> Directive (EU) 2019/1937 of the European Parliament and of the Council of 23 October 2019 on the protection of persons who <br /> report breaches of Union law (OJ L 305, 26.11.2019, p. 17).<br /> (55)<br /> OJ L 123, 12.5.2016, p. 1.<br /> (56)<br /> Regulation (EU) No 182/2011 of the European Parliament and of the Council of 16 February 2011 laying down the rules and <br /> general principles concerning mechanisms for control by Member States of the Commission’s exercise of implementing powers (OJ <br /> L 55, 28.2.2011, p. 13).</p> <p>measures in accordance with the principle of subsidiarity as set out in Article 5 TEU.</p>
Show original text

This regulation follows the principles of subsidiarity and proportionality, meaning it only does what is necessary to achieve its goals. To give businesses time to adapt and prevent market disruption, high-risk AI systems already in use before the regulation takes effect will only need to comply if they undergo significant design or purpose changes after the effective date. A 'significant change' means the same thing as a 'substantial modification' under this regulation. As an exception for public accountability, operators of AI systems that are part of large government IT systems and operators of high-risk AI systems used by public authorities must comply by the end of 2030 and August 2, 2030, respectively. Companies providing high-risk AI systems are encouraged to start following the regulation's requirements voluntarily during the transition period. The regulation becomes effective on August 2, 2026.

<p>States of the Commission’s exercise of implementing powers (OJ <br /> L 55, 28.2.2011, p. 13).</p> <p>measures in accordance with the principle of subsidiarity as set out in Article 5 TEU. In accordance with the <br /> principle of proportionality as set out in that Article, this Regulation does not go beyond what is necessary in order <br /> to achieve that objective.<br /> (177)<br /> In order to ensure legal certainty, ensure an appropriate adaptation period for operators and avoid disruption to the <br /> market, including by ensuring continuity of the use of AI systems, it is appropriate that this Regulation applies to the <br /> high-risk AI systems that have been placed on the market or put into service before the general date of application <br /> thereof, only if, from that date, those systems are subject to significant changes in their design or intended purpose. <br /> It is appropriate to clarify that, in this respect, the concept of significant change should be understood as equivalent <br /> in substance to the notion of substantial modification, which is used with regard only to high-risk AI systems <br /> pursuant to this Regulation. On an exceptional basis and in light of public accountability, operators of AI systems <br /> which are components of the large-scale IT systems established by the legal acts listed in an annex to this Regulation <br /> and operators of high-risk AI systems that are intended to be used by public authorities should, respectively, take the <br /> necessary steps to comply with the requirements of this Regulation by end of 2030 and by 2 August 2030.<br /> (178)<br /> Providers of high-risk AI systems are encouraged to start to comply, on a voluntary basis, with the relevant <br /> obligations of this Regulation already during the transitional period.<br /> (179)<br /> This Regulation should apply from 2 August 2026.</p>
Show original text

Companies that provide high-risk AI systems are encouraged to start following this regulation's rules voluntarily before the official deadline. This regulation takes effect on August 2, 2026. However, because certain uses of AI pose serious risks, some rules will start earlier. Specifically, bans on dangerous AI uses and general rules apply from February 2, 2025. Rules about oversight bodies and governance structures apply from August 2, 2025. Companies that create general-purpose AI models must follow their obligations starting August 2, 2025. Industry guidelines must be ready by May 2, 2025 so companies can prove they follow the rules on time. The AI Office will keep classification rules updated as technology changes. Countries must create penalty rules, including fines, and enforce them by August 2, 2026. Penalty rules take effect on August 2, 2025.

<p>)<br /> Providers of high-risk AI systems are encouraged to start to comply, on a voluntary basis, with the relevant <br /> obligations of this Regulation already during the transitional period.<br /> (179)<br /> This Regulation should apply from 2 August 2026. However, taking into account the unacceptable risk associated <br /> with the use of AI in certain ways, the prohibitions as well as the general provisions of this Regulation should already <br /> apply from 2 February 2025. While the full effect of those prohibitions follows with the establishment of the <br /> governance and enforcement of this Regulation, anticipating the application of the prohibitions is important to take <br /> account of unacceptable risks and to have an effect on other procedures, such as in civil law. Moreover, the <br /> infrastructure related to the governance and the conformity assessment system should be operational before <br /> 2 August 2026, therefore the provisions on notified bodies and governance structure should apply from 2 August <br /> 2025. Given the rapid pace of technological advancements and adoption of general-purpose AI models, obligations <br /> for providers of general-purpose AI models should apply from 2 August 2025. Codes of practice should be ready by <br /> 2 May 2025 in view of enabling providers to demonstrate compliance on time. The AI Office should ensure that <br /> classification rules and procedures are up to date in light of technological developments. In addition, Member States <br /> should lay down and notify to the Commission the rules on penalties, including administrative fines, and ensure that <br /> they are properly and effectively implemented by the date of application of this Regulation. Therefore the provisions <br /> on penalties should apply from 2 August 2025.</p>
Show original text

Member states must establish and inform the Commission about their penalty rules, including administrative fines, and ensure these rules are properly enforced by August 2, 2025, when this Regulation takes effect.

The European Data Protection Supervisor and the European Data Protection Board were consulted and provided their joint opinion on June 18, 2021.

CHAPTER I - GENERAL PROVISIONS

Article 1: Purpose
This Regulation aims to strengthen the EU's internal market and encourage the development of trustworthy, human-centered artificial intelligence (AI). It protects public health, safety, fundamental rights, democracy, the rule of law, and the environment from harmful AI systems while supporting innovation.

This Regulation establishes:
(a) Common rules for selling, deploying, and using AI systems in the EU
(b) Bans on certain AI practices
(c) Special requirements for high-risk AI systems and responsibilities for companies operating them
(d) Common transparency rules for specific AI systems
(e) Common rules for selling general-purpose AI models
(f) Rules for market oversight, monitoring, governance, and enforcement
(g) Support measures for innovation, especially for small and medium-sized businesses and startups

Article 2: Scope

<p>down and notify to the Commission the rules on penalties, including administrative fines, and ensure that <br /> they are properly and effectively implemented by the date of application of this Regulation. Therefore the provisions <br /> on penalties should apply from 2 August 2025.<br /> (180)<br /> The European Data Protection Supervisor and the European Data Protection Board were consulted in accordance <br /> with Article 42(1) and (2) of Regulation (EU) 2018/1725 and delivered their joint opinion on 18 June 2021,<br /> HAVE ADOPTED THIS REGULATION:<br /> CHAPTER I<br /> GENERAL PROVISIONS<br /> Article 1<br /> Subject matter`<br /> 1.<br /> The purpose of this Regulation is to improve the functioning of the internal market and promote the uptake of <br /> human-centric and trustworthy artificial intelligence (AI), while ensuring a high level of protection of health, safety, <br /> fundamental rights enshrined in the Charter, including democracy, the rule of law and environmental protection, against <br /> the harmful effects of AI systems in the Union and supporting innovation.<br /> 2.<br /> This Regulation lays down:<br /> (a) harmonised rules for the placing on the market, the putting into service, and the use of AI systems in the Union;<br /> EN<br /> OJ L, 12.7.2024<br /> 44/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>(b) prohibitions of certain AI practices;<br /> (c) specific requirements for high-risk AI systems and obligations for operators of such systems;<br /> (d) harmonised transparency rules for certain AI systems;<br /> (e) harmonised rules for the placing on the market of general-purpose AI models;<br /> (f) rules on market monitoring, market surveillance, governance and enforcement;<br /> (g) measures to support innovation, with a particular focus on SMEs, including start-ups.<br /> Article 2<br /> Scope<br /> 1.</p>
Show original text

This regulation covers AI systems and general-purpose AI models sold or used in the EU. It applies to companies that provide or use these systems, whether they are based in the EU or another country. It also applies to companies that import, distribute, or sell AI systems, as well as to people affected by these systems in the EU. For high-risk AI systems that are part of products already regulated by EU law, only specific articles of this regulation apply. The regulation does not cover areas outside EU law or affect how member states handle national security matters.

<p>on the market of general-purpose AI models;<br /> (f) rules on market monitoring, market surveillance, governance and enforcement;<br /> (g) measures to support innovation, with a particular focus on SMEs, including start-ups.<br /> Article 2<br /> Scope<br /> 1.<br /> This Regulation applies to:<br /> (a) providers placing on the market or putting into service AI systems or placing on the market general-purpose AI models <br /> in the Union, irrespective of whether those providers are established or located within the Union or in a third country;<br /> (b) deployers of AI systems that have their place of establishment or are located within the Union;<br /> (c) providers and deployers of AI systems that have their place of establishment or are located in a third country, where the <br /> output produced by the AI system is used in the Union;<br /> (d) importers and distributors of AI systems;<br /> (e) product manufacturers placing on the market or putting into service an AI system together with their product and <br /> under their own name or trademark;<br /> (f) authorised representatives of providers, which are not established in the Union;<br /> (g) affected persons that are located in the Union.<br /> 2.<br /> For AI systems classified as high-risk AI systems in accordance with Article 6(1) related to products covered by the <br /> Union harmonisation legislation listed in Section B of Annex I, only Article 6(1), Articles 102 to 109 and Article 112 apply. <br /> Article 57 applies only in so far as the requirements for high-risk AI systems under this Regulation have been integrated in <br /> that Union harmonisation legislation.<br /> 3.<br /> This Regulation does not apply to areas outside the scope of Union law, and shall not, in any event, affect the <br /> competences of the Member States concerning national security, regardless of the type of entity entrusted by the Member <br /> States with carrying out tasks in relation to those competences.</p>
Show original text

This regulation does not affect the EU member states' authority over national security matters, regardless of which organizations handle those responsibilities. AI systems used exclusively for military, defense, or national security purposes are not covered by this regulation, whether they are sold in the EU or not. Foreign governments and international organizations can use AI systems without following this regulation if they are cooperating with the EU or its member states on law enforcement and judicial matters, as long as they protect people's fundamental rights and freedoms. This regulation does not change the rules about liability for internet service providers under EU Regulation 2022/2065. AI systems and models created solely for scientific research and development are also exempt from this regulation.

<p>the scope of Union law, and shall not, in any event, affect the <br /> competences of the Member States concerning national security, regardless of the type of entity entrusted by the Member <br /> States with carrying out tasks in relation to those competences.<br /> This Regulation does not apply to AI systems where and in so far they are placed on the market, put into service, or used <br /> with or without modification exclusively for military, defence or national security purposes, regardless of the type of entity <br /> carrying out those activities.<br /> This Regulation does not apply to AI systems which are not placed on the market or put into service in the Union, where <br /> the output is used in the Union exclusively for military, defence or national security purposes, regardless of the type of <br /> entity carrying out those activities.<br /> 4.<br /> This Regulation applies neither to public authorities in a third country nor to international organisations falling <br /> within the scope of this Regulation pursuant to paragraph 1, where those authorities or organisations use AI systems in the <br /> framework of international cooperation or agreements for law enforcement and judicial cooperation with the Union or <br /> with one or more Member States, provided that such a third country or international organisation provides adequate <br /> safeguards with respect to the protection of fundamental rights and freedoms of individuals.<br /> 5.<br /> This Regulation shall not affect the application of the provisions on the liability of providers of intermediary services <br /> as set out in Chapter II of Regulation (EU) 2022/2065.<br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 45/144</p> <p>6.<br /> This Regulation does not apply to AI systems or AI models, including their output, specifically developed and put into <br /> service for the sole purpose of scientific research and development.<br /> 7.</p>
Show original text

This regulation has several exceptions and limitations:

  1. It does not apply to AI systems created only for scientific research and development.

  2. Personal data protection laws still apply to any personal data used under this regulation. This regulation does not override existing EU data protection laws (Regulations 2016/679, 2018/1725, Directive 2002/58/EC, or Regulation 2016/680), except for specific articles mentioned.

  3. It does not apply to research, testing, or development of AI systems before they are released to the public or put to use. However, real-world testing is not excluded.

  4. It does not affect other EU laws about consumer protection and product safety.

  5. It does not apply to individuals using AI systems for personal, non-work purposes.

  6. The EU and member countries can create stronger laws to protect workers' rights regarding AI use by employers, and can support worker-friendly agreements.

  7. It does not apply to free and open-source AI systems, unless they are released as high-risk AI systems or fall under specific articles (Article 5 or 50).

<p>/2024/1689/oj<br /> 45/144</p> <p>6.<br /> This Regulation does not apply to AI systems or AI models, including their output, specifically developed and put into <br /> service for the sole purpose of scientific research and development.<br /> 7.<br /> Union law on the protection of personal data, privacy and the confidentiality of communications applies to personal <br /> data processed in connection with the rights and obligations laid down in this Regulation. This Regulation shall not affect <br /> Regulation (EU) 2016/679 or (EU) 2018/1725, or Directive 2002/58/EC or (EU) 2016/680, without prejudice to Article <br /> 10(5) and Article 59 of this Regulation.<br /> 8.<br /> This Regulation does not apply to any research, testing or development activity regarding AI systems or AI models <br /> prior to their being placed on the market or put into service. Such activities shall be conducted in accordance with <br /> applicable Union law. Testing in real world conditions shall not be covered by that exclusion.<br /> 9.<br /> This Regulation is without prejudice to the rules laid down by other Union legal acts related to consumer protection <br /> and product safety.<br /> 10.<br /> This Regulation does not apply to obligations of deployers who are natural persons using AI systems in the course of <br /> a purely personal non-professional activity.<br /> 11.<br /> This Regulation does not preclude the Union or Member States from maintaining or introducing laws, regulations or <br /> administrative provisions which are more favourable to workers in terms of protecting their rights in respect of the use of <br /> AI systems by employers, or from encouraging or allowing the application of collective agreements which are more <br /> favourable to workers.<br /> 12.<br /> This Regulation does not apply to AI systems released under free and open-source licences, unless they are placed on <br /> the market or put into service as high-risk AI systems or as an AI system that falls under Article 5 or 50.</p>
Show original text

This regulation does not apply to AI systems released under free and open-source licenses, unless they are high-risk AI systems or fall under Article 5 or 50.

Article 3: Definitions

For this regulation, these terms mean:

(1) AI system: A machine-based system designed to work with varying levels of independence. It can learn and adapt after being released. It takes input and produces outputs like predictions, content, recommendations, or decisions that can affect physical or virtual environments.

(2) Risk: The likelihood that harm will occur multiplied by how severe that harm would be.

(3) Provider: Any person, company, government body, or organization that creates an AI system or general-purpose AI model, or has one created, and then releases it to the market or puts it into use under their own name or trademark—whether for payment or free.

(4) Deployer: Any person, company, government body, or organization that uses an AI system under their control, except when using it for personal, non-professional purposes.

(5) Authorized representative: A person or company located in the European Union who has received written permission from a provider to perform the provider's obligations and follow the procedures required by this regulation.

(6) Importer: A person or company located in the European Union that sells an AI system that carries the name or trademark of a person or company from outside the European Union.

<p>12.<br /> This Regulation does not apply to AI systems released under free and open-source licences, unless they are placed on <br /> the market or put into service as high-risk AI systems or as an AI system that falls under Article 5 or 50.<br /> Article 3<br /> Definitions<br /> For the purposes of this Regulation, the following definitions apply:<br /> (1)<br /> ‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may <br /> exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, <br /> how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or <br /> virtual environments;<br /> (2)<br /> ‘risk’ means the combination of the probability of an occurrence of harm and the severity of that harm;<br /> (3)<br /> ‘provider’ means a natural or legal person, public authority, agency or other body that develops an AI system or <br /> a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the <br /> market or puts the AI system into service under its own name or trademark, whether for payment or free of charge;<br /> (4)<br /> ‘deployer’ means a natural or legal person, public authority, agency or other body using an AI system under its <br /> authority except where the AI system is used in the course of a personal non-professional activity;<br /> (5)<br /> ‘authorised representative’ means a natural or legal person located or established in the Union who has received and <br /> accepted a written mandate from a provider of an AI system or a general-purpose AI model to, respectively, perform <br /> and carry out on its behalf the obligations and procedures established by this Regulation;<br /> (6)<br /> ‘importer’ means a natural or legal person located or established in the Union that places on the market an AI system <br /> that bears the name or trademark of a natural or legal person established in a third country;<br /> (7)</p>
Show original text

This section defines key terms used in EU AI regulations:

  1. Importer: A person or company in the EU that sells an AI system branded with a name or trademark from a company outside the EU.

  2. Distributor: A person or company in the supply chain (not the original provider or importer) that supplies an AI system to the EU market.

  3. Operator: Any person or company involved in providing, manufacturing, deploying, importing, or distributing an AI system.

  4. Placing on the market: The first time an AI system is made available for sale or use in the EU.

  5. Making available on the market: Supplying an AI system for sale or use in the EU as part of a business activity, whether paid or free.

  6. Putting into service: Delivering an AI system to a user or company in the EU for its first use as intended.

  7. Intended purpose: How the provider says the AI system should be used, including the specific situation and conditions, as described in instructions, marketing materials, and technical documents.

  8. Reasonably foreseeable misuse: Using an AI system in a way it was not designed for, but which could happen due to normal human behavior or interaction with other systems.

  9. Safety component: A part of a product or AI system that is important for safety (definition continues).

<p>;<br /> (6)<br /> ‘importer’ means a natural or legal person located or established in the Union that places on the market an AI system <br /> that bears the name or trademark of a natural or legal person established in a third country;<br /> (7)<br /> ‘distributor’ means a natural or legal person in the supply chain, other than the provider or the importer, that makes <br /> an AI system available on the Union market;<br /> (8)<br /> ‘operator’ means a provider, product manufacturer, deployer, authorised representative, importer or distributor;<br /> EN<br /> OJ L, 12.7.2024<br /> 46/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>(9)<br /> ‘placing on the market’ means the first making available of an AI system or a general-purpose AI model on the Union <br /> market;<br /> (10) ‘making available on the market’ means the supply of an AI system or a general-purpose AI model for distribution or <br /> use on the Union market in the course of a commercial activity, whether in return for payment or free of charge;<br /> (11) ‘putting into service’ means the supply of an AI system for first use directly to the deployer or for own use in the Union <br /> for its intended purpose;<br /> (12) ‘intended purpose’ means the use for which an AI system is intended by the provider, including the specific context <br /> and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional <br /> or sales materials and statements, as well as in the technical documentation;<br /> (13) ‘reasonably foreseeable misuse’ means the use of an AI system in a way that is not in accordance with its intended <br /> purpose, but which may result from reasonably foreseeable human behaviour or interaction with other systems, <br /> including other AI systems;<br /> (14) ‘safety component’ means a component of a product or of an AI system which fulf</p>
Show original text

This section defines key terms related to AI systems and safety regulations:

A 'safety component' is any part of a product or AI system that protects people or property from harm. If it fails or breaks, it could endanger health and safety.

'Instructions for use' are guidelines provided by the AI system's creator to explain what the system is designed to do and how to use it correctly.

A 'recall of an AI system' means the creator takes back the system or removes it from use.

A 'withdrawal of an AI system' means preventing the system from being sold or distributed in the market.

'Performance of an AI system' refers to how well the system accomplishes its intended purpose.

A 'notifying authority' is the government agency responsible for checking and approving organizations that test AI systems for safety compliance.

'Conformity assessment' is the process of verifying that a high-risk AI system meets all required safety and quality standards.

A 'conformity assessment body' is an independent organization that tests, certifies, and inspects AI systems to ensure they meet standards.

A 'notified body' is a conformity assessment body officially recognized and approved by the government.

A 'substantial modification' is a significant change made to an AI system after it has been released to the public that was not planned during the original safety testing.

<p>accordance with its intended <br /> purpose, but which may result from reasonably foreseeable human behaviour or interaction with other systems, <br /> including other AI systems;<br /> (14) ‘safety component’ means a component of a product or of an AI system which fulfils a safety function for that product <br /> or AI system, or the failure or malfunctioning of which endangers the health and safety of persons or property;<br /> (15) ‘instructions for use’ means the information provided by the provider to inform the deployer of, in particular, an AI <br /> system’s intended purpose and proper use;<br /> (16) ‘recall of an AI system’ means any measure aiming to achieve the return to the provider or taking out of service or <br /> disabling the use of an AI system made available to deployers;<br /> (17) ‘withdrawal of an AI system’ means any measure aiming to prevent an AI system in the supply chain being made <br /> available on the market;<br /> (18) ‘performance of an AI system’ means the ability of an AI system to achieve its intended purpose;<br /> (19) ‘notifying authority’ means the national authority responsible for setting up and carrying out the necessary procedures <br /> for the assessment, designation and notification of conformity assessment bodies and for their monitoring;<br /> (20) ‘conformity assessment’ means the process of demonstrating whether the requirements set out in Chapter III, Section 2 <br /> relating to a high-risk AI system have been fulfilled;<br /> (21) ‘conformity assessment body’ means a body that performs third-party conformity assessment activities, including <br /> testing, certification and inspection;<br /> (22) ‘notified body’ means a conformity assessment body notified in accordance with this Regulation and other relevant <br /> Union harmonisation legislation;<br /> (23) ‘substantial modification’ means a change to an AI system after its placing on the market or putting into service which <br /> is not foreseen or planned in the initial conformity assessment carried out by the provider and as a result of which the </p>
Show original text

This text defines 6 key terms related to AI systems and regulations:

  1. Substantial Modification: Any change made to an AI system after it is sold or used that was not planned in the original safety check. This includes changes that affect whether the system meets required standards or change what it was designed to do.

  2. CE Marking: A label that providers put on AI systems to show they meet all required safety and quality standards.

  3. Post-Market Monitoring System: The process providers use to collect information about how their AI systems perform after being sold or put into use. This helps them identify problems and take corrective actions if needed.

  4. Market Surveillance Authority: The government agency responsible for checking that AI systems follow regulations.

  5. Harmonised Standard: A technical standard that has been officially recognized and agreed upon across the EU.

  6. Common Specification: A set of technical guidelines that explains how to meet certain requirements under this regulation.

  7. Training Data: Information used to teach an AI system by adjusting its internal settings.

  8. Validation Data: Information used to test and evaluate how well an AI system works.

<p>23) ‘substantial modification’ means a change to an AI system after its placing on the market or putting into service which <br /> is not foreseen or planned in the initial conformity assessment carried out by the provider and as a result of which the <br /> compliance of the AI system with the requirements set out in Chapter III, Section 2 is affected or results in <br /> a modification to the intended purpose for which the AI system has been assessed;<br /> (24) ‘CE marking’ means a marking by which a provider indicates that an AI system is in conformity with the requirements <br /> set out in Chapter III, Section 2 and other applicable Union harmonisation legislation providing for its affixing;<br /> (25) ‘post-market monitoring system’ means all activities carried out by providers of AI systems to collect and review <br /> experience gained from the use of AI systems they place on the market or put into service for the purpose of <br /> identifying any need to immediately apply any necessary corrective or preventive actions;<br /> (26) ‘market surveillance authority’ means the national authority carrying out the activities and taking the measures <br /> pursuant to Regulation (EU) 2019/1020;<br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 47/144</p> <p>(27) ‘harmonised standard’ means a harmonised standard as defined in Article 2(1), point (c), of Regulation (EU) <br /> No 1025/2012;<br /> (28) ‘common specification’ means a set of technical specifications as defined in Article 2, point (4) of Regulation (EU) <br /> No 1025/2012, providing means to comply with certain requirements established under this Regulation;<br /> (29) ‘training data’ means data used for training an AI system through fitting its learnable parameters;<br /> (30) ‘validation data’ means data used for providing an evaluation</p>
Show original text

This section defines key terms used in AI regulations:

Training data: Information used to teach an AI system by adjusting its learnable settings.

Validation data: Information used to test a trained AI system and adjust its non-learnable settings and learning process to avoid problems like underfitting or overfitting.

Validation data set: A separate collection of data or a portion of the training data, which can be divided in fixed or flexible ways.

Testing data: Information used to independently evaluate an AI system and verify it works as expected before being released to the market or put into use.

Input data: Information given to or collected directly by an AI system that it uses to produce results.

Biometric data: Personal information created by analyzing a person's physical, physiological, or behavioral characteristics, such as facial images or fingerprints.

Biometric identification: Using automated technology to recognize and identify a person by comparing their biometric data against stored records in a database.

Biometric verification: Using automated technology to confirm a person's identity by comparing their biometric data to previously recorded biometric data (including authentication).

Special categories of personal data: Sensitive personal information as defined in EU data protection regulations (EU 2016/679, EU 2016/680, and EU 2018/1725).

<p>2, providing means to comply with certain requirements established under this Regulation;<br /> (29) ‘training data’ means data used for training an AI system through fitting its learnable parameters;<br /> (30) ‘validation data’ means data used for providing an evaluation of the trained AI system and for tuning its non-learnable <br /> parameters and its learning process in order, inter alia, to prevent underfitting or overfitting;<br /> (31) ‘validation data set’ means a separate data set or part of the training data set, either as a fixed or variable split;<br /> (32) ‘testing data’ means data used for providing an independent evaluation of the AI system in order to confirm the <br /> expected performance of that system before its placing on the market or putting into service;<br /> (33) ‘input data’ means data provided to or directly acquired by an AI system on the basis of which the system produces an <br /> output;<br /> (34) ‘biometric data’ means personal data resulting from specific technical processing relating to the physical, physiological <br /> or behavioural characteristics of a natural person, such as facial images or dactyloscopic data;<br /> (35) ‘biometric identification’ means the automated recognition of physical, physiological, behavioural, or psychological <br /> human features for the purpose of establishing the identity of a natural person by comparing biometric data of that <br /> individual to biometric data of individuals stored in a database;<br /> (36) ‘biometric verification’ means the automated, one-to-one verification, including authentication, of the identity of <br /> natural persons by comparing their biometric data to previously provided biometric data;<br /> (37) ‘special categories of personal data’ means the categories of personal data referred to in Article 9(1) of Regulation (EU) <br /> 2016/679, Article 10 of Directive (EU) 2016/680 and Article 10(1) of Regulation (EU) 2018/1725;<br /> (38) ‘sensitive operational</p>
Show original text

This text defines key terms related to AI systems and data protection regulations:

  • Sensitive operational data: Information about preventing, detecting, investigating, or prosecuting crimes. Sharing this data could harm criminal investigations.

  • Emotion recognition system: An AI tool that identifies or guesses what emotions or intentions people have based on their biological measurements (like facial features).

  • Biometric categorisation system: An AI tool that sorts people into groups based on their biological measurements, unless it's a minor part of another service and technically necessary.

  • Remote biometric identification system: An AI tool that identifies people without their knowledge or participation, usually from a distance, by comparing their biological measurements to a database.

  • Real-time remote biometric identification system: A remote identification system that captures, compares, and identifies people almost instantly or with very short delays to prevent people from avoiding detection.

  • Post-remote biometric identification system: A remote identification system that is not real-time (meaning there is a delay between capturing data and identification).

  • Publicly accessible space: Any physical location, whether publicly or privately owned, that the general public can enter, even if there are some access restrictions or capacity limits.

<p>of Regulation (EU) <br /> 2016/679, Article 10 of Directive (EU) 2016/680 and Article 10(1) of Regulation (EU) 2018/1725;<br /> (38) ‘sensitive operational data’ means operational data related to activities of prevention, detection, investigation or <br /> prosecution of criminal offences, the disclosure of which could jeopardise the integrity of criminal proceedings;<br /> (39) ‘emotion recognition system’ means an AI system for the purpose of identifying or inferring emotions or intentions of <br /> natural persons on the basis of their biometric data;<br /> (40) ‘biometric categorisation system’ means an AI system for the purpose of assigning natural persons to specific <br /> categories on the basis of their biometric data, unless it is ancillary to another commercial service and strictly <br /> necessary for objective technical reasons;<br /> (41) ‘remote biometric identification system’ means an AI system for the purpose of identifying natural persons, without <br /> their active involvement, typically at a distance through the comparison of a person’s biometric data with the <br /> biometric data contained in a reference database;<br /> (42) ‘real-time remote biometric identification system’ means a remote biometric identification system, whereby the <br /> capturing of biometric data, the comparison and the identification all occur without a significant delay, comprising <br /> not only instant identification, but also limited short delays in order to avoid circumvention;<br /> (43) ‘post-remote biometric identification system’ means a remote biometric identification system other than a real-time <br /> remote biometric identification system;<br /> (44) ‘publicly accessible space’ means any publicly or privately owned physical place accessible to an undetermined number <br /> of natural persons, regardless of whether certain conditions for access may apply, and regardless of the potential <br /> capacity restrictions;<br /> EN<br /> OJ L, 12.7.</p>
Show original text

A 'public place' is any location owned by the public or private sector that is open to many people, even if there are rules about who can enter or limits on how many people can be there. A 'law enforcement authority' is any government agency or organization that prevents, investigates, detects, or prosecutes crimes, or carries out criminal punishments and protects public safety. 'Law enforcement' refers to the work these authorities do to prevent and investigate crimes, prosecute offenders, and protect the public. The 'AI Office' is the European Commission's department responsible for overseeing artificial intelligence systems and making sure they follow the rules, as established by the Commission on January 24, 2024. A 'national competent authority' is either a notifying authority or a market surveillance authority. For AI systems used by EU institutions, the European Data Protection Supervisor takes on this role instead. A 'serious incident' is a problem or failure of an AI system that causes direct or indirect harm.

<p>’ means any publicly or privately owned physical place accessible to an undetermined number <br /> of natural persons, regardless of whether certain conditions for access may apply, and regardless of the potential <br /> capacity restrictions;<br /> EN<br /> OJ L, 12.7.2024<br /> 48/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>(45) ‘law enforcement authority’ means:<br /> (a) any public authority competent for the prevention, investigation, detection or prosecution of criminal offences or <br /> the execution of criminal penalties, including the safeguarding against and the prevention of threats to public <br /> security; or<br /> (b) any other body or entity entrusted by Member State law to exercise public authority and public powers for the <br /> purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of <br /> criminal penalties, including the safeguarding against and the prevention of threats to public security;<br /> (46) ‘law enforcement’ means activities carried out by law enforcement authorities or on their behalf for the prevention, <br /> investigation, detection or prosecution of criminal offences or the execution of criminal penalties, including <br /> safeguarding against and preventing threats to public security;<br /> (47) ‘AI Office’ means the Commission’s function of contributing to the implementation, monitoring and supervision of AI <br /> systems and general-purpose AI models, and AI governance, provided for in Commission Decision of 24 January <br /> 2024; references in this Regulation to the AI Office shall be construed as references to the Commission;<br /> (48) ‘national competent authority’ means a notifying authority or a market surveillance authority; as regards AI systems <br /> put into service or used by Union institutions, agencies, offices and bodies, references to national competent <br /> authorities or market surveillance authorities in this Regulation shall be construed as references to the European Data <br /> Protection Supervisor;<br /> (49) ‘serious incident’ means an incident or malfunctioning of an AI system that directly or indirectly leads to any of the </p>
Show original text

The European Data Protection Supervisor is the authority responsible for data protection oversight. A 'serious incident' occurs when an AI system causes significant harm, including: death or serious injury to a person; major damage to critical infrastructure; violation of EU laws protecting fundamental rights; or serious damage to property or the environment. 'Personal data' refers to information about individuals as defined in EU Regulation 2016/679. 'Non-personal data' is any other type of data. 'Profiling' means analyzing personal data to create patterns about individuals, as defined in the same regulation. A 'real-world testing plan' is a document outlining how, where, when, and with whom an AI system will be tested outside controlled environments. A 'sandbox plan' is an agreement between an AI provider and a government authority describing the goals, rules, timeline, and methods for testing a new AI system. An 'AI regulatory sandbox' is a controlled testing environment created by authorities where companies can develop and test innovative AI systems under supervision for a limited period. 'AI literacy' refers to the knowledge and skills that AI providers, users, and affected people need to understand their rights and responsibilities under AI regulations and make informed decisions.

<p>authorities or market surveillance authorities in this Regulation shall be construed as references to the European Data <br /> Protection Supervisor;<br /> (49) ‘serious incident’ means an incident or malfunctioning of an AI system that directly or indirectly leads to any of the <br /> following:<br /> (a) the death of a person, or serious harm to a person’s health;<br /> (b) a serious and irreversible disruption of the management or operation of critical infrastructure;<br /> (c) the infringement of obligations under Union law intended to protect fundamental rights;<br /> (d) serious harm to property or the environment;<br /> (50) ‘personal data’ means personal data as defined in Article 4, point (1), of Regulation (EU) 2016/679;<br /> (51) ‘non-personal data’ means data other than personal data as defined in Article 4, point (1), of Regulation (EU) <br /> 2016/679;<br /> (52) ‘profiling’ means profiling as defined in Article 4, point (4), of Regulation (EU) 2016/679;<br /> (53) ‘real-world testing plan’ means a document that describes the objectives, methodology, geographical, population and <br /> temporal scope, monitoring, organisation and conduct of testing in real-world conditions;<br /> (54) ‘sandbox plan’ means a document agreed between the participating provider and the competent authority describing <br /> the objectives, conditions, timeframe, methodology and requirements for the activities carried out within the sandbox;<br /> (55) ‘AI regulatory sandbox’ means a controlled framework set up by a competent authority which offers providers or <br /> prospective providers of AI systems the possibility to develop, train, validate and test, where appropriate in real-world <br /> conditions, an innovative AI system, pursuant to a sandbox plan for a limited time under regulatory supervision;<br /> (56) ‘AI literacy’ means skills, knowledge and understanding that allow providers, deployers and affected persons, taking <br /> into account their respective rights and obligations in the context of this Regulation, to make an informed</p>
Show original text

This text defines six key terms related to AI regulation:

  1. AI Literacy: The skills, knowledge, and understanding that help providers, deployers, and affected people make informed decisions about using AI systems. It includes awareness of AI's benefits, risks, and potential harms.

  2. Testing in Real-World Conditions: Temporary testing of an AI system in actual environments (not labs or simulations) to gather reliable data and verify the system meets regulatory requirements. This testing does not count as selling or using the AI system commercially, as long as specific conditions are met.

  3. Subject: A person who participates in real-world AI testing.

  4. Informed Consent: A person's free, clear, and voluntary agreement to participate in testing after being told all relevant information about what the testing involves.

  5. Deep Fake: AI-created or altered images, audio, or video that look like real people, objects, places, or events but are actually fake and designed to appear authentic.

  6. Widespread Infringement: Any action or failure to act that breaks EU laws protecting people's interests and harms or could harm the collective interests of many individuals.

<p>a limited time under regulatory supervision;<br /> (56) ‘AI literacy’ means skills, knowledge and understanding that allow providers, deployers and affected persons, taking <br /> into account their respective rights and obligations in the context of this Regulation, to make an informed deployment <br /> of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause;<br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 49/144</p> <p>(57) ‘testing in real-world conditions’ means the temporary testing of an AI system for its intended purpose in real-world <br /> conditions outside a laboratory or otherwise simulated environment, with a view to gathering reliable and robust data <br /> and to assessing and verifying the conformity of the AI system with the requirements of this Regulation and it does <br /> not qualify as placing the AI system on the market or putting it into service within the meaning of this Regulation, <br /> provided that all the conditions laid down in Article 57 or 60 are fulfilled;<br /> (58) ‘subject’, for the purpose of real-world testing, means a natural person who participates in testing in real-world <br /> conditions;<br /> (59) ‘informed consent’ means a subject’s freely given, specific, unambiguous and voluntary expression of his or her <br /> willingness to participate in a particular testing in real-world conditions, after having been informed of all aspects of <br /> the testing that are relevant to the subject’s decision to participate;<br /> (60) ‘deep fake’ means AI-generated or manipulated image, audio or video content that resembles existing persons, objects, <br /> places, entities or events and would falsely appear to a person to be authentic or truthful;<br /> (61) ‘widespread infringement’ means any act or omission contrary to Union law protecting the interest of individuals, <br /> which:<br /> (a) has harmed or is likely to harm the collective interests of individuals residing in at</p>
Show original text

This text defines several important terms:

  1. 'Widespread infringement' - An illegal action or failure to act that breaks EU laws protecting people's interests. It's considered widespread if it harms or could harm the collective interests of people in at least two EU countries (other than where the action occurred or where the company is based), OR if the same illegal practice harms people's interests in at least three EU countries at the same time by the same operator.

  2. 'Critical infrastructure' - Infrastructure that is defined as critical in EU Directive 2022/2557.

  3. 'General-purpose AI model' - An artificial intelligence system trained on large amounts of data that can perform many different tasks well. It can be used in various applications and systems. This does not include AI models still being tested before release to the market.

  4. 'High-impact capabilities' - AI abilities that match or exceed the most advanced general-purpose AI models available.

  5. 'Systemic risk' - A risk specific to advanced AI models that could significantly affect the EU market and cause serious harm to public health, safety, security, or fundamental rights.

<p>to be authentic or truthful;<br /> (61) ‘widespread infringement’ means any act or omission contrary to Union law protecting the interest of individuals, <br /> which:<br /> (a) has harmed or is likely to harm the collective interests of individuals residing in at least two Member States other <br /> than the Member State in which:<br /> (i) the act or omission originated or took place;<br /> (ii) the provider concerned, or, where applicable, its authorised representative is located or established; or<br /> (iii) the deployer is established, when the infringement is committed by the deployer;<br /> (b) has caused, causes or is likely to cause harm to the collective interests of individuals and has common features, <br /> including the same unlawful practice or the same interest being infringed, and is occurring concurrently, <br /> committed by the same operator, in at least three Member States;<br /> (62) ‘critical infrastructure’ means critical infrastructure as defined in Article 2, point (4), of Directive (EU) 2022/2557;<br /> (63) ‘general-purpose AI model’ means an AI model, including where such an AI model is trained with a large amount of <br /> data using self-supervision at scale, that displays significant generality and is capable of competently performing <br /> a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into <br /> a variety of downstream systems or applications, except AI models that are used for research, development or <br /> prototyping activities before they are placed on the market;<br /> (64) ‘high-impact capabilities’ means capabilities that match or exceed the capabilities recorded in the most advanced <br /> general-purpose AI models;<br /> (65) ‘systemic risk’ means a risk that is specific to the high-impact capabilities of general-purpose AI models, having <br /> a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects <br /> on public health, safety, public security, fundamental rights, or the</p>
Show original text

High-impact general-purpose AI models can significantly affect the EU market and may cause serious harm to public health, safety, security, fundamental rights, or society overall. These risks can spread widely across the entire supply chain.

A general-purpose AI system is an AI system built on a general-purpose AI model that can be used for many different purposes. It can be used directly or combined with other AI systems.

A floating-point operation is a mathematical calculation involving floating-point numbers—a type of number that computers represent using a fixed-precision integer multiplied by a fixed-base integer exponent.

A downstream provider is any company that creates or offers an AI system (including general-purpose AI systems) by incorporating an AI model. This model may be their own or obtained from another company through a contract.

According to Article 4 on AI literacy, companies that provide or use AI systems must ensure their employees and anyone operating these systems on their behalf have adequate knowledge of AI. This training should match the person's technical skills, experience, education, and the specific way the AI system will be used, including who it will affect.

<p>to the high-impact capabilities of general-purpose AI models, having <br /> a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects <br /> on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale <br /> across the value chain;<br /> (66) ‘general-purpose AI system’ means an AI system which is based on a general-purpose AI model and which has the <br /> capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems;<br /> (67) ‘floating-point operation’ means any mathematical operation or assignment involving floating-point numbers, which <br /> are a subset of the real numbers typically represented on computers by an integer of fixed precision scaled by an <br /> integer exponent of a fixed base;<br /> (68) ‘downstream provider’ means a provider of an AI system, including a general-purpose AI system, which integrates an <br /> AI model, regardless of whether the AI model is provided by themselves and vertically integrated or provided by <br /> another entity based on contractual relations.<br /> EN<br /> OJ L, 12.7.2024<br /> 50/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>Article 4<br /> AI literacy<br /> Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of <br /> their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their <br /> technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering <br /> the persons or groups of persons on whom the AI systems are to be used.<br /> CHAPTER II<br /> PROHIBITED AI PRACTICES<br /> Article 5<br /> Prohibited AI practices<br /> 1.</p>
Show original text

AI systems are prohibited from being sold, used, or put into service in the following ways:

(a) Using hidden techniques or manipulative tactics that bypass a person's awareness to distort their behavior and prevent them from making informed decisions. This includes causing someone to make a choice they would not normally make, resulting in significant harm to them or others.

(b) Exploiting vulnerable people or groups—such as children, people with disabilities, or those in difficult economic situations—by manipulating their behavior in ways that cause significant harm to them or others.

(c) Using AI to rate or rank people or groups based on their social behavior or personal characteristics over time (called a 'social score'). This is prohibited when the score leads to unfair treatment of people in situations unrelated to where the original data came from, or when the unfair treatment is unjustified or disproportionate.

<p>AI systems are to be used in, and considering <br /> the persons or groups of persons on whom the AI systems are to be used.<br /> CHAPTER II<br /> PROHIBITED AI PRACTICES<br /> Article 5<br /> Prohibited AI practices<br /> 1.<br /> The following AI practices shall be prohibited:<br /> (a) the placing on the market, the putting into service or the use of an AI system that deploys subliminal techniques beyond <br /> a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective, or the effect of <br /> materially distorting the behaviour of a person or a group of persons by appreciably impairing their ability to make an <br /> informed decision, thereby causing them to take a decision that they would not have otherwise taken in a manner that <br /> causes or is reasonably likely to cause that person, another person or group of persons significant harm;<br /> (b) the placing on the market, the putting into service or the use of an AI system that exploits any of the vulnerabilities of <br /> a natural person or a specific group of persons due to their age, disability or a specific social or economic situation, with <br /> the objective, or the effect, of materially distorting the behaviour of that person or a person belonging to that group in <br /> a manner that causes or is reasonably likely to cause that person or another person significant harm;<br /> (c) the placing on the market, the putting into service or the use of AI systems for the evaluation or classification of natural <br /> persons or groups of persons over a certain period of time based on their social behaviour or known, inferred or <br /> predicted personal or personality characteristics, with the social score leading to either or both of the following:<br /> (i) detrimental or unfavourable treatment of certain natural persons or groups of persons in social contexts that are <br /> unrelated to the contexts in which the data was originally generated or collected;<br /> (ii) detrimental or unfavourable treatment of certain natural persons or groups of persons that is unjustified or <br /> dispro</p>
Show original text

This text outlines prohibited uses of artificial intelligence systems:

(i) Using data in ways unrelated to its original purpose or context.

(ii) Treating certain people or groups unfairly in a way that is unjustified or excessive compared to their actions.

(d) AI systems cannot be used to predict whether someone will commit a crime based only on their personality traits or profile. However, AI can assist humans in assessing whether someone was involved in a crime if that assessment is based on concrete, verifiable facts directly connected to the crime.

(e) AI systems cannot create or expand facial recognition databases by collecting facial images from the internet or security cameras without a specific target.

(f) AI systems cannot be used to detect people's emotions in workplaces or schools, except when used for medical or safety purposes.

(g) Biometric systems cannot categorize people based on their biometric data to determine or guess their race, political views, union membership, religious beliefs, sexual life, or sexual orientation.

<p>persons or groups of persons in social contexts that are <br /> unrelated to the contexts in which the data was originally generated or collected;<br /> (ii) detrimental or unfavourable treatment of certain natural persons or groups of persons that is unjustified or <br /> disproportionate to their social behaviour or its gravity;<br /> (d) the placing on the market, the putting into service for this specific purpose, or the use of an AI system for making risk <br /> assessments of natural persons in order to assess or predict the risk of a natural person committing a criminal offence, <br /> based solely on the profiling of a natural person or on assessing their personality traits and characteristics; this <br /> prohibition shall not apply to AI systems used to support the human assessment of the involvement of a person in <br /> a criminal activity, which is already based on objective and verifiable facts directly linked to a criminal activity;<br /> (e) the placing on the market, the putting into service for this specific purpose, or the use of AI systems that create or <br /> expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage;<br /> (f) the placing on the market, the putting into service for this specific purpose, or the use of AI systems to infer emotions <br /> of a natural person in the areas of workplace and education institutions, except where the use of the AI system is <br /> intended to be put in place or into the market for medical or safety reasons;<br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 51/144</p> <p>(g) the placing on the market, the putting into service for this specific purpose, or the use of biometric categorisation <br /> systems that categorise individually natural persons based on their biometric data to deduce or infer their race, political <br /> opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation; this prohibition does <br /> not</p>
Show original text

Systems that use biometric data (like facial recognition) to categorize people based on race, political views, union membership, religion, beliefs, or sexual orientation are prohibited. However, this ban does not apply to organizing or filtering biometric datasets (such as photos) that were legally obtained, or to biometric data used in law enforcement.

Law enforcement agencies cannot use real-time biometric identification systems in public spaces unless it is absolutely necessary for one of these specific purposes:

  1. Finding victims of kidnapping, human trafficking, or sexual exploitation, or locating missing persons
  2. Preventing an immediate and serious threat to someone's life, safety, or a terrorist attack
  3. Finding and identifying a person suspected of committing a serious crime (one listed in Annex II that carries a prison sentence of at least four years) for investigation, prosecution, or punishment

These rules do not affect other uses of biometric data outside of law enforcement, which are covered by separate regulations.

<p>ometric categorisation <br /> systems that categorise individually natural persons based on their biometric data to deduce or infer their race, political <br /> opinions, trade union membership, religious or philosophical beliefs, sex life or sexual orientation; this prohibition does <br /> not cover any labelling or filtering of lawfully acquired biometric datasets, such as images, based on biometric data or <br /> categorizing of biometric data in the area of law enforcement;<br /> (h) the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law <br /> enforcement, unless and in so far as such use is strictly necessary for one of the following objectives:<br /> (i) the targeted search for specific victims of abduction, trafficking in human beings or sexual exploitation of human <br /> beings, as well as the search for missing persons;<br /> (ii) the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or <br /> a genuine and present or genuine and foreseeable threat of a terrorist attack;<br /> (iii) the localisation or identification of a person suspected of having committed a criminal offence, for the purpose of <br /> conducting a criminal investigation or prosecution or executing a criminal penalty for offences referred to in <br /> Annex II and punishable in the Member State concerned by a custodial sentence or a detention order for <br /> a maximum period of at least four years.<br /> Point (h) of the first subparagraph is without prejudice to Article 9 of Regulation (EU) 2016/679 for the processing of <br /> biometric data for purposes other than law enforcement.<br /> 2.</p>
Show original text

Real-time facial recognition systems can be used by law enforcement in public spaces, but only to identify specific individuals who are being targeted. Before using these systems, authorities must consider: (1) how serious the situation is and what harm could occur without the system, and (2) how the system might affect people's rights and freedoms. The use must follow strict rules set by national law, including limits on when, where, and to whom the system applies. Law enforcement must complete a rights impact assessment (as described in Article 27) and register the system in an EU database (as described in Article 49) before using it. In urgent situations, authorities can start using the system before registering it, but they must complete the registration as soon as possible. These rules do not affect other uses of biometric data that are covered by EU Regulation 2016/679.

<p>maximum period of at least four years.<br /> Point (h) of the first subparagraph is without prejudice to Article 9 of Regulation (EU) 2016/679 for the processing of <br /> biometric data for purposes other than law enforcement.<br /> 2.<br /> The use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law <br /> enforcement for any of the objectives referred to in paragraph 1, first subparagraph, point (h), shall be deployed for the <br /> purposes set out in that point only to confirm the identity of the specifically targeted individual, and it shall take into <br /> account the following elements:<br /> (a) the nature of the situation giving rise to the possible use, in particular the seriousness, probability and scale of the harm <br /> that would be caused if the system were not used;<br /> (b) the consequences of the use of the system for the rights and freedoms of all persons concerned, in particular the <br /> seriousness, probability and scale of those consequences.<br /> In addition, the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of <br /> law enforcement for any of the objectives referred to in paragraph 1, first subparagraph, point (h), of this Article shall <br /> comply with necessary and proportionate safeguards and conditions in relation to the use in accordance with the national <br /> law authorising the use thereof, in particular as regards the temporal, geographic and personal limitations. The use of the <br /> ‘real-time’ remote biometric identification system in publicly accessible spaces shall be authorised only if the law <br /> enforcement authority has completed a fundamental rights impact assessment as provided for in Article 27 and has <br /> registered the system in the EU database according to Article 49. However, in duly justified cases of urgency, the use of such <br /> systems may be commenced without the registration in the EU database, provided that such registration is completed <br /> without undue delay.<br /> 3.</p>
Show original text

Real-time biometric identification systems used by law enforcement in public spaces must be registered in the EU database under Article 49. In urgent situations, these systems can be used before registration, but registration must be completed promptly afterward. Before using such a system, law enforcement must obtain prior approval from a court or an independent administrative authority in the country where it will be used. This approval is based on a written request and follows national rules. In genuine emergencies, use can begin without prior approval, but authorization must be requested within 24 hours. If authorization is denied, the system must stop immediately and all collected data and results must be deleted. The court or administrative authority will only grant approval if it has clear evidence that the system is necessary and proportionate to achieve the specific law enforcement objectives listed in the request, and that its use is limited to only what is strictly necessary in terms of time period, geographic area, and number of people affected.

<p>system in the EU database according to Article 49. However, in duly justified cases of urgency, the use of such <br /> systems may be commenced without the registration in the EU database, provided that such registration is completed <br /> without undue delay.<br /> 3.<br /> For the purposes of paragraph 1, first subparagraph, point (h) and paragraph 2, each use for the purposes of law <br /> enforcement of a ‘real-time’ remote biometric identification system in publicly accessible spaces shall be subject to a prior <br /> authorisation granted by a judicial authority or an independent administrative authority whose decision is binding of the <br /> Member State in which the use is to take place, issued upon a reasoned request and in accordance with the detailed rules of <br /> national law referred to in paragraph 5. However, in a duly justified situation of urgency, the use of such system may be <br /> commenced without an authorisation provided that such authorisation is requested without undue delay, at the latest <br /> within 24 hours. If such authorisation is rejected, the use shall be stopped with immediate effect and all the data, as well as <br /> the results and outputs of that use shall be immediately discarded and deleted.<br /> The competent judicial authority or an independent administrative authority whose decision is binding shall grant the <br /> authorisation only where it is satisfied, on the basis of objective evidence or clear indications presented to it, that the use of <br /> the ‘real-time’ remote biometric identification system concerned is necessary for, and proportionate to, achieving one of the <br /> EN<br /> OJ L, 12.7.2024<br /> 52/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>objectives specified in paragraph 1, first subparagraph, point (h), as identified in the request and, in particular, remains <br /> limited to what is strictly necessary concerning the period of time as well as the geographic and personal scope.</p>
Show original text

Real-time biometric identification systems used by law enforcement in public spaces must be limited to what is absolutely necessary in terms of time period, geographic area, and the people involved. No decision that negatively affects a person can be based solely on this system's results. Each use of these systems must be reported to the market surveillance authority and national data protection authority following national rules. The report must include minimum required information but cannot contain sensitive operational details. Member States can choose to allow law enforcement to use these systems in public spaces under specific conditions and limits. Each Member State must create national laws that detail how to request, approve, and use these systems, as well as how to supervise and report on them. These laws must specify which objectives and which types of crimes allow authorities to use these systems. Member States must inform the Commission of these rules within 30 days of adopting them.

<p>/oj</p> <p>objectives specified in paragraph 1, first subparagraph, point (h), as identified in the request and, in particular, remains <br /> limited to what is strictly necessary concerning the period of time as well as the geographic and personal scope. In deciding <br /> on the request, that authority shall take into account the elements referred to in paragraph 2. No decision that produces an <br /> adverse legal effect on a person may be taken based solely on the output of the ‘real-time’ remote biometric identification <br /> system.<br /> 4.<br /> Without prejudice to paragraph 3, each use of a ‘real-time’ remote biometric identification system in publicly <br /> accessible spaces for law enforcement purposes shall be notified to the relevant market surveillance authority and the <br /> national data protection authority in accordance with the national rules referred to in paragraph 5. The notification shall, as <br /> a minimum, contain the information specified under paragraph 6 and shall not include sensitive operational data.<br /> 5.<br /> A Member State may decide to provide for the possibility to fully or partially authorise the use of ‘real-time’ remote <br /> biometric identification systems in publicly accessible spaces for the purposes of law enforcement within the limits and <br /> under the conditions listed in paragraph 1, first subparagraph, point (h), and paragraphs 2 and 3. Member States concerned <br /> shall lay down in their national law the necessary detailed rules for the request, issuance and exercise of, as well as <br /> supervision and reporting relating to, the authorisations referred to in paragraph 3. Those rules shall also specify in respect <br /> of which of the objectives listed in paragraph 1, first subparagraph, point (h), including which of the criminal offences <br /> referred to in point (h)(iii) thereof, the competent authorities may be authorised to use those systems for the purposes of <br /> law enforcement. Member States shall notify those rules to the Commission at the latest 30 days following the adoption <br /> thereof.</p>
Show original text

Competent authorities may be allowed to use biometric identification systems for law enforcement purposes. Member States must inform the Commission of these rules within 30 days of adopting them. Member States can also create stricter laws about using remote biometric identification systems if they choose.

National market surveillance authorities and data protection authorities in Member States that use real-time remote biometric identification systems in public spaces for law enforcement must submit yearly reports to the Commission. The Commission will provide a template for these reports that includes information about authorization decisions made by courts or independent administrative bodies and their outcomes.

The Commission will publish yearly reports on the use of real-time remote biometric identification systems in public spaces for law enforcement purposes. These reports will be based on combined data from all Member States. However, sensitive operational information about law enforcement activities will not be included in these public reports.

This article does not override other prohibitions that exist under European Union law, particularly when an AI system violates other EU regulations.

The following section covers high-risk AI systems and the rules for classifying them.

<p>to in point (h)(iii) thereof, the competent authorities may be authorised to use those systems for the purposes of <br /> law enforcement. Member States shall notify those rules to the Commission at the latest 30 days following the adoption <br /> thereof. Member States may introduce, in accordance with Union law, more restrictive laws on the use of remote biometric <br /> identification systems.<br /> 6.<br /> National market surveillance authorities and the national data protection authorities of Member States that have been <br /> notified of the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for law enforcement <br /> purposes pursuant to paragraph 4 shall submit to the Commission annual reports on such use. For that purpose, the <br /> Commission shall provide Member States and national market surveillance and data protection authorities with a template, <br /> including information on the number of the decisions taken by competent judicial authorities or an independent <br /> administrative authority whose decision is binding upon requests for authorisations in accordance with paragraph 3 and <br /> their result.<br /> 7.<br /> The Commission shall publish annual reports on the use of real-time remote biometric identification systems in <br /> publicly accessible spaces for law enforcement purposes, based on aggregated data in Member States on the basis of the <br /> annual reports referred to in paragraph 6. Those annual reports shall not include sensitive operational data of the related <br /> law enforcement activities.<br /> 8.<br /> This Article shall not affect the prohibitions that apply where an AI practice infringes other Union law.<br /> CHAPTER III<br /> HIGH-RISK AI SYSTEMS<br /> SECTION 1<br /> Classification of AI systems as high-risk<br /> Article 6<br /> Classification rules for high-risk AI systems<br /> 1.</p>
Show original text

This section explains which AI systems are classified as high-risk under EU law. An AI system is considered high-risk if it meets both of these conditions: (1) it is designed to be a safety component of a product or is itself a product covered by EU safety laws listed in Annex I, and (2) that product or AI system must be checked and approved by a third party before it can be sold or used, as required by EU safety laws in Annex I. Additionally, any AI systems listed in Annex III are automatically considered high-risk. However, an AI system from Annex III does not need to be classified as high-risk if it does not create a significant risk of harm to people's health, safety, or fundamental rights, and does not significantly affect important decisions.

<p>prohibitions that apply where an AI practice infringes other Union law.<br /> CHAPTER III<br /> HIGH-RISK AI SYSTEMS<br /> SECTION 1<br /> Classification of AI systems as high-risk<br /> Article 6<br /> Classification rules for high-risk AI systems<br /> 1.<br /> Irrespective of whether an AI system is placed on the market or put into service independently of the products <br /> referred to in points (a) and (b), that AI system shall be considered to be high-risk where both of the following conditions <br /> are fulfilled:<br /> (a) the AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by <br /> the Union harmonisation legislation listed in Annex I;<br /> (b) the product whose safety component pursuant to point (a) is the AI system, or the AI system itself as a product, is <br /> required to undergo a third-party conformity assessment, with a view to the placing on the market or the putting into <br /> service of that product pursuant to the Union harmonisation legislation listed in Annex I.<br /> 2.<br /> In addition to the high-risk AI systems referred to in paragraph 1, AI systems referred to in Annex III shall be <br /> considered to be high-risk.<br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 53/144</p> <p>3.<br /> By derogation from paragraph 2, an AI system referred to in Annex III shall not be considered to be high-risk where it <br /> does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not <br /> materially influencing the outcome of decision making.</p>
Show original text

An AI system listed in Annex III is not considered high-risk if it does not cause significant harm to people's health, safety, or fundamental rights, and does not substantially affect decision-making outcomes. This applies when the AI system: (a) performs a narrow, specific task; (b) improves results from work already completed by humans; (c) identifies patterns or changes in decision-making but does not replace or influence human decisions without proper human review; or (d) prepares information for assessments related to the uses listed in Annex III. However, any AI system in Annex III that profiles people is always considered high-risk. Companies that believe their Annex III AI system is not high-risk must document this assessment before selling or using it and must register it under Article 49(2). National authorities can request to see this documentation. By February 2, 2026, the European Commission will publish guidelines explaining how to apply these rules, along with examples of AI systems that are and are not high-risk.

<p>system referred to in Annex III shall not be considered to be high-risk where it <br /> does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not <br /> materially influencing the outcome of decision making.<br /> The first subparagraph shall apply where any of the following conditions is fulfilled:<br /> (a) the AI system is intended to perform a narrow procedural task;<br /> (b) the AI system is intended to improve the result of a previously completed human activity;<br /> (c) the AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns and is <br /> not meant to replace or influence the previously completed human assessment, without proper human review; or<br /> (d) the AI system is intended to perform a preparatory task to an assessment relevant for the purposes of the use cases <br /> listed in Annex III.<br /> Notwithstanding the first subparagraph, an AI system referred to in Annex III shall always be considered to be high-risk <br /> where the AI system performs profiling of natural persons.<br /> 4.<br /> A provider who considers that an AI system referred to in Annex III is not high-risk shall document its assessment <br /> before that system is placed on the market or put into service. Such provider shall be subject to the registration obligation <br /> set out in Article 49(2). Upon request of national competent authorities, the provider shall provide the documentation of <br /> the assessment.<br /> 5.<br /> The Commission shall, after consulting the European Artificial Intelligence Board (the ‘Board’), and no later than <br /> 2 February 2026, provide guidelines specifying the practical implementation of this Article in line with Article 96 together <br /> with a comprehensive list of practical examples of use cases of AI systems that are high-risk and not high-risk.<br /> 6.</p>
Show original text

By February 2, 2026, the Commission must publish guidelines explaining how to apply this Article in practice. These guidelines should include many examples showing which AI systems are considered high-risk and which are not. The Commission has the power to update the conditions for high-risk AI systems listed in Annex III if evidence shows that certain systems in that list do not actually pose serious risks to people's health, safety, or fundamental rights. The Commission can also remove conditions from the list if evidence proves this is necessary to maintain the level of protection for health, safety, and fundamental rights. Any changes to these conditions must not reduce the overall protection level and must be consistent with other related regulations, while also considering new market and technology developments.

<p>later than <br /> 2 February 2026, provide guidelines specifying the practical implementation of this Article in line with Article 96 together <br /> with a comprehensive list of practical examples of use cases of AI systems that are high-risk and not high-risk.<br /> 6.<br /> The Commission is empowered to adopt delegated acts in accordance with Article 97 in order to amend paragraph 3, <br /> second subparagraph, of this Article by adding new conditions to those laid down therein, or by modifying them, where <br /> there is concrete and reliable evidence of the existence of AI systems that fall under the scope of Annex III, but do not pose <br /> a significant risk of harm to the health, safety or fundamental rights of natural persons.<br /> 7.<br /> The Commission shall adopt delegated acts in accordance with Article 97 in order to amend paragraph 3, second <br /> subparagraph, of this Article by deleting any of the conditions laid down therein, where there is concrete and reliable <br /> evidence that this is necessary to maintain the level of protection of health, safety and fundamental rights provided for by <br /> this Regulation.<br /> 8.<br /> Any amendment to the conditions laid down in paragraph 3, second subparagraph, adopted in accordance with <br /> paragraphs 6 and 7 of this Article shall not decrease the overall level of protection of health, safety and fundamental rights <br /> provided for by this Regulation and shall ensure consistency with the delegated acts adopted pursuant to Article 7(1), and <br /> take account of market and technological developments.<br /> Article 7<br /> Amendments to Annex III<br /> 1.</p>
Show original text

The Commission has the power to update Annex III by adding or changing examples of high-risk AI systems. This can happen when two conditions are met: (1) the AI systems are designed to be used in areas already listed in Annex III, and (2) the AI systems could harm people's health, safety, or fundamental rights in a way that is equal to or worse than the risks posed by AI systems already classified as high-risk in Annex III. When deciding whether to add a new AI system to the list, the Commission must consider: what the AI system is meant to do, how widely it is or will be used, what type and amount of data it processes (especially sensitive personal information), how much the AI system makes decisions on its own versus allowing humans to override those decisions, and whether the AI system has already caused harm or raised serious concerns about potential harm to health, safety, or fundamental rights based on reports or complaints to authorities.

<p>safety and fundamental rights <br /> provided for by this Regulation and shall ensure consistency with the delegated acts adopted pursuant to Article 7(1), and <br /> take account of market and technological developments.<br /> Article 7<br /> Amendments to Annex III<br /> 1.<br /> The Commission is empowered to adopt delegated acts in accordance with Article 97 to amend Annex III by adding <br /> or modifying use-cases of high-risk AI systems where both of the following conditions are fulfilled:<br /> (a) the AI systems are intended to be used in any of the areas listed in Annex III;<br /> (b) the AI systems pose a risk of harm to health and safety, or an adverse impact on fundamental rights, and that risk is <br /> equivalent to, or greater than, the risk of harm or of adverse impact posed by the high-risk AI systems already referred <br /> to in Annex III.<br /> EN<br /> OJ L, 12.7.2024<br /> 54/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>2.<br /> When assessing the condition under paragraph 1, point (b), the Commission shall take into account the following <br /> criteria:<br /> (a) the intended purpose of the AI system;<br /> (b) the extent to which an AI system has been used or is likely to be used;<br /> (c) the nature and amount of the data processed and used by the AI system, in particular whether special categories of <br /> personal data are processed;<br /> (d) the extent to which the AI system acts autonomously and the possibility for a human to override a decision or <br /> recommendations that may lead to potential harm;<br /> (e) the extent to which the use of an AI system has already caused harm to health and safety, has had an adverse impact on <br /> fundamental rights or has given rise to significant concerns in relation to the likelihood of such harm or adverse impact, <br /> as demonstrated, for example, by reports or documented allegations submitted to national competent authorities or by <br /> other reports, as appropriate</p>
Show original text

When evaluating AI systems, consider the following factors: (f) How serious could the harm be and how many people could it affect, especially if certain groups would be disproportionately impacted? (g) How dependent are people on the AI system's decisions—can they realistically choose not to use it? (h) Is there a power imbalance between the people affected and those operating the AI system? This includes differences in status, authority, knowledge, wealth, social position, or age. (i) Can the AI system's harmful decisions be easily fixed or reversed? Note that decisions affecting health, safety, or basic rights cannot be considered easily reversible. (j) What benefits would the AI system bring to individuals, groups, or society, including potential safety improvements? (k) What protections does existing EU law already provide: (i) ways for people to seek remedies for AI-related risks (excluding damage claims), and (ii) ways to prevent or reduce these risks.

<p>impact on <br /> fundamental rights or has given rise to significant concerns in relation to the likelihood of such harm or adverse impact, <br /> as demonstrated, for example, by reports or documented allegations submitted to national competent authorities or by <br /> other reports, as appropriate;<br /> (f) the potential extent of such harm or such adverse impact, in particular in terms of its intensity and its ability to affect <br /> multiple persons or to disproportionately affect a particular group of persons;<br /> (g) the extent to which persons who are potentially harmed or suffer an adverse impact are dependent on the outcome <br /> produced with an AI system, in particular because for practical or legal reasons it is not reasonably possible to opt-out <br /> from that outcome;<br /> (h) the extent to which there is an imbalance of power, or the persons who are potentially harmed or suffer an adverse <br /> impact are in a vulnerable position in relation to the deployer of an AI system, in particular due to status, authority, <br /> knowledge, economic or social circumstances, or age;<br /> (i) the extent to which the outcome produced involving an AI system is easily corrigible or reversible, taking into account <br /> the technical solutions available to correct or reverse it, whereby outcomes having an adverse impact on health, safety or <br /> fundamental rights, shall not be considered to be easily corrigible or reversible;<br /> (j) the magnitude and likelihood of benefit of the deployment of the AI system for individuals, groups, or society at large, <br /> including possible improvements in product safety;<br /> (k) the extent to which existing Union law provides for:<br /> (i) effective measures of redress in relation to the risks posed by an AI system, with the exclusion of claims for <br /> damages;<br /> (ii) effective measures to prevent or substantially minimise those risks.<br /> 3.</p>
Show original text

The Commission can remove AI systems from the high-risk list if two conditions are met: (1) the system no longer poses significant risks to fundamental rights, health, or safety based on established criteria, and (2) removing it does not reduce the overall level of protection for health, safety, and fundamental rights under EU law.

High-risk AI systems must follow the requirements in this section based on their intended use and current best practices in AI technology. Providers must use a risk management system (described in Article 9) to ensure compliance.

When a product contains an AI system that is subject to both this Regulation and other EU safety laws listed in Annex I, the provider is responsible for making sure the product meets all applicable requirements under those EU laws.

<p>to which existing Union law provides for:<br /> (i) effective measures of redress in relation to the risks posed by an AI system, with the exclusion of claims for <br /> damages;<br /> (ii) effective measures to prevent or substantially minimise those risks.<br /> 3.<br /> The Commission is empowered to adopt delegated acts in accordance with Article 97 to amend the list in Annex III <br /> by removing high-risk AI systems where both of the following conditions are fulfilled:<br /> (a) the high-risk AI system concerned no longer poses any significant risks to fundamental rights, health or safety, taking <br /> into account the criteria listed in paragraph 2;<br /> (b) the deletion does not decrease the overall level of protection of health, safety and fundamental rights under Union law.<br /> SECTION 2<br /> Requirements for high-risk AI systems<br /> Article 8<br /> Compliance with the requirements<br /> 1.<br /> High-risk AI systems shall comply with the requirements laid down in this Section, taking into account their intended <br /> purpose as well as the generally acknowledged state of the art on AI and AI-related technologies. The risk management <br /> system referred to in Article 9 shall be taken into account when ensuring compliance with those requirements.<br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 55/144</p> <p>2.<br /> Where a product contains an AI system, to which the requirements of this Regulation as well as requirements of the <br /> Union harmonisation legislation listed in Section A of Annex I apply, providers shall be responsible for ensuring that their <br /> product is fully compliant with all applicable requirements under applicable Union harmonisation legislation.</p>
Show original text

Providers of high-risk AI systems must ensure their products meet all requirements of this Regulation and related EU laws listed in Annex I, Section A. To avoid repeating work and reduce burden, providers can combine their testing, reporting, and documentation for AI compliance with existing documentation already required by EU laws.

Article 9: Risk Management System

  1. Providers must create, implement, document, and maintain a risk management system for high-risk AI systems.

  2. The risk management system is an ongoing process that continues throughout the entire life of the AI system. It requires regular review and updates. It includes four main steps:

(a) Identify and analyze known and foreseeable risks that the AI system could cause to health, safety, or fundamental rights when used as intended.

(b) Estimate and evaluate risks that may occur during normal use and during reasonably foreseeable misuse.

(c) Evaluate additional risks based on data collected from the post-market monitoring system (described in Article 72).

(d) Implement appropriate and targeted measures to manage the identified risks.

<p>which the requirements of this Regulation as well as requirements of the <br /> Union harmonisation legislation listed in Section A of Annex I apply, providers shall be responsible for ensuring that their <br /> product is fully compliant with all applicable requirements under applicable Union harmonisation legislation. In ensuring <br /> the compliance of high-risk AI systems referred to in paragraph 1 with the requirements set out in this Section, and in order <br /> to ensure consistency, avoid duplication and minimise additional burdens, providers shall have a choice of integrating, as <br /> appropriate, the necessary testing and reporting processes, information and documentation they provide with regard to <br /> their product into documentation and procedures that already exist and are required under the Union harmonisation <br /> legislation listed in Section A of Annex I.<br /> Article 9<br /> Risk management system<br /> 1.<br /> A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI <br /> systems.<br /> 2.<br /> The risk management system shall be understood as a continuous iterative process planned and run throughout the <br /> entire lifecycle of a high-risk AI system, requiring regular systematic review and updating. It shall comprise the following <br /> steps:<br /> (a) the identification and analysis of the known and the reasonably foreseeable risks that the high-risk AI system can pose <br /> to health, safety or fundamental rights when the high-risk AI system is used in accordance with its intended purpose;<br /> (b) the estimation and evaluation of the risks that may emerge when the high-risk AI system is used in accordance with its <br /> intended purpose, and under conditions of reasonably foreseeable misuse;<br /> (c) the evaluation of other risks possibly arising, based on the analysis of data gathered from the post-market monitoring <br /> system referred to in Article 72;<br /> (d) the adoption of appropriate and targeted risk management measures designed to address the risks identified pursuant to <br /> point (a).<br /> 3.</p>
Show original text

Companies must manage risks in high-risk AI systems by: (1) identifying potential risks from monitoring data, and (2) putting in place targeted measures to address those risks. Only risks that can reasonably be reduced or eliminated through better system design or technical information should be addressed. When implementing risk management measures, companies should consider how different requirements work together to reduce risks effectively. All remaining risks must be acceptable. To manage risks properly, companies should: (1) eliminate or reduce identified risks through good design when technically possible, (2) add safeguards for risks that cannot be eliminated, and (3) provide necessary information and training to users. Companies should also consider the user's knowledge, experience, training, and how the system will actually be used when deciding how to reduce risks.

<p>possibly arising, based on the analysis of data gathered from the post-market monitoring <br /> system referred to in Article 72;<br /> (d) the adoption of appropriate and targeted risk management measures designed to address the risks identified pursuant to <br /> point (a).<br /> 3.<br /> The risks referred to in this Article shall concern only those which may be reasonably mitigated or eliminated through <br /> the development or design of the high-risk AI system, or the provision of adequate technical information.<br /> 4.<br /> The risk management measures referred to in paragraph 2, point (d), shall give due consideration to the effects and <br /> possible interaction resulting from the combined application of the requirements set out in this Section, with a view to <br /> minimising risks more effectively while achieving an appropriate balance in implementing the measures to fulfil those <br /> requirements.<br /> 5.<br /> The risk management measures referred to in paragraph 2, point (d), shall be such that the relevant residual risk <br /> associated with each hazard, as well as the overall residual risk of the high-risk AI systems is judged to be acceptable.<br /> In identifying the most appropriate risk management measures, the following shall be ensured:<br /> (a) elimination or reduction of risks identified and evaluated pursuant to paragraph 2 in as far as technically feasible <br /> through adequate design and development of the high-risk AI system;<br /> (b) where appropriate, implementation of adequate mitigation and control measures addressing risks that cannot be <br /> eliminated;<br /> (c) provision of information required pursuant to Article 13 and, where appropriate, training to deployers.<br /> With a view to eliminating or reducing risks related to the use of the high-risk AI system, due consideration shall be given <br /> to the technical knowledge, experience, education, the training to be expected by the deployer, and the presumable context <br /> in which the system is intended to be used.<br /> EN<br /> OJ L, 12.7.2024<br /> 56/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>6.</p>
Show original text

High-risk AI systems must be tested to find the best ways to manage risks. Testing ensures these systems work reliably for their intended purpose and meet all requirements. Testing can happen in real-world conditions and must occur throughout development, but definitely before the system is released or used. Tests should use pre-defined metrics and thresholds that match the system's intended purpose. Developers must consider whether the system could negatively affect people under 18 or other vulnerable groups. If developers already follow risk management rules under other EU laws, they can combine those processes with these requirements. High-risk AI systems that use data to train AI models must be built using training, validation, and testing data that meets specific quality standards.

<p>which the system is intended to be used.<br /> EN<br /> OJ L, 12.7.2024<br /> 56/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>6.<br /> High-risk AI systems shall be tested for the purpose of identifying the most appropriate and targeted risk management <br /> measures. Testing shall ensure that high-risk AI systems perform consistently for their intended purpose and that they are in <br /> compliance with the requirements set out in this Section.<br /> 7.<br /> Testing procedures may include testing in real-world conditions in accordance with Article 60.<br /> 8.<br /> The testing of high-risk AI systems shall be performed, as appropriate, at any time throughout the development <br /> process, and, in any event, prior to their being placed on the market or put into service. Testing shall be carried out against <br /> prior defined metrics and probabilistic thresholds that are appropriate to the intended purpose of the high-risk AI system.<br /> 9.<br /> When implementing the risk management system as provided for in paragraphs 1 to 7, providers shall give <br /> consideration to whether in view of its intended purpose the high-risk AI system is likely to have an adverse impact on <br /> persons under the age of 18 and, as appropriate, other vulnerable groups.<br /> 10.<br /> For providers of high-risk AI systems that are subject to requirements regarding internal risk management processes <br /> under other relevant provisions of Union law, the aspects provided in paragraphs 1 to 9 may be part of, or combined with, <br /> the risk management procedures established pursuant to that law.<br /> Article 10<br /> Data and data governance<br /> 1.<br /> High-risk AI systems which make use of techniques involving the training of AI models with data shall be developed <br /> on the basis of training, validation and testing data sets that meet the quality criteria referred to in paragraphs 2 to 5 <br /> whenever such data sets are used.<br /> 2.</p>
Show original text

AI models must be trained, tested, and validated using high-quality data sets that follow specific standards. These data sets require proper management practices that address: the design choices made; how data was collected and its original purpose; data preparation steps like cleaning and labeling; what the data is meant to measure; whether enough suitable data exists; potential biases that could harm people, violate rights, or cause discrimination; steps to identify and reduce these biases; and any data gaps that prevent compliance with regulations. The data sets used must be relevant, representative, accurate, and complete for their intended purpose. They should have appropriate statistical properties for the people or groups who will use the AI system. These quality standards can be met by individual data sets or by combining multiple data sets.

<p>of techniques involving the training of AI models with data shall be developed <br /> on the basis of training, validation and testing data sets that meet the quality criteria referred to in paragraphs 2 to 5 <br /> whenever such data sets are used.<br /> 2.<br /> Training, validation and testing data sets shall be subject to data governance and management practices appropriate <br /> for the intended purpose of the high-risk AI system. Those practices shall concern in particular:<br /> (a) the relevant design choices;<br /> (b) data collection processes and the origin of data, and in the case of personal data, the original purpose of the data <br /> collection;<br /> (c) relevant data-preparation processing operations, such as annotation, labelling, cleaning, updating, enrichment and <br /> aggregation;<br /> (d) the formulation of assumptions, in particular with respect to the information that the data are supposed to measure and <br /> represent;<br /> (e) an assessment of the availability, quantity and suitability of the data sets that are needed;<br /> (f) examination in view of possible biases that are likely to affect the health and safety of persons, have a negative impact <br /> on fundamental rights or lead to discrimination prohibited under Union law, especially where data outputs influence <br /> inputs for future operations;<br /> (g) appropriate measures to detect, prevent and mitigate possible biases identified according to point (f);<br /> (h) the identification of relevant data gaps or shortcomings that prevent compliance with this Regulation, and how those <br /> gaps and shortcomings can be addressed.<br /> 3.<br /> Training, validation and testing data sets shall be relevant, sufficiently representative, and to the best extent possible, <br /> free of errors and complete in view of the intended purpose. They shall have the appropriate statistical properties, including, <br /> where applicable, as regards the persons or groups of persons in relation to whom the high-risk AI system is intended to be <br /> used. Those characteristics of the data sets may be met at the level of individual data sets or at the level of a combination <br /> thereof.<br /> 4.</p>
Show original text

High-risk AI systems must use data sets that reflect the specific people or groups they will affect. The data should match the particular location, context, behavior, or function where the AI system will be used. This can be done using individual data sets or a combination of them.

In rare cases, AI providers may use sensitive personal data to detect and fix bias in high-risk AI systems. However, this is only allowed if: (1) bias detection cannot be done using other types of data like synthetic or anonymized data; (2) the sensitive data has technical restrictions on reuse and uses the best available security and privacy protections, including pseudonymization; (3) the sensitive data is strictly controlled with limited access, proper documentation, and confidentiality requirements to prevent misuse and ensure only authorized people can access it; and (4) additional conditions must be met as outlined in EU Regulations 2016/679, 2018/1725, and Directive 2016/680.

<p>persons or groups of persons in relation to whom the high-risk AI system is intended to be <br /> used. Those characteristics of the data sets may be met at the level of individual data sets or at the level of a combination <br /> thereof.<br /> 4.<br /> Data sets shall take into account, to the extent required by the intended purpose, the characteristics or elements that <br /> are particular to the specific geographical, contextual, behavioural or functional setting within which the high-risk AI <br /> system is intended to be used.<br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 57/144</p> <p>5.<br /> To the extent that it is strictly necessary for the purpose of ensuring bias detection and correction in relation to the <br /> high-risk AI systems in accordance with paragraph (2), points (f) and (g) of this Article, the providers of such systems may <br /> exceptionally process special categories of personal data, subject to appropriate safeguards for the fundamental rights and <br /> freedoms of natural persons. In addition to the provisions set out in Regulations (EU) 2016/679 and (EU) 2018/1725 and <br /> Directive (EU) 2016/680, all the following conditions must be met in order for such processing to occur:<br /> (a) the bias detection and correction cannot be effectively fulfilled by processing other data, including synthetic or <br /> anonymised data;<br /> (b) the special categories of personal data are subject to technical limitations on the re-use of the personal data, and <br /> state-of-the-art security and privacy-preserving measures, including pseudonymisation;<br /> (c) the special categories of personal data are subject to measures to ensure that the personal data processed are secured, <br /> protected, subject to suitable safeguards, including strict controls and documentation of the access, to avoid misuse and <br /> ensure that only authorised persons have access to those personal data with appropriate confidentiality obligations;<br /> (d) the</p>
Show original text

Personal data must be kept secure with strict access controls and documentation to prevent misuse. Only authorized people with confidentiality obligations can access this data. Special categories of personal data cannot be shared with other parties. This sensitive data must be deleted once any bias is corrected or when its retention period ends, whichever happens first. Organizations must document why processing special categories of personal data was necessary to detect and fix biases, and explain why other data could not be used instead, according to EU Regulations 2016/679, 2018/1725, and Directive 2016/680. For high-risk AI systems that do not involve training AI models, these rules apply only to testing data sets. Before a high-risk AI system is released or used, companies must create and maintain technical documentation showing it meets all requirements. This documentation must be clear and complete so that national authorities and notified bodies can verify compliance. It must include all elements listed in Annex IV. Small and medium-sized enterprises (SMEs) and start-ups can provide simplified versions of the technical documentation. The Commission will create a simplified form designed for small and microenterprises.

<p>ensure that the personal data processed are secured, <br /> protected, subject to suitable safeguards, including strict controls and documentation of the access, to avoid misuse and <br /> ensure that only authorised persons have access to those personal data with appropriate confidentiality obligations;<br /> (d) the special categories of personal data are not to be transmitted, transferred or otherwise accessed by other parties;<br /> (e) the special categories of personal data are deleted once the bias has been corrected or the personal data has reached the <br /> end of its retention period, whichever comes first;<br /> (f) the records of processing activities pursuant to Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) <br /> 2016/680 include the reasons why the processing of special categories of personal data was strictly necessary to detect <br /> and correct biases, and why that objective could not be achieved by processing other data.<br /> 6.<br /> For the development of high-risk AI systems not using techniques involving the training of AI models, paragraphs 2 <br /> to 5 apply only to the testing data sets.<br /> Article 11<br /> Technical documentation<br /> 1.<br /> The technical documentation of a high-risk AI system shall be drawn up before that system is placed on the market or <br /> put into service and shall be kept up-to date.<br /> The technical documentation shall be drawn up in such a way as to demonstrate that the high-risk AI system complies with <br /> the requirements set out in this Section and to provide national competent authorities and notified bodies with the <br /> necessary information in a clear and comprehensive form to assess the compliance of the AI system with those <br /> requirements. It shall contain, at a minimum, the elements set out in Annex IV. SMEs, including start-ups, may provide the <br /> elements of the technical documentation specified in Annex IV in a simplified manner. To that end, the Commission shall <br /> establish a simplified technical documentation form targeted at the needs of small and microenterprises.</p>
Show original text

Small and medium-sized businesses (SMEs), including start-ups, can submit technical documentation in a simplified format. The Commission will create a simplified form designed for small and micro businesses to use. When an SME chooses this option, it must use this official form, which notified bodies (regulatory inspectors) will accept for compliance checks.

When a high-risk AI system that is part of a product covered by EU laws is sold or used, one complete technical documentation package must be prepared. This package must include all required information plus any additional information required by those EU laws.

The Commission has the power to update the technical documentation requirements (Annex IV) as technology advances, to ensure the documentation contains all necessary information to verify the system meets the required standards.

High-risk AI systems must be designed to automatically record events (logs) throughout their entire operational lifetime.

<p>SMEs, including start-ups, may provide the <br /> elements of the technical documentation specified in Annex IV in a simplified manner. To that end, the Commission shall <br /> establish a simplified technical documentation form targeted at the needs of small and microenterprises. Where an SME, <br /> including a start-up, opts to provide the information required in Annex IV in a simplified manner, it shall use the form <br /> referred to in this paragraph. Notified bodies shall accept the form for the purposes of the conformity assessment.<br /> 2.<br /> Where a high-risk AI system related to a product covered by the Union harmonisation legislation listed in Section <br /> A of Annex I is placed on the market or put into service, a single set of technical documentation shall be drawn up <br /> containing all the information set out in paragraph 1, as well as the information required under those legal acts.<br /> 3.<br /> The Commission is empowered to adopt delegated acts in accordance with Article 97 in order to amend Annex IV, <br /> where necessary, to ensure that, in light of technical progress, the technical documentation provides all the information <br /> necessary to assess the compliance of the system with the requirements set out in this Section.<br /> EN<br /> OJ L, 12.7.2024<br /> 58/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>Article 12<br /> Record-keeping<br /> 1.<br /> High-risk AI systems shall technically allow for the automatic recording of events (logs) over the lifetime of the <br /> system.<br /> 2.</p>
Show original text

Article 12 - Record-keeping: High-risk AI systems must automatically record events (logs) throughout their entire lifetime. These logs must track information relevant to identifying risks, monitoring the system after it is sold, and checking how the system operates. For certain high-risk AI systems, logs must specifically record: when the system is used (start and end times), which database was checked, what input data matched, and which people verified the results. Article 13 - Transparency and Information: High-risk AI systems must be designed so that users can clearly understand how the system works and what results it produces. The system must be transparent enough to help both the company providing it and the company using it follow all required rules.

<p>ropa.eu/eli/reg/2024/1689/oj</p> <p>Article 12<br /> Record-keeping<br /> 1.<br /> High-risk AI systems shall technically allow for the automatic recording of events (logs) over the lifetime of the <br /> system.<br /> 2.<br /> In order to ensure a level of traceability of the functioning of a high-risk AI system that is appropriate to the intended <br /> purpose of the system, logging capabilities shall enable the recording of events relevant for:<br /> (a) identifying situations that may result in the high-risk AI system presenting a risk within the meaning of Article 79(1) or <br /> in a substantial modification;<br /> (b) facilitating the post-market monitoring referred to in Article 72; and<br /> (c) monitoring the operation of high-risk AI systems referred to in Article 26(5).<br /> 3.<br /> For high-risk AI systems referred to in point 1 (a), of Annex III, the logging capabilities shall provide, at a minimum:<br /> (a) recording of the period of each use of the system (start date and time and end date and time of each use);<br /> (b) the reference database against which input data has been checked by the system;<br /> (c) the input data for which the search has led to a match;<br /> (d) the identification of the natural persons involved in the verification of the results, as referred to in Article 14(5).<br /> Article 13<br /> Transparency and provision of information to deployers<br /> 1.<br /> High-risk AI systems shall be designed and developed in such a way as to ensure that their operation is sufficiently <br /> transparent to enable deployers to interpret a system’s output and use it appropriately. An appropriate type and degree of <br /> transparency shall be ensured with a view to achieving compliance with the relevant obligations of the provider and <br /> deployer set out in Section 3.<br /> 2.</p>
Show original text

High-risk AI systems must come with clear instructions for users in a digital format or other appropriate form. These instructions should be easy to understand and contain all relevant information that users need to know.

The instructions must include:

  1. Provider Information: The name and contact details of the company that created the AI system, and its authorized representative if applicable.

  2. System Performance Details: A description of what the AI system is designed to do, including:
    - Its intended purpose
    - How accurate it is (with specific measurements), how reliable it is, and how secure it is against cyber attacks
    - Any known problems or situations that could affect its accuracy, reliability, or security
    - Any known risks to health, safety, or people's rights that could occur when using the system as intended or through foreseeable misuse
    - If applicable, information about the system's ability to explain its decisions

The goal is to ensure users have enough transparency and understanding to use the AI system properly and safely.

<p>ers to interpret a system’s output and use it appropriately. An appropriate type and degree of <br /> transparency shall be ensured with a view to achieving compliance with the relevant obligations of the provider and <br /> deployer set out in Section 3.<br /> 2.<br /> High-risk AI systems shall be accompanied by instructions for use in an appropriate digital format or otherwise that <br /> include concise, complete, correct and clear information that is relevant, accessible and comprehensible to deployers.<br /> 3.<br /> The instructions for use shall contain at least the following information:<br /> (a) the identity and the contact details of the provider and, where applicable, of its authorised representative;<br /> (b) the characteristics, capabilities and limitations of performance of the high-risk AI system, including:<br /> (i) its intended purpose;<br /> (ii) the level of accuracy, including its metrics, robustness and cybersecurity referred to in Article 15 against which the <br /> high-risk AI system has been tested and validated and which can be expected, and any known and foreseeable <br /> circumstances that may have an impact on that expected level of accuracy, robustness and cybersecurity;<br /> (iii) any known or foreseeable circumstance, related to the use of the high-risk AI system in accordance with its <br /> intended purpose or under conditions of reasonably foreseeable misuse, which may lead to risks to the health and <br /> safety or fundamental rights referred to in Article 9(2);<br /> (iv) where applicable, the technical capabilities and characteristics of the high-risk AI system to provide information <br /> that is relevant to explain its output;<br /> OJ L, 12.7.</p>
Show original text

High-risk AI systems must include clear documentation that explains how they work and make decisions. This documentation should cover: the system's technical abilities and how it produces results; how well it performs for specific groups of people it will be used with; details about the data used to train and test the system; guidance for users on understanding and properly using the system's outputs; any planned changes to the system noted during initial approval; human oversight tools and technical features to help users interpret results; the computing power and hardware needed, how long the system will last, and maintenance requirements including software updates; and mechanisms that allow users to properly collect, store, and review system activity logs. Additionally, high-risk AI systems must be designed with appropriate tools and interfaces so that human operators can effectively monitor and oversee them while they are being used.

<p>afety or fundamental rights referred to in Article 9(2);<br /> (iv) where applicable, the technical capabilities and characteristics of the high-risk AI system to provide information <br /> that is relevant to explain its output;<br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 59/144</p> <p>(v) when appropriate, its performance regarding specific persons or groups of persons on which the system is <br /> intended to be used;<br /> (vi) when appropriate, specifications for the input data, or any other relevant information in terms of the training, <br /> validation and testing data sets used, taking into account the intended purpose of the high-risk AI system;<br /> (vii) where applicable, information to enable deployers to interpret the output of the high-risk AI system and use it <br /> appropriately;<br /> (c) the changes to the high-risk AI system and its performance which have been pre-determined by the provider at the <br /> moment of the initial conformity assessment, if any;<br /> (d) the human oversight measures referred to in Article 14, including the technical measures put in place to facilitate the <br /> interpretation of the outputs of the high-risk AI systems by the deployers;<br /> (e) the computational and hardware resources needed, the expected lifetime of the high-risk AI system and any necessary <br /> maintenance and care measures, including their frequency, to ensure the proper functioning of that AI system, including <br /> as regards software updates;<br /> (f) where relevant, a description of the mechanisms included within the high-risk AI system that allows deployers to <br /> properly collect, store and interpret the logs in accordance with Article 12.<br /> Article 14<br /> Human oversight<br /> 1.<br /> High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine <br /> interface tools, that they can be effectively overseen by natural persons during the period in which they are in use.<br /> 2.</p>
Show original text

Human Oversight of High-Risk AI Systems

  1. High-risk AI systems must be designed so that people can effectively monitor and control them while they are being used. The systems should include appropriate tools to help people oversee their operation.

  2. The goal of human oversight is to prevent or reduce risks to health, safety, and fundamental rights that may occur when a high-risk AI system is used as intended or misused in foreseeable ways. This is especially important when other safety measures are not enough to eliminate these risks.

  3. Oversight measures must match the level of risk, how independently the AI system operates, and how it will be used. Providers can implement these measures in two ways:
    (a) Build safeguards directly into the AI system before selling or deploying it, if technically possible.
    (b) Provide instructions for safeguards that the person or organization using the system should implement.

  4. When providing a high-risk AI system to users, the provider must ensure that people responsible for oversight can:
    (a) Understand what the AI system can and cannot do, and monitor how it works. This includes spotting and fixing problems, errors, and unexpected behavior.
    (b) Recognize the risk of over-trusting the AI system's decisions (automation bias), especially when the system provides information or recommendations that people will use to make decisions.
    (c) Correctly understand and interpret the AI system's output by using available explanation tools and methods.

<p>Human oversight<br /> 1.<br /> High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine <br /> interface tools, that they can be effectively overseen by natural persons during the period in which they are in use.<br /> 2.<br /> Human oversight shall aim to prevent or minimise the risks to health, safety or fundamental rights that may emerge <br /> when a high-risk AI system is used in accordance with its intended purpose or under conditions of reasonably foreseeable <br /> misuse, in particular where such risks persist despite the application of other requirements set out in this Section.<br /> 3.<br /> The oversight measures shall be commensurate with the risks, level of autonomy and context of use of the high-risk <br /> AI system, and shall be ensured through either one or both of the following types of measures:<br /> (a) measures identified and built, when technically feasible, into the high-risk AI system by the provider before it is placed <br /> on the market or put into service;<br /> (b) measures identified by the provider before placing the high-risk AI system on the market or putting it into service and <br /> that are appropriate to be implemented by the deployer.<br /> 4.<br /> For the purpose of implementing paragraphs 1, 2 and 3, the high-risk AI system shall be provided to the deployer in <br /> such a way that natural persons to whom human oversight is assigned are enabled, as appropriate and proportionate:<br /> (a) to properly understand the relevant capacities and limitations of the high-risk AI system and be able to duly monitor its <br /> operation, including in view of detecting and addressing anomalies, dysfunctions and unexpected performance;<br /> (b) to remain aware of the possible tendency of automatically relying or over-relying on the output produced by a high-risk <br /> AI system (automation bias), in particular for high-risk AI systems used to provide information or recommendations for <br /> decisions to be taken by natural persons;<br /> (c) to correctly interpret the high-risk AI system’s output, taking into account, for example, the interpretation tools and <br /> methods</p>
Show original text

High-risk AI systems that provide information or recommendations to people must meet several important requirements:

  1. Users must be able to understand the system's output using available interpretation tools and methods.

  2. Users must have the ability to choose not to use the system or to ignore, override, or reverse its recommendations in any situation.

  3. Users must be able to stop or interrupt the system safely through a stop button or similar control.

  4. For AI systems that identify people, any action or decision based on that identification must be verified and confirmed by at least two qualified people with proper training and authority. However, this two-person verification requirement does not apply to AI systems used by law enforcement, immigration, border control, or asylum authorities if EU or national law determines it would be impractical.

  5. All high-risk AI systems must be designed and built to maintain appropriate levels of accuracy, robustness, and cybersecurity throughout their entire operational life.

<p>in particular for high-risk AI systems used to provide information or recommendations for <br /> decisions to be taken by natural persons;<br /> (c) to correctly interpret the high-risk AI system’s output, taking into account, for example, the interpretation tools and <br /> methods available;<br /> EN<br /> OJ L, 12.7.2024<br /> 60/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>(d) to decide, in any particular situation, not to use the high-risk AI system or to otherwise disregard, override or reverse <br /> the output of the high-risk AI system;<br /> (e) to intervene in the operation of the high-risk AI system or interrupt the system through a ‘stop’ button or a similar <br /> procedure that allows the system to come to a halt in a safe state.<br /> 5.<br /> For high-risk AI systems referred to in point 1(a) of Annex III, the measures referred to in paragraph 3 of this Article <br /> shall be such as to ensure that, in addition, no action or decision is taken by the deployer on the basis of the identification <br /> resulting from the system unless that identification has been separately verified and confirmed by at least two natural <br /> persons with the necessary competence, training and authority.<br /> The requirement for a separate verification by at least two natural persons shall not apply to high-risk AI systems used for <br /> the purposes of law enforcement, migration, border control or asylum, where Union or national law considers the <br /> application of this requirement to be disproportionate.<br /> Article 15<br /> Accuracy, robustness and cybersecurity<br /> 1.<br /> High-risk AI systems shall be designed and developed in such a way that they achieve an appropriate level of accuracy, <br /> robustness, and cybersecurity, and that they perform consistently in those respects throughout their lifecycle.<br /> 2.</p>
Show original text

Accuracy, Robustness, and Cybersecurity

  1. High-risk AI systems must be designed to achieve appropriate levels of accuracy, robustness, and cybersecurity. They must maintain these standards consistently throughout their entire lifecycle.

  2. The Commission will work with relevant organizations, including measurement and benchmarking authorities, to develop standards and methods for measuring accuracy, robustness, and other important performance metrics for high-risk AI systems.

  3. High-risk AI systems must clearly state their accuracy levels and accuracy metrics in their user instructions.

  4. High-risk AI systems must be designed to handle errors, faults, and inconsistencies that may occur within the system or its environment, especially when interacting with people or other systems. Both technical and organizational measures are required. Robustness can be achieved through backup systems or fail-safe plans. AI systems that continue learning after being released must be designed to prevent biased outputs from affecting future operations. Any feedback loops must be properly managed with appropriate safeguards.

  5. High-risk AI systems must be protected against unauthorized attempts to change how they work, their outputs, or their performance by exploiting vulnerabilities. Cybersecurity protections must be appropriate for the specific circumstances and risks involved.

<p>ness and cybersecurity<br /> 1.<br /> High-risk AI systems shall be designed and developed in such a way that they achieve an appropriate level of accuracy, <br /> robustness, and cybersecurity, and that they perform consistently in those respects throughout their lifecycle.<br /> 2.<br /> To address the technical aspects of how to measure the appropriate levels of accuracy and robustness set out in <br /> paragraph 1 and any other relevant performance metrics, the Commission shall, in cooperation with relevant stakeholders <br /> and organisations such as metrology and benchmarking authorities, encourage, as appropriate, the development of <br /> benchmarks and measurement methodologies.<br /> 3.<br /> The levels of accuracy and the relevant accuracy metrics of high-risk AI systems shall be declared in the accompanying <br /> instructions of use.<br /> 4.<br /> High-risk AI systems shall be as resilient as possible regarding errors, faults or inconsistencies that may occur within <br /> the system or the environment in which the system operates, in particular due to their interaction with natural persons or <br /> other systems. Technical and organisational measures shall be taken in this regard.<br /> The robustness of high-risk AI systems may be achieved through technical redundancy solutions, which may include <br /> backup or fail-safe plans.<br /> High-risk AI systems that continue to learn after being placed on the market or put into service shall be developed in such <br /> a way as to eliminate or reduce as far as possible the risk of possibly biased outputs influencing input for future operations <br /> (feedback loops), and as to ensure that any such feedback loops are duly addressed with appropriate mitigation measures.<br /> 5.<br /> High-risk AI systems shall be resilient against attempts by unauthorised third parties to alter their use, outputs or <br /> performance by exploiting system vulnerabilities.<br /> The technical solutions aiming to ensure the cybersecurity of high-risk AI systems shall be appropriate to the relevant <br /> circumstances and the risks.</p>
Show original text

High-risk AI systems must be protected against unauthorized changes to how they work or what they produce by fixing security weaknesses.

Security solutions should match the specific risks involved. These solutions must address AI-specific threats, including: preventing or detecting attacks that poison training data, corrupt pre-trained models, use tricky inputs to fool the AI, steal confidential information, or exploit model weaknesses.

Providers of high-risk AI systems must:
(a) Ensure their systems meet all required safety standards
(b) Display their name, business name, trademark, and contact address on the system, packaging, or documentation
(c) Have a quality management system in place
(d) Keep required documentation
(e) Keep automatic logs generated by the system
(f) Have the system tested and approved before selling or using it
(g) Create an official EU declaration confirming the system meets requirements
(h) Attach a CE mark to the system

<p>resilient against attempts by unauthorised third parties to alter their use, outputs or <br /> performance by exploiting system vulnerabilities.<br /> The technical solutions aiming to ensure the cybersecurity of high-risk AI systems shall be appropriate to the relevant <br /> circumstances and the risks.<br /> The technical solutions to address AI specific vulnerabilities shall include, where appropriate, measures to prevent, detect, <br /> respond to, resolve and control for attacks trying to manipulate the training data set (data poisoning), or pre-trained <br /> components used in training (model poisoning), inputs designed to cause the AI model to make a mistake (adversarial <br /> examples or model evasion), confidentiality attacks or model flaws.<br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 61/144</p> <p>SECTION 3<br /> Obligations of providers and deployers of high-risk AI systems and other parties<br /> Article 16<br /> Obligations of providers of high-risk AI systems<br /> Providers of high-risk AI systems shall:<br /> (a) ensure that their high-risk AI systems are compliant with the requirements set out in Section 2;<br /> (b) indicate on the high-risk AI system or, where that is not possible, on its packaging or its accompanying documentation, <br /> as applicable, their name, registered trade name or registered trade mark, the address at which they can be contacted;<br /> (c) have a quality management system in place which complies with Article 17;<br /> (d) keep the documentation referred to in Article 18;<br /> (e) when under their control, keep the logs automatically generated by their high-risk AI systems as referred to in <br /> Article 19;<br /> (f) ensure that the high-risk AI system undergoes the relevant conformity assessment procedure as referred to in Article 43, <br /> prior to its being placed on the market or put into service;<br /> (g) draw up an EU declaration of conformity in accordance with Article 47;<br /> (h) affix the CE</p>
Show original text

Providers of high-risk AI systems must complete several key requirements before selling or using their products:

  1. Complete the required conformity assessment process (Article 43) before market release
  2. Create an EU declaration of conformity (Article 47)
  3. Add a CE marking to the system, its packaging, or documentation to show it meets regulations (Article 48)
  4. Register the system as required (Article 49)
  5. Take corrective actions and provide information when needed (Article 20)
  6. Prove the system meets all requirements when asked by national authorities
  7. Ensure the system is accessible according to EU Directives 2016/2102 and 2019/882

Providers must also establish a quality management system that includes:
- A compliance strategy covering regulatory requirements and system modifications
- Design procedures including design control and verification
- Development procedures including quality control and assurance
- Testing and validation procedures to be performed before, during, and after development, with specified frequency

All quality management procedures must be documented in writing as policies, procedures, and instructions.

<p>es the relevant conformity assessment procedure as referred to in Article 43, <br /> prior to its being placed on the market or put into service;<br /> (g) draw up an EU declaration of conformity in accordance with Article 47;<br /> (h) affix the CE marking to the high-risk AI system or, where that is not possible, on its packaging or its accompanying <br /> documentation, to indicate conformity with this Regulation, in accordance with Article 48;<br /> (i) comply with the registration obligations referred to in Article 49(1);<br /> (j) take the necessary corrective actions and provide information as required in Article 20;<br /> (k) upon a reasoned request of a national competent authority, demonstrate the conformity of the high-risk AI system with <br /> the requirements set out in Section 2;<br /> (l) ensure that the high-risk AI system complies with accessibility requirements in accordance with Directives (EU) <br /> 2016/2102 and (EU) 2019/882.<br /> Article 17<br /> Quality management system<br /> 1.<br /> Providers of high-risk AI systems shall put a quality management system in place that ensures compliance with this <br /> Regulation. That system shall be documented in a systematic and orderly manner in the form of written policies, procedures <br /> and instructions, and shall include at least the following aspects:<br /> (a)<br /> a strategy for regulatory compliance, including compliance with conformity assessment procedures and procedures for <br /> the management of modifications to the high-risk AI system;<br /> (b) techniques, procedures and systematic actions to be used for the design, design control and design verification of the <br /> high-risk AI system;<br /> (c)<br /> techniques, procedures and systematic actions to be used for the development, quality control and quality assurance of <br /> the high-risk AI system;<br /> (d) examination, test and validation procedures to be carried out before, during and after the development of the high-risk <br /> AI system, and the frequency with which they have to be carried out;<br /> EN<br /> OJ L, 12.7.</p>
Show original text

High-risk AI systems must follow these requirements:

(d) Testing and validation must happen before, during, and after development, with a set schedule for how often tests occur.

(e) Technical standards must be applied. If existing standards don't fully cover all requirements, alternative methods must ensure the AI system meets all rules.

(f) Data management procedures are needed for all data activities, including collecting, analyzing, labeling, storing, filtering, and organizing data before the AI system is released or used.

(g) A risk management system must be in place (as described in Article 9).

(h) A system to monitor the AI system after it is released to the market must be set up and maintained (as described in Article 72).

(i) Procedures must exist for reporting serious problems (as described in Article 73).

(j) Clear communication processes must be established with government authorities, other relevant organizations, data providers, testing bodies, competitors, customers, and other interested parties.

(k) All important documents and information must be kept and organized.

(l) Resources must be managed properly, including measures to ensure a steady supply of necessary materials.

(m) Clear responsibility assignments must be documented, showing which managers and staff are responsible for each requirement.

These requirements should be adjusted based on the size of the organization providing the AI system.

<p>(d) examination, test and validation procedures to be carried out before, during and after the development of the high-risk <br /> AI system, and the frequency with which they have to be carried out;<br /> EN<br /> OJ L, 12.7.2024<br /> 62/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>(e)<br /> technical specifications, including standards, to be applied and, where the relevant harmonised standards are not <br /> applied in full or do not cover all of the relevant requirements set out in Section 2, the means to be used to ensure that <br /> the high-risk AI system complies with those requirements;<br /> (f)<br /> systems and procedures for data management, including data acquisition, data collection, data analysis, data labelling, <br /> data storage, data filtration, data mining, data aggregation, data retention and any other operation regarding the data <br /> that is performed before and for the purpose of the placing on the market or the putting into service of high-risk AI <br /> systems;<br /> (g) the risk management system referred to in Article 9;<br /> (h) the setting-up, implementation and maintenance of a post-market monitoring system, in accordance with Article 72;<br /> (i)<br /> procedures related to the reporting of a serious incident in accordance with Article 73;<br /> (j)<br /> the handling of communication with national competent authorities, other relevant authorities, including those <br /> providing or supporting the access to data, notified bodies, other operators, customers or other interested parties;<br /> (k) systems and procedures for record-keeping of all relevant documentation and information;<br /> (l)<br /> resource management, including security-of-supply related measures;<br /> (m) an accountability framework setting out the responsibilities of the management and other staff with regard to all the <br /> aspects listed in this paragraph.<br /> 2.<br /> The implementation of the aspects referred to in paragraph 1 shall be proportionate to the size of the provider’s <br /> organisation.</p>
Show original text

Management and staff must understand their responsibilities for all aspects covered in this section.

The requirements must be adjusted based on the size of the organization. However, all providers must follow the strict standards and protection levels needed to ensure their high-risk AI systems comply with this regulation.

Providers of high-risk AI systems that already have quality management systems or similar requirements under other EU laws can include these aspects as part of their existing systems.

Financial institutions that follow EU financial services rules can meet the quality management system requirements by following their internal governance rules instead. This applies to all requirements except certain specific points (g, h, and i). Any relevant standards should be considered.

Providers must keep the following documents available to national authorities for 10 years after placing or putting the high-risk AI system into service: the technical documentation, quality management system documentation, documentation of any changes approved by notified bodies (if applicable), decisions and documents from notified bodies (if applicable), and the EU declaration of conformity.

<p>setting out the responsibilities of the management and other staff with regard to all the <br /> aspects listed in this paragraph.<br /> 2.<br /> The implementation of the aspects referred to in paragraph 1 shall be proportionate to the size of the provider’s <br /> organisation. Providers shall, in any event, respect the degree of rigour and the level of protection required to ensure the <br /> compliance of their high-risk AI systems with this Regulation.<br /> 3.<br /> Providers of high-risk AI systems that are subject to obligations regarding quality management systems or an <br /> equivalent function under relevant sectoral Union law may include the aspects listed in paragraph 1 as part of the quality <br /> management systems pursuant to that law.<br /> 4.<br /> For providers that are financial institutions subject to requirements regarding their internal governance, arrangements <br /> or processes under Union financial services law, the obligation to put in place a quality management system, with the <br /> exception of paragraph 1, points (g), (h) and (i) of this Article, shall be deemed to be fulfilled by complying with the rules on <br /> internal governance arrangements or processes pursuant to the relevant Union financial services law. To that end, any <br /> harmonised standards referred to in Article 40 shall be taken into account.<br /> Article 18<br /> Documentation keeping<br /> 1.<br /> The provider shall, for a period ending 10 years after the high-risk AI system has been placed on the market or put <br /> into service, keep at the disposal of the national competent authorities:<br /> (a) the technical documentation referred to in Article 11;<br /> (b) the documentation concerning the quality management system referred to in Article 17;<br /> (c) the documentation concerning the changes approved by notified bodies, where applicable;<br /> (d) the decisions and other documents issued by the notified bodies, where applicable;<br /> (e) the EU declaration of conformity referred to in Article 47.<br /> OJ L, 12.7.</p>
Show original text

Providers must keep important documents available to national authorities for a set period. If a provider or their representative goes bankrupt or stops operating before this period ends, each Member State decides how to handle the documentation. Financial institutions must keep technical documentation according to financial services rules. High-risk AI system providers must keep automatic logs generated by their systems for at least six months, or longer if required by law. Financial institutions must keep these logs as part of their required financial services documentation. Providers must take corrective actions and inform relevant parties when issues arise.

<p>concerning the changes approved by notified bodies, where applicable;<br /> (d) the decisions and other documents issued by the notified bodies, where applicable;<br /> (e) the EU declaration of conformity referred to in Article 47.<br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 63/144</p> <p>2.<br /> Each Member State shall determine conditions under which the documentation referred to in paragraph 1 remains at <br /> the disposal of the national competent authorities for the period indicated in that paragraph for the cases when a provider <br /> or its authorised representative established on its territory goes bankrupt or ceases its activity prior to the end of that <br /> period.<br /> 3.<br /> Providers that are financial institutions subject to requirements regarding their internal governance, arrangements or <br /> processes under Union financial services law shall maintain the technical documentation as part of the documentation kept <br /> under the relevant Union financial services law.<br /> Article 19<br /> Automatically generated logs<br /> 1.<br /> Providers of high-risk AI systems shall keep the logs referred to in Article 12(1), automatically generated by their <br /> high-risk AI systems, to the extent such logs are under their control. Without prejudice to applicable Union or national law, <br /> the logs shall be kept for a period appropriate to the intended purpose of the high-risk AI system, of at least six months, <br /> unless provided otherwise in the applicable Union or national law, in particular in Union law on the protection of personal <br /> data.<br /> 2.<br /> Providers that are financial institutions subject to requirements regarding their internal governance, arrangements or <br /> processes under Union financial services law shall maintain the logs automatically generated by their high-risk AI systems <br /> as part of the documentation kept under the relevant financial services law.<br /> Article 20<br /> Corrective actions and duty of information<br /> 1.</p>
Show original text

Companies that provide high-risk AI systems must keep automatic logs created by these systems as part of their required financial records.

Article 20: Fixing Problems and Reporting Requirements

  1. If a company provides a high-risk AI system that does not follow this regulation, they must immediately fix it, remove it from the market, disable it, or recall it. They must tell all distributors, users, authorized representatives, and importers about these actions.

  2. If a high-risk AI system creates a serious risk, the provider must quickly investigate the problem with the user's help. They must inform the government authorities responsible for overseeing this AI system and the certification body that approved it. They must explain what went wrong and what steps they took to fix it.

Article 21: Working with Government Authorities

  1. When a government authority asks, companies providing high-risk AI systems must give them all information and documents needed to prove the system follows the rules. This information must be provided in a language the authority understands, using one of the official languages of the European Union.

  2. [Text continues]

<p>arrangements or <br /> processes under Union financial services law shall maintain the logs automatically generated by their high-risk AI systems <br /> as part of the documentation kept under the relevant financial services law.<br /> Article 20<br /> Corrective actions and duty of information<br /> 1.<br /> Providers of high-risk AI systems which consider or have reason to consider that a high-risk AI system that they have <br /> placed on the market or put into service is not in conformity with this Regulation shall immediately take the necessary <br /> corrective actions to bring that system into conformity, to withdraw it, to disable it, or to recall it, as appropriate. They shall <br /> inform the distributors of the high-risk AI system concerned and, where applicable, the deployers, the authorised <br /> representative and importers accordingly.<br /> 2.<br /> Where the high-risk AI system presents a risk within the meaning of Article 79(1) and the provider becomes aware of <br /> that risk, it shall immediately investigate the causes, in collaboration with the reporting deployer, where applicable, and <br /> inform the market surveillance authorities competent for the high-risk AI system concerned and, where applicable, the <br /> notified body that issued a certificate for that high-risk AI system in accordance with Article 44, in particular, of the nature <br /> of the non-compliance and of any relevant corrective action taken.<br /> Article 21<br /> Cooperation with competent authorities<br /> 1.<br /> Providers of high-risk AI systems shall, upon a reasoned request by a competent authority, provide that authority all <br /> the information and documentation necessary to demonstrate the conformity of the high-risk AI system with the <br /> requirements set out in Section 2, in a language which can be easily understood by the authority in one of the official <br /> languages of the institutions of the Union as indicated by the Member State concerned.<br /> 2.</p>
Show original text

Providers of high-risk AI systems must submit documentation to authorities in a language the authorities can easily understand, using one of the official languages of the European Union institutions.

When a competent authority requests it, providers must give that authority access to the automatic logs generated by their high-risk AI system, if the provider controls those logs.

Any information that a competent authority receives under these rules must be kept confidential according to Article 78.

Providers of high-risk AI systems that are based outside the European Union must appoint an authorized representative located within the Union before selling their systems in the EU market. This appointment must be done in writing.

The provider must allow its authorized representative to carry out the tasks described in the written agreement between them.

The authorized representative must perform the tasks specified in the agreement with the provider. When market surveillance authorities ask for it, the representative must provide a copy of the agreement in one of the official languages of the European Union institutions, as requested by the authority.

<p>the high-risk AI system with the <br /> requirements set out in Section 2, in a language which can be easily understood by the authority in one of the official <br /> languages of the institutions of the Union as indicated by the Member State concerned.<br /> 2.<br /> Upon a reasoned request by a competent authority, providers shall also give the requesting competent authority, as <br /> applicable, access to the automatically generated logs of the high-risk AI system referred to in Article 12(1), to the extent <br /> such logs are under their control.<br /> 3.<br /> Any information obtained by a competent authority pursuant to this Article shall be treated in accordance with the <br /> confidentiality obligations set out in Article 78.<br /> EN<br /> OJ L, 12.7.2024<br /> 64/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>Article 22<br /> Authorised representatives of providers of high-risk AI systems<br /> 1.<br /> Prior to making their high-risk AI systems available on the Union market, providers established in third countries <br /> shall, by written mandate, appoint an authorised representative which is established in the Union.<br /> 2.<br /> The provider shall enable its authorised representative to perform the tasks specified in the mandate received from the <br /> provider.<br /> 3.<br /> The authorised representative shall perform the tasks specified in the mandate received from the provider. It shall <br /> provide a copy of the mandate to the market surveillance authorities upon request, in one of the official languages of the <br /> institutions of the Union, as indicated by the competent authority.</p>
Show original text

An authorized representative must complete the tasks outlined in their mandate from the provider and provide a copy to market surveillance authorities upon request in an official EU language. The mandate gives the authorized representative the following responsibilities: (a) confirm that the EU declaration of conformity and technical documentation have been properly prepared and that the provider completed an appropriate conformity assessment; (b) keep the provider's contact details, a copy of the EU declaration of conformity, technical documentation, and any notified body certificate available to authorities for 10 years after the high-risk AI system is sold or put into use; (c) provide authorities with all necessary information and documentation upon request to prove the high-risk AI system meets requirements, including access to system-generated logs that the provider controls; (d) work with authorities when they take action regarding the high-risk AI system, particularly to reduce and manage its risks; (e) where required, meet registration obligations or ensure the provider's registration includes the required information.

<p>the tasks specified in the mandate received from the provider. It shall <br /> provide a copy of the mandate to the market surveillance authorities upon request, in one of the official languages of the <br /> institutions of the Union, as indicated by the competent authority. For the purposes of this Regulation, the mandate shall <br /> empower the authorised representative to carry out the following tasks:<br /> (a) verify that the EU declaration of conformity referred to in Article 47 and the technical documentation referred to in <br /> Article 11 have been drawn up and that an appropriate conformity assessment procedure has been carried out by the <br /> provider;<br /> (b) keep at the disposal of the competent authorities and national authorities or bodies referred to in Article 74(10), for <br /> a period of 10 years after the high-risk AI system has been placed on the market or put into service, the contact details <br /> of the provider that appointed the authorised representative, a copy of the EU declaration of conformity referred to in <br /> Article 47, the technical documentation and, if applicable, the certificate issued by the notified body;<br /> (c) provide a competent authority, upon a reasoned request, with all the information and documentation, including that <br /> referred to in point (b) of this subparagraph, necessary to demonstrate the conformity of a high-risk AI system with the <br /> requirements set out in Section 2, including access to the logs, as referred to in Article 12(1), automatically generated by <br /> the high-risk AI system, to the extent such logs are under the control of the provider;<br /> (d) cooperate with competent authorities, upon a reasoned request, in any action the latter take in relation to the high-risk <br /> AI system, in particular to reduce and mitigate the risks posed by the high-risk AI system;<br /> (e) where applicable, comply with the registration obligations referred to in Article 49(1), or, if the registration is carried <br /> out by the provider itself, ensure that the information referred to in point 3 of Section A of Annex</p>
Show original text

The authorized representative must have the power to be contacted by authorities instead of or in addition to the provider regarding compliance with regulations. If the authorized representative believes the provider is violating its obligations, it must immediately end the agreement and notify the market surveillance authority and relevant testing body about the termination and reasons. Before selling a high-risk AI system, importers must verify that: the provider completed the required conformity assessment; the provider created proper technical documentation; the system has the required CE marking and includes an EU declaration of conformity and user instructions; and the provider appointed an authorized representative.

<p>system;<br /> (e) where applicable, comply with the registration obligations referred to in Article 49(1), or, if the registration is carried <br /> out by the provider itself, ensure that the information referred to in point 3 of Section A of Annex VIII is correct.<br /> The mandate shall empower the authorised representative to be addressed, in addition to or instead of the provider, by the <br /> competent authorities, on all issues related to ensuring compliance with this Regulation.<br /> 4.<br /> The authorised representative shall terminate the mandate if it considers or has reason to consider the provider to be <br /> acting contrary to its obligations pursuant to this Regulation. In such a case, it shall immediately inform the relevant market <br /> surveillance authority, as well as, where applicable, the relevant notified body, about the termination of the mandate and the <br /> reasons therefor.<br /> Article 23<br /> Obligations of importers<br /> 1.<br /> Before placing a high-risk AI system on the market, importers shall ensure that the system is in conformity with this <br /> Regulation by verifying that:<br /> (a) the relevant conformity assessment procedure referred to in Article 43 has been carried out by the provider of the <br /> high-risk AI system;<br /> (b) the provider has drawn up the technical documentation in accordance with Article 11 and Annex IV;<br /> (c) the system bears the required CE marking and is accompanied by the EU declaration of conformity referred to in <br /> Article 47 and instructions for use;<br /> (d) the provider has appointed an authorised representative in accordance with Article 22(1).<br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 65/144</p> <p>2.</p>
Show original text

Importers have several responsibilities when handling high-risk AI systems:

  1. If an importer suspects a high-risk AI system does not meet regulations, is fake, or has fake documentation, they must not sell it until it is fixed. If the system poses a safety risk, the importer must notify the manufacturer, authorized representative, and market surveillance authorities.

  2. Importers must display their name, business name or trademark, and contact address on the AI system, its packaging, or accompanying documents.

  3. Importers must ensure that storage and transport conditions do not damage the system or make it non-compliant with requirements.

  4. Importers must keep records for 10 years after the system is sold or put into use. These records include: the certification document from the testing body (if applicable), user instructions, and the EU conformity declaration.

  5. When authorities request it, importers must provide all necessary information and documents to prove the system meets regulations. This must be in a language the authorities understand. Importers must also make technical documentation available to authorities upon request.

<p>in accordance with Article 22(1).<br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 65/144</p> <p>2.<br /> Where an importer has sufficient reason to consider that a high-risk AI system is not in conformity with this <br /> Regulation, or is falsified, or accompanied by falsified documentation, it shall not place the system on the market until it has <br /> been brought into conformity. Where the high-risk AI system presents a risk within the meaning of Article 79(1), the <br /> importer shall inform the provider of the system, the authorised representative and the market surveillance authorities to <br /> that effect.<br /> 3.<br /> Importers shall indicate their name, registered trade name or registered trade mark, and the address at which they can <br /> be contacted on the high-risk AI system and on its packaging or its accompanying documentation, where applicable.<br /> 4.<br /> Importers shall ensure that, while a high-risk AI system is under their responsibility, storage or transport conditions, <br /> where applicable, do not jeopardise its compliance with the requirements set out in Section 2.<br /> 5.<br /> Importers shall keep, for a period of 10 years after the high-risk AI system has been placed on the market or put into <br /> service, a copy of the certificate issued by the notified body, where applicable, of the instructions for use, and of the EU <br /> declaration of conformity referred to in Article 47.<br /> 6.<br /> Importers shall provide the relevant competent authorities, upon a reasoned request, with all the necessary <br /> information and documentation, including that referred to in paragraph 5, to demonstrate the conformity of a high-risk AI <br /> system with the requirements set out in Section 2 in a language which can be easily understood by them. For this purpose, <br /> they shall also ensure that the technical documentation can be made available to those authorities.<br /> 7.</p>
Show original text

Importers must provide clear explanations of high-risk AI system requirements to authorities and make technical documentation available to them. Importers must also work with authorities to address and reduce any risks from high-risk AI systems they place on the market.

Distributors have specific responsibilities before selling high-risk AI systems. They must check that the system has the required CE marking, includes a copy of the EU declaration of conformity and user instructions, and that the provider and importer have met their obligations. If a distributor believes the system does not meet the required standards, they cannot sell it until it is fixed. If the system poses a risk, the distributor must notify the provider or importer. While the system is in the distributor's care, they must ensure that storage and transport conditions do not damage the system or cause it to fail to meet the required standards.

<p>of a high-risk AI <br /> system with the requirements set out in Section 2 in a language which can be easily understood by them. For this purpose, <br /> they shall also ensure that the technical documentation can be made available to those authorities.<br /> 7.<br /> Importers shall cooperate with the relevant competent authorities in any action those authorities take in relation to <br /> a high-risk AI system placed on the market by the importers, in particular to reduce and mitigate the risks posed by it.<br /> Article 24<br /> Obligations of distributors<br /> 1.<br /> Before making a high-risk AI system available on the market, distributors shall verify that it bears the required CE <br /> marking, that it is accompanied by a copy of the EU declaration of conformity referred to in Article 47 and instructions for <br /> use, and that the provider and the importer of that system, as applicable, have complied with their respective obligations as <br /> laid down in Article 16, points (b) and (c) and Article 23(3).<br /> 2.<br /> Where a distributor considers or has reason to consider, on the basis of the information in its possession, that <br /> a high-risk AI system is not in conformity with the requirements set out in Section 2, it shall not make the high-risk AI <br /> system available on the market until the system has been brought into conformity with those requirements. Furthermore, <br /> where the high-risk AI system presents a risk within the meaning of Article 79(1), the distributor shall inform the provider <br /> or the importer of the system, as applicable, to that effect.<br /> 3.<br /> Distributors shall ensure that, while a high-risk AI system is under their responsibility, storage or transport <br /> conditions, where applicable, do not jeopardise the compliance of the system with the requirements set out in Section 2.<br /> 4.</p>
Show original text

Distributors of high-risk AI systems have several key responsibilities:

  1. They must ensure that storage and transport conditions do not harm the system's compliance with required standards.

  2. If a distributor believes a high-risk AI system it has sold does not meet the required standards, it must take corrective action. This means either fixing the system, removing it from the market, or recalling it. Alternatively, the distributor can ensure that the provider, importer, or another relevant operator takes these actions. If the system poses a risk, the distributor must immediately notify the provider or importer and the relevant authorities, explaining what is wrong and what corrective steps have been taken.

  3. When asked by a competent authority, distributors must provide all information and documentation about their actions to prove the system meets required standards.

  4. Distributors must work with relevant authorities to help reduce or eliminate any risks posed by high-risk AI systems they have distributed.

<p>3.<br /> Distributors shall ensure that, while a high-risk AI system is under their responsibility, storage or transport <br /> conditions, where applicable, do not jeopardise the compliance of the system with the requirements set out in Section 2.<br /> 4.<br /> A distributor that considers or has reason to consider, on the basis of the information in its possession, a high-risk AI <br /> system which it has made available on the market not to be in conformity with the requirements set out in Section 2, shall <br /> take the corrective actions necessary to bring that system into conformity with those requirements, to withdraw it or recall <br /> it, or shall ensure that the provider, the importer or any relevant operator, as appropriate, takes those corrective actions. <br /> Where the high-risk AI system presents a risk within the meaning of Article 79(1), the distributor shall immediately inform <br /> the provider or importer of the system and the authorities competent for the high-risk AI system concerned, giving details, <br /> in particular, of the non-compliance and of any corrective actions taken.<br /> 5.<br /> Upon a reasoned request from a relevant competent authority, distributors of a high-risk AI system shall provide that <br /> authority with all the information and documentation regarding their actions pursuant to paragraphs 1 to 4 necessary to <br /> demonstrate the conformity of that system with the requirements set out in Section 2.<br /> 6.<br /> Distributors shall cooperate with the relevant competent authorities in any action those authorities take in relation to <br /> a high-risk AI system made available on the market by the distributors, in particular to reduce or mitigate the risk posed by <br /> it.<br /> EN<br /> OJ L, 12.7.2024<br /> 66/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>Article 25<br /> Responsibilities along the AI value chain<br /> 1.</p>
Show original text

Article 25: Who is Responsible for High-Risk AI Systems

  1. Distributors, importers, deployers, and other third parties become responsible as providers of high-risk AI systems if they:

(a) Add their name or trademark to a high-risk AI system that is already being sold or used, unless a contract says otherwise;

(b) Make significant changes to a high-risk AI system that is already on the market or in use, and it remains classified as high-risk;

(c) Change how an AI system is meant to be used—including general-purpose AI systems—so that a system that was not previously classified as high-risk becomes one.

  1. When any of these situations happen, the original provider is no longer considered responsible for that specific AI system. However, the original provider must work closely with the new provider and share all necessary information, technical access, and support needed to meet the requirements of this regulation, especially regarding safety checks for high-risk AI systems.
<p>J L, 12.7.2024<br /> 66/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>Article 25<br /> Responsibilities along the AI value chain<br /> 1.<br /> Any distributor, importer, deployer or other third-party shall be considered to be a provider of a high-risk AI system <br /> for the purposes of this Regulation and shall be subject to the obligations of the provider under Article 16, in any of the <br /> following circumstances:<br /> (a) they put their name or trademark on a high-risk AI system already placed on the market or put into service, without <br /> prejudice to contractual arrangements stipulating that the obligations are otherwise allocated;<br /> (b) they make a substantial modification to a high-risk AI system that has already been placed on the market or has already <br /> been put into service in such a way that it remains a high-risk AI system pursuant to Article 6;<br /> (c) they modify the intended purpose of an AI system, including a general-purpose AI system, which has not been classified <br /> as high-risk and has already been placed on the market or put into service in such a way that the AI system concerned <br /> becomes a high-risk AI system in accordance with Article 6.<br /> 2.<br /> Where the circumstances referred to in paragraph 1 occur, the provider that initially placed the AI system on the <br /> market or put it into service shall no longer be considered to be a provider of that specific AI system for the purposes of <br /> this Regulation. That initial provider shall closely cooperate with new providers and shall make available the necessary <br /> information and provide the reasonably expected technical access and other assistance that are required for the fulfilment of <br /> the obligations set out in this Regulation, in particular regarding the compliance with the conformity assessment of <br /> high-risk AI systems.</p>
Show original text

Providers must share necessary information and provide reasonable technical support to meet the requirements of this Regulation, especially for checking that high-risk AI systems follow the rules. This does not apply if the original provider clearly stated the AI system will not become a high-risk system and therefore does not need to provide documentation.

For high-risk AI systems that are safety parts of products covered by EU laws listed in Annex I Section A, the product manufacturer is considered the AI system provider. The manufacturer must follow the rules in Article 16 if: (a) the high-risk AI system is sold with the product under the manufacturer's name or trademark, or (b) the high-risk AI system is used under the manufacturer's name or trademark after the product is already on the market.

The provider of a high-risk AI system and any third party supplying AI systems, tools, services, components, or processes used in that high-risk AI system must have a written agreement. This agreement must specify what information, capabilities, technical access, and support are needed—based on current industry standards—so the provider can fully meet all Regulation requirements. This does not apply to third parties who publicly share tools, services, processes, or components (except general-purpose AI models) under a free and open-source license.

<p>shall make available the necessary <br /> information and provide the reasonably expected technical access and other assistance that are required for the fulfilment of <br /> the obligations set out in this Regulation, in particular regarding the compliance with the conformity assessment of <br /> high-risk AI systems. This paragraph shall not apply in cases where the initial provider has clearly specified that its AI <br /> system is not to be changed into a high-risk AI system and therefore does not fall under the obligation to hand over the <br /> documentation.<br /> 3.<br /> In the case of high-risk AI systems that are safety components of products covered by the Union harmonisation <br /> legislation listed in Section A of Annex I, the product manufacturer shall be considered to be the provider of the high-risk <br /> AI system, and shall be subject to the obligations under Article 16 under either of the following circumstances:<br /> (a) the high-risk AI system is placed on the market together with the product under the name or trademark of the product <br /> manufacturer;<br /> (b) the high-risk AI system is put into service under the name or trademark of the product manufacturer after the product <br /> has been placed on the market.<br /> 4.<br /> The provider of a high-risk AI system and the third party that supplies an AI system, tools, services, components, or <br /> processes that are used or integrated in a high-risk AI system shall, by written agreement, specify the necessary information, <br /> capabilities, technical access and other assistance based on the generally acknowledged state of the art, in order to enable <br /> the provider of the high-risk AI system to fully comply with the obligations set out in this Regulation. This paragraph shall <br /> not apply to third parties making accessible to the public tools, services, processes, or components, other than <br /> general-purpose AI models, under a free and open-source licence.</p>
Show original text

Third parties that share free and open-source tools, services, or components (not general-purpose AI models) are not required to follow these rules. The AI Office can create and suggest optional contract templates for companies that provide high-risk AI systems and those that supply tools, services, or components used in these systems. These templates will consider different industry needs and be published online for free. However, these rules do not override the need to protect intellectual property rights, confidential business information, and trade secrets under EU and national laws. Companies that use high-risk AI systems must take proper technical and organizational steps to use them correctly according to the instructions provided. They must assign qualified, trained people with proper authority to oversee the AI systems. These requirements do not prevent companies from following other laws or organizing their own resources to implement the oversight measures recommended by the AI system provider.

<p>fully comply with the obligations set out in this Regulation. This paragraph shall <br /> not apply to third parties making accessible to the public tools, services, processes, or components, other than <br /> general-purpose AI models, under a free and open-source licence.<br /> The AI Office may develop and recommend voluntary model terms for contracts between providers of high-risk AI systems <br /> and third parties that supply tools, services, components or processes that are used for or integrated into high-risk AI <br /> systems. When developing those voluntary model terms, the AI Office shall take into account possible contractual <br /> requirements applicable in specific sectors or business cases. The voluntary model terms shall be published and be available <br /> free of charge in an easily usable electronic format.<br /> 5.<br /> Paragraphs 2 and 3 are without prejudice to the need to observe and protect intellectual property rights, confidential <br /> business information and trade secrets in accordance with Union and national law.<br /> Article 26<br /> Obligations of deployers of high-risk AI systems<br /> 1.<br /> Deployers of high-risk AI systems shall take appropriate technical and organisational measures to ensure they use <br /> such systems in accordance with the instructions for use accompanying the systems, pursuant to paragraphs 3 and 6.<br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 67/144</p> <p>2.<br /> Deployers shall assign human oversight to natural persons who have the necessary competence, training and <br /> authority, as well as the necessary support.<br /> 3.<br /> The obligations set out in paragraphs 1 and 2, are without prejudice to other deployer obligations under Union or <br /> national law and to the deployer’s freedom to organise its own resources and activities for the purpose of implementing the <br /> human oversight measures indicated by the provider.<br /> 4.</p>
Show original text

Deployers of high-risk AI systems have several key responsibilities:

  1. These obligations do not override other legal requirements under EU or national law, and deployers can organize their own resources and activities to implement the oversight measures recommended by the AI provider.

  2. If deployers control the input data, they must ensure that the data is relevant and representative for the intended use of the high-risk AI system.

  3. Deployers must monitor how the high-risk AI system operates according to the provider's instructions. If they believe the system could pose a risk during normal use, they must immediately inform the provider, distributor, and relevant authorities, then stop using the system. If they discover a serious incident, they must inform the provider first, then the importer, distributor, and authorities. If they cannot reach the provider, alternative notification rules apply. This requirement does not apply to sensitive operational data from law enforcement deployers.

  4. For financial institutions subject to EU financial services regulations, monitoring obligations are considered met if they comply with their internal governance rules under financial services law.

<p>1 and 2, are without prejudice to other deployer obligations under Union or <br /> national law and to the deployer’s freedom to organise its own resources and activities for the purpose of implementing the <br /> human oversight measures indicated by the provider.<br /> 4.<br /> Without prejudice to paragraphs 1 and 2, to the extent the deployer exercises control over the input data, that <br /> deployer shall ensure that input data is relevant and sufficiently representative in view of the intended purpose of the <br /> high-risk AI system.<br /> 5.<br /> Deployers shall monitor the operation of the high-risk AI system on the basis of the instructions for use and, where <br /> relevant, inform providers in accordance with Article 72. Where deployers have reason to consider that the use of the <br /> high-risk AI system in accordance with the instructions may result in that AI system presenting a risk within the meaning of <br /> Article 79(1), they shall, without undue delay, inform the provider or distributor and the relevant market surveillance <br /> authority, and shall suspend the use of that system. Where deployers have identified a serious incident, they shall also <br /> immediately inform first the provider, and then the importer or distributor and the relevant market surveillance authorities <br /> of that incident. If the deployer is not able to reach the provider, Article 73 shall apply mutatis mutandis. This obligation <br /> shall not cover sensitive operational data of deployers of AI systems which are law enforcement authorities.<br /> For deployers that are financial institutions subject to requirements regarding their internal governance, arrangements or <br /> processes under Union financial services law, the monitoring obligation set out in the first subparagraph shall be deemed to <br /> be fulfilled by complying with the rules on internal governance arrangements, processes and mechanisms pursuant to the <br /> relevant financial service law.<br /> 6.</p>
Show original text

Organizations using high-risk AI systems must follow these rules:

  1. Financial institutions can meet monitoring requirements by following their existing financial services rules.

  2. Organizations must keep automatic logs created by high-risk AI systems for at least six months (or longer if required by law), as long as they control those logs. Financial institutions should keep these logs with their other required documents.

  3. Before using a high-risk AI system at work, employers must tell workers and their representatives about it. This notification must follow applicable laws and workplace practices.

  4. Government agencies and EU institutions using high-risk AI systems must register them in the EU database. If a system is not registered in this database, they cannot use it and must inform the provider or distributor.

<p>or <br /> processes under Union financial services law, the monitoring obligation set out in the first subparagraph shall be deemed to <br /> be fulfilled by complying with the rules on internal governance arrangements, processes and mechanisms pursuant to the <br /> relevant financial service law.<br /> 6.<br /> Deployers of high-risk AI systems shall keep the logs automatically generated by that high-risk AI system to the extent <br /> such logs are under their control, for a period appropriate to the intended purpose of the high-risk AI system, of at least six <br /> months, unless provided otherwise in applicable Union or national law, in particular in Union law on the protection of <br /> personal data.<br /> Deployers that are financial institutions subject to requirements regarding their internal governance, arrangements or <br /> processes under Union financial services law shall maintain the logs as part of the documentation kept pursuant to the <br /> relevant Union financial service law.<br /> 7.<br /> Before putting into service or using a high-risk AI system at the workplace, deployers who are employers shall inform <br /> workers’ representatives and the affected workers that they will be subject to the use of the high-risk AI system. This <br /> information shall be provided, where applicable, in accordance with the rules and procedures laid down in Union and <br /> national law and practice on information of workers and their representatives.<br /> 8.<br /> Deployers of high-risk AI systems that are public authorities, or Union institutions, bodies, offices or agencies shall <br /> comply with the registration obligations referred to in Article 49. When such deployers find that the high-risk AI system <br /> that they envisage using has not been registered in the EU database referred to in Article 71, they shall not use that system <br /> and shall inform the provider or the distributor.<br /> 9.</p>
Show original text

Deployers of high-risk AI systems must check if their system is registered in the EU database mentioned in Article 71. If it is not registered, they cannot use it and must notify the provider or distributor.

When using high-risk AI systems, deployers must use the information from Article 13 to complete data protection impact assessments as required by EU Regulation 2016/679 (Article 35) or Directive 2016/680 (Article 27).

For criminal investigations involving targeted searches for suspects or convicted individuals, deployers of high-risk AI systems for post-remote biometric identification must obtain prior approval from a court or administrative authority with binding decision-making power and judicial review rights. This requirement does not apply when the system is used for initial suspect identification based on objective facts directly connected to the crime. Each use must be limited to what is necessary for investigating that specific crime.

If approval is denied, the deployer must immediately stop using the biometric identification system and delete all personal data collected through that system.

<p>When such deployers find that the high-risk AI system <br /> that they envisage using has not been registered in the EU database referred to in Article 71, they shall not use that system <br /> and shall inform the provider or the distributor.<br /> 9.<br /> Where applicable, deployers of high-risk AI systems shall use the information provided under Article 13 of this <br /> Regulation to comply with their obligation to carry out a data protection impact assessment under Article 35 of Regulation <br /> (EU) 2016/679 or Article 27 of Directive (EU) 2016/680.<br /> 10.<br /> Without prejudice to Directive (EU) 2016/680, in the framework of an investigation for the targeted search of <br /> a person suspected or convicted of having committed a criminal offence, the deployer of a high-risk AI system for <br /> post-remote biometric identification shall request an authorisation, ex ante, or without undue delay and no later than 48 <br /> hours, by a judicial authority or an administrative authority whose decision is binding and subject to judicial review, for the <br /> use of that system, except when it is used for the initial identification of a potential suspect based on objective and verifiable <br /> facts directly linked to the offence. Each use shall be limited to what is strictly necessary for the investigation of a specific <br /> criminal offence.<br /> If the authorisation requested pursuant to the first subparagraph is rejected, the use of the post-remote biometric <br /> identification system linked to that requested authorisation shall be stopped with immediate effect and the personal data <br /> linked to the use of the high-risk AI system for which the authorisation was requested shall be deleted.<br /> EN<br /> OJ L, 12.7.</p>
Show original text

When a high-risk AI system is no longer authorized, it must be stopped immediately and all personal data connected to it must be deleted.

High-risk AI systems that identify people from a distance using biometric data (like facial recognition) cannot be used by law enforcement in a general, untargeted way. They can only be used if there is a link to a crime, an ongoing criminal case, a real threat of crime, or a search for a missing person. Law enforcement cannot make decisions that negatively affect a person based solely on the results of these biometric identification systems.

Every use of these high-risk AI systems must be recorded in police files and made available to market surveillance and data protection authorities when requested, though sensitive law enforcement information can be kept private.

Organizations using these systems must submit yearly reports to market surveillance and data protection authorities about how they use them. These reports can combine information from multiple uses.

Individual countries can create stricter rules about using these biometric identification systems if they choose to do so.

<p>system linked to that requested authorisation shall be stopped with immediate effect and the personal data <br /> linked to the use of the high-risk AI system for which the authorisation was requested shall be deleted.<br /> EN<br /> OJ L, 12.7.2024<br /> 68/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>In no case shall such high-risk AI system for post-remote biometric identification be used for law enforcement purposes in <br /> an untargeted way, without any link to a criminal offence, a criminal proceeding, a genuine and present or genuine and <br /> foreseeable threat of a criminal offence, or the search for a specific missing person. It shall be ensured that no decision that <br /> produces an adverse legal effect on a person may be taken by the law enforcement authorities based solely on the output of <br /> such post-remote biometric identification systems.<br /> This paragraph is without prejudice to Article 9 of Regulation (EU) 2016/679 and Article 10 of Directive (EU) 2016/680 <br /> for the processing of biometric data.<br /> Regardless of the purpose or deployer, each use of such high-risk AI systems shall be documented in the relevant police file <br /> and shall be made available to the relevant market surveillance authority and the national data protection authority upon <br /> request, excluding the disclosure of sensitive operational data related to law enforcement. This subparagraph shall be <br /> without prejudice to the powers conferred by Directive (EU) 2016/680 on supervisory authorities.<br /> Deployers shall submit annual reports to the relevant market surveillance and national data protection authorities on their <br /> use of post-remote biometric identification systems, excluding the disclosure of sensitive operational data related to law <br /> enforcement. The reports may be aggregated to cover more than one deployment.<br /> Member States may introduce, in accordance with Union law, more restrictive laws on the use of post-remote biometric <br /> identification systems.<br /> 11.</p>
Show original text

Organizations using high-risk AI systems must follow specific rules. First, they must tell people when a high-risk AI system is being used to make decisions about them. For law enforcement AI systems, additional privacy rules apply. Organizations must work with government authorities to ensure they follow these regulations. Before using certain high-risk AI systems, public organizations and private companies providing public services must assess how the AI system might affect people's fundamental rights. This assessment must include: a description of how the AI system will be used; how often and for how long it will be used; which groups of people will be affected; and what risks of harm might occur to those people. Member States can create stricter laws about using AI systems for biometric identification if they choose to do so.

<p>operational data related to law <br /> enforcement. The reports may be aggregated to cover more than one deployment.<br /> Member States may introduce, in accordance with Union law, more restrictive laws on the use of post-remote biometric <br /> identification systems.<br /> 11.<br /> Without prejudice to Article 50 of this Regulation, deployers of high-risk AI systems referred to in Annex III that <br /> make decisions or assist in making decisions related to natural persons shall inform the natural persons that they are subject <br /> to the use of the high-risk AI system. For high-risk AI systems used for law enforcement purposes Article 13 of Directive <br /> (EU) 2016/680 shall apply.<br /> 12.<br /> Deployers shall cooperate with the relevant competent authorities in any action those authorities take in relation to <br /> the high-risk AI system in order to implement this Regulation.<br /> Article 27<br /> Fundamental rights impact assessment for high-risk AI systems<br /> 1.<br /> Prior to deploying a high-risk AI system referred to in Article 6(2), with the exception of high-risk AI systems <br /> intended to be used in the area listed in point 2 of Annex III, deployers that are bodies governed by public law, or are private <br /> entities providing public services, and deployers of high-risk AI systems referred to in points 5 (b) and (c) of Annex III, shall <br /> perform an assessment of the impact on fundamental rights that the use of such system may produce. For that purpose, <br /> deployers shall perform an assessment consisting of:<br /> (a) a description of the deployer’s processes in which the high-risk AI system will be used in line with its intended purpose;<br /> (b) a description of the period of time within which, and the frequency with which, each high-risk AI system is intended to <br /> be used;<br /> (c) the categories of natural persons and groups likely to be affected by its use in the specific context;<br /> (d) the specific risks of harm likely to have an impact on the categories of natural persons or</p>
Show original text

High-risk AI systems must include a fundamental rights impact assessment that covers: (a) how the AI system will be used; (b) which groups of people will be affected by it; (c) what specific harms could occur to those groups, based on information from the AI provider; (d) how human oversight will be implemented according to the user instructions; and (e) what actions will be taken if risks materialize, including complaint procedures and internal management plans. This assessment is required before first use of the high-risk AI system. Deployers can reuse previous assessments from the provider for similar cases. If any information becomes outdated during use, the deployer must update it. After completing the assessment, the deployer must report the results to the market surveillance authority using a required template. Some deployers may be exempt from this reporting requirement under Article 46(1).

<p>-risk AI system is intended to <br /> be used;<br /> (c) the categories of natural persons and groups likely to be affected by its use in the specific context;<br /> (d) the specific risks of harm likely to have an impact on the categories of natural persons or groups of persons identified <br /> pursuant to point (c) of this paragraph, taking into account the information given by the provider pursuant to <br /> Article 13;<br /> (e) a description of the implementation of human oversight measures, according to the instructions for use;<br /> (f) the measures to be taken in the case of the materialisation of those risks, including the arrangements for internal <br /> governance and complaint mechanisms.<br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 69/144</p> <p>2.<br /> The obligation laid down in paragraph 1 applies to the first use of the high-risk AI system. The deployer may, in <br /> similar cases, rely on previously conducted fundamental rights impact assessments or existing impact assessments carried <br /> out by provider. If, during the use of the high-risk AI system, the deployer considers that any of the elements listed in <br /> paragraph 1 has changed or is no longer up to date, the deployer shall take the necessary steps to update the information.<br /> 3.<br /> Once the assessment referred to in paragraph 1 of this Article has been performed, the deployer shall notify the <br /> market surveillance authority of its results, submitting the filled-out template referred to in paragraph 5 of this Article as <br /> part of the notification. In the case referred to in Article 46(1), deployers may be exempt from that obligation to notify.<br /> 4.</p>
Show original text

Deployers must submit a completed template as part of their notification, though some deployers may be exempt from this requirement under Article 46(1). If the same obligations are already covered by a data protection impact assessment under EU Regulation 2016/679 or Directive 2016/680, the fundamental rights impact assessment should add to that existing assessment rather than duplicate it. The AI Office will create a template and automated tool to help deployers meet their obligations in a simpler way. Each Member State must appoint at least one notifying authority to assess, designate, and monitor conformity assessment bodies. These procedures should be developed cooperatively across all Member States. Member States can choose to have a national accreditation body handle this assessment and monitoring instead. Notifying authorities must be structured to avoid conflicts of interest and ensure their work remains objective and impartial. Importantly, the people who decide whether to notify conformity assessment bodies must be different from those who evaluated them.

<p>of its results, submitting the filled-out template referred to in paragraph 5 of this Article as <br /> part of the notification. In the case referred to in Article 46(1), deployers may be exempt from that obligation to notify.<br /> 4.<br /> If any of the obligations laid down in this Article is already met through the data protection impact assessment <br /> conducted pursuant to Article 35 of Regulation (EU) 2016/679 or Article 27 of Directive (EU) 2016/680, the fundamental <br /> rights impact assessment referred to in paragraph 1 of this Article shall complement that data protection impact <br /> assessment.<br /> 5.<br /> The AI Office shall develop a template for a questionnaire, including through an automated tool, to facilitate deployers <br /> in complying with their obligations under this Article in a simplified manner.<br /> SECTION 4<br /> Notifying authorities and notified bodies<br /> Article 28<br /> Notifying authorities<br /> 1.<br /> Each Member State shall designate or establish at least one notifying authority responsible for setting up and carrying <br /> out the necessary procedures for the assessment, designation and notification of conformity assessment bodies and for their <br /> monitoring. Those procedures shall be developed in cooperation between the notifying authorities of all Member States.<br /> 2.<br /> Member States may decide that the assessment and monitoring referred to in paragraph 1 is to be carried out by <br /> a national accreditation body within the meaning of, and in accordance with, Regulation (EC) No 765/2008.<br /> 3.<br /> Notifying authorities shall be established, organised and operated in such a way that no conflict of interest arises with <br /> conformity assessment bodies, and that the objectivity and impartiality of their activities are safeguarded.<br /> 4.<br /> Notifying authorities shall be organised in such a way that decisions relating to the notification of conformity <br /> assessment bodies are taken by competent persons different from those who carried out the assessment of those bodies.<br /> 5.</p>
Show original text

Notifying authorities must organize their decision-making so that different people evaluate conformity assessment bodies than those who make the final approval decisions. Notifying authorities cannot offer services or consulting that compete with conformity assessment bodies. They must keep all information they receive confidential according to Article 78. Notifying authorities need enough qualified staff with expertise in areas like information technology, artificial intelligence, law, and fundamental rights protection. When conformity assessment bodies want to be officially recognized, they must apply to the notifying authority in their country. The application must include details about what conformity assessment activities they perform, which assessment modules they use, and what types of AI systems they can evaluate. They should also provide an accreditation certificate from their national accreditation body proving they meet the required standards in Article 31. Any documents showing they are already approved under other EU laws should be included with the application.

<p>ity of their activities are safeguarded.<br /> 4.<br /> Notifying authorities shall be organised in such a way that decisions relating to the notification of conformity <br /> assessment bodies are taken by competent persons different from those who carried out the assessment of those bodies.<br /> 5.<br /> Notifying authorities shall offer or provide neither any activities that conformity assessment bodies perform, nor any <br /> consultancy services on a commercial or competitive basis.<br /> 6.<br /> Notifying authorities shall safeguard the confidentiality of the information that they obtain, in accordance with <br /> Article 78.<br /> 7.<br /> Notifying authorities shall have an adequate number of competent personnel at their disposal for the proper <br /> performance of their tasks. Competent personnel shall have the necessary expertise, where applicable, for their function, in <br /> fields such as information technologies, AI and law, including the supervision of fundamental rights.<br /> Article 29<br /> Application of a conformity assessment body for notification<br /> 1.<br /> Conformity assessment bodies shall submit an application for notification to the notifying authority of the Member <br /> State in which they are established.<br /> EN<br /> OJ L, 12.7.2024<br /> 70/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>2.<br /> The application for notification shall be accompanied by a description of the conformity assessment activities, the <br /> conformity assessment module or modules and the types of AI systems for which the conformity assessment body claims <br /> to be competent, as well as by an accreditation certificate, where one exists, issued by a national accreditation body attesting <br /> that the conformity assessment body fulfils the requirements laid down in Article 31.<br /> Any valid document related to existing designations of the applicant notified body under any other Union harmonisation <br /> legislation shall be added.<br /> 3.</p>
Show original text

An accreditation body must confirm that the conformity assessment body meets the requirements in Article 31. Any existing documents showing the applicant's current status as a notified body under other EU laws should be included. If the conformity assessment body cannot provide an accreditation certificate, it must give the notifying authority all necessary documents to verify, recognize, and regularly check that it meets Article 31 requirements. Notified bodies designated under other EU laws can use their existing documents and certificates to support their application under this Regulation. The notified body must update its documentation whenever important changes occur so the responsible authority can monitor and verify ongoing compliance with Article 31 requirements. Only conformity assessment bodies that meet Article 31 requirements can be notified. Notifying authorities must inform the Commission and other Member States about each conformity assessment body using the Commission's electronic notification system. The notification must include full details about the conformity assessment activities, modules used, types of AI systems involved, and proof of competence. If the notification is not based on an accreditation certificate, the notifying authority must provide the Commission and other Member States with documents proving the body's competence and showing that it will be regularly monitored and continue to meet Article 31 requirements.

<p>accreditation body attesting <br /> that the conformity assessment body fulfils the requirements laid down in Article 31.<br /> Any valid document related to existing designations of the applicant notified body under any other Union harmonisation <br /> legislation shall be added.<br /> 3.<br /> Where the conformity assessment body concerned cannot provide an accreditation certificate, it shall provide the <br /> notifying authority with all the documentary evidence necessary for the verification, recognition and regular monitoring of <br /> its compliance with the requirements laid down in Article 31.<br /> 4.<br /> For notified bodies which are designated under any other Union harmonisation legislation, all documents and <br /> certificates linked to those designations may be used to support their designation procedure under this Regulation, as <br /> appropriate. The notified body shall update the documentation referred to in paragraphs 2 and 3 of this Article whenever <br /> relevant changes occur, in order to enable the authority responsible for notified bodies to monitor and verify continuous <br /> compliance with all the requirements laid down in Article 31.<br /> Article 30<br /> Notification procedure<br /> 1.<br /> Notifying authorities may notify only conformity assessment bodies which have satisfied the requirements laid down <br /> in Article 31.<br /> 2.<br /> Notifying authorities shall notify the Commission and the other Member States, using the electronic notification tool <br /> developed and managed by the Commission, of each conformity assessment body referred to in paragraph 1.<br /> 3.<br /> The notification referred to in paragraph 2 of this Article shall include full details of the conformity assessment <br /> activities, the conformity assessment module or modules, the types of AI systems concerned, and the relevant attestation of <br /> competence. Where a notification is not based on an accreditation certificate as referred to in Article 29(2), the notifying <br /> authority shall provide the Commission and the other Member States with documentary evidence which attests to the <br /> competence of the conformity assessment body and to the arrangements in place to ensure that that body will be <br /> monitored regularly and will continue to satisfy the requirements laid down in Article 31.<br /> 4.</p>
Show original text

Member States must provide written proof that conformity assessment bodies are qualified and will be regularly monitored to meet the requirements in Article 31. A conformity assessment body can only operate as a notified body if the Commission and other Member States do not object within two weeks (if an accreditation certificate is provided) or two months (if documentary evidence is provided). If objections are raised, the Commission must consult with the relevant Member States and the assessment body, then decide whether to approve the authorization and inform both the Member State and the assessment body. Notified bodies must be legally established in a Member State with legal status. They must meet organizational, quality management, resource, and process requirements needed for their work, plus appropriate cybersecurity standards. Their structure, responsibilities, reporting lines, and operations must build confidence in their performance and the accuracy of their conformity assessment activities. Notified bodies must be independent from the providers of high-risk AI systems that they assess.

<p>States with documentary evidence which attests to the <br /> competence of the conformity assessment body and to the arrangements in place to ensure that that body will be <br /> monitored regularly and will continue to satisfy the requirements laid down in Article 31.<br /> 4.<br /> The conformity assessment body concerned may perform the activities of a notified body only where no objections <br /> are raised by the Commission or the other Member States within two weeks of a notification by a notifying authority where <br /> it includes an accreditation certificate referred to in Article 29(2), or within two months of a notification by the notifying <br /> authority where it includes documentary evidence referred to in Article 29(3).<br /> 5.<br /> Where objections are raised, the Commission shall, without delay, enter into consultations with the relevant Member <br /> States and the conformity assessment body. In view thereof, the Commission shall decide whether the authorisation is <br /> justified. The Commission shall address its decision to the Member State concerned and to the relevant conformity <br /> assessment body.<br /> Article 31<br /> Requirements relating to notified bodies<br /> 1.<br /> A notified body shall be established under the national law of a Member State and shall have legal personality.<br /> 2.<br /> Notified bodies shall satisfy the organisational, quality management, resources and process requirements that are <br /> necessary to fulfil their tasks, as well as suitable cybersecurity requirements.<br /> 3.<br /> The organisational structure, allocation of responsibilities, reporting lines and operation of notified bodies shall <br /> ensure confidence in their performance, and in the results of the conformity assessment activities that the notified bodies <br /> conduct.<br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 71/144</p> <p>4.<br /> Notified bodies shall be independent of the provider of a high-risk AI system in relation to which they perform <br /> conformity assessment activities.</p>
Show original text

Notified bodies are organizations that check whether high-risk AI systems meet required standards. These bodies must be independent and cannot have financial ties to the AI system providers they assess, their competitors, or other companies with interests in these systems. However, they can use high-risk AI systems for their own operations or personal use. Staff members at notified bodies, including management and assessment personnel, cannot be involved in designing, developing, marketing, or using the high-risk AI systems they evaluate. They also cannot work as consultants for these activities, as this would create conflicts of interest. Notified bodies must be structured and run in ways that protect their independence and fairness. They need written procedures and policies that ensure all staff, committees, contractors, and partners follow principles of impartiality in their work. Finally, notified bodies must keep confidential all information they receive while assessing AI systems. This information can only be shared if the law requires it.

<p>data.europa.eu/eli/reg/2024/1689/oj<br /> 71/144</p> <p>4.<br /> Notified bodies shall be independent of the provider of a high-risk AI system in relation to which they perform <br /> conformity assessment activities. Notified bodies shall also be independent of any other operator having an economic <br /> interest in high-risk AI systems assessed, as well as of any competitors of the provider. This shall not preclude the use of <br /> assessed high-risk AI systems that are necessary for the operations of the conformity assessment body, or the use of such <br /> high-risk AI systems for personal purposes.<br /> 5.<br /> Neither a conformity assessment body, its top-level management nor the personnel responsible for carrying out its <br /> conformity assessment tasks shall be directly involved in the design, development, marketing or use of high-risk AI systems, <br /> nor shall they represent the parties engaged in those activities. They shall not engage in any activity that might conflict with <br /> their independence of judgement or integrity in relation to conformity assessment activities for which they are notified. <br /> This shall, in particular, apply to consultancy services.<br /> 6.<br /> Notified bodies shall be organised and operated so as to safeguard the independence, objectivity and impartiality of <br /> their activities. Notified bodies shall document and implement a structure and procedures to safeguard impartiality and to <br /> promote and apply the principles of impartiality throughout their organisation, personnel and assessment activities.<br /> 7.<br /> Notified bodies shall have documented procedures in place ensuring that their personnel, committees, subsidiaries, <br /> subcontractors and any associated body or personnel of external bodies maintain, in accordance with Article 78, the <br /> confidentiality of the information which comes into their possession during the performance of conformity assessment <br /> activities, except when its disclosure is required by law.</p>
Show original text

Notified bodies and their staff must keep confidential all information they receive while assessing whether products meet requirements, unless the law requires disclosure. Staff must maintain professional secrecy about information obtained during their work, except when reporting to the authorities of their Member State.

Notified bodies must have procedures that consider the size, sector, structure, and complexity of the AI systems they assess.

Notified bodies must have appropriate liability insurance for their assessment work, unless the Member State where they operate assumes liability or is directly responsible for the assessment.

Notified bodies must perform all their tasks with high professional integrity and possess the necessary expertise in their field, whether they do the work themselves or hire others to do it under their supervision.

Notified bodies must have enough internal staff to effectively oversee work done by external parties on their behalf. They must have permanent access to qualified administrative, technical, legal, and scientific personnel with experience in relevant AI systems, data, data computing, and applicable requirements.

Notified bodies must participate in coordination activities and take part in European standardisation organizations, or stay informed about relevant standards.

<p>ors and any associated body or personnel of external bodies maintain, in accordance with Article 78, the <br /> confidentiality of the information which comes into their possession during the performance of conformity assessment <br /> activities, except when its disclosure is required by law. The staff of notified bodies shall be bound to observe professional <br /> secrecy with regard to all information obtained in carrying out their tasks under this Regulation, except in relation to the <br /> notifying authorities of the Member State in which their activities are carried out.<br /> 8.<br /> Notified bodies shall have procedures for the performance of activities which take due account of the size of <br /> a provider, the sector in which it operates, its structure, and the degree of complexity of the AI system concerned.<br /> 9.<br /> Notified bodies shall take out appropriate liability insurance for their conformity assessment activities, unless liability <br /> is assumed by the Member State in which they are established in accordance with national law or that Member State is itself <br /> directly responsible for the conformity assessment.<br /> 10.<br /> Notified bodies shall be capable of carrying out all their tasks under this Regulation with the highest degree of <br /> professional integrity and the requisite competence in the specific field, whether those tasks are carried out by notified <br /> bodies themselves or on their behalf and under their responsibility.<br /> 11.<br /> Notified bodies shall have sufficient internal competences to be able effectively to evaluate the tasks conducted by <br /> external parties on their behalf. The notified body shall have permanent availability of sufficient administrative, technical, <br /> legal and scientific personnel who possess experience and knowledge relating to the relevant types of AI systems, data and <br /> data computing, and relating to the requirements set out in Section 2.<br /> 12.<br /> Notified bodies shall participate in coordination activities as referred to in Article 38. They shall also take part <br /> directly, or be represented in, European standardisation organisations, or ensure that they are aware and up to date in <br /> respect of relevant standards.</p>
Show original text

Organizations must participate in coordination activities as described in Article 38. They should also join European standardization organizations directly or through representation, and stay informed about relevant standards.

Article 32: Conformity Presumption for Notified Bodies
When a conformity assessment body follows the criteria in approved harmonized standards (published in the Official Journal of the European Union), it is automatically considered to meet the requirements in Article 31, as long as those standards cover those requirements.

Article 33: Subsidiaries and Subcontracting
1. If a notified body hires a subcontractor or uses a subsidiary to perform conformity assessment tasks, it must ensure they meet Article 31 requirements and inform the notifying authority.
2. Notified bodies are fully responsible for all work done by subcontractors or subsidiaries.
3. Subcontracting or subsidiary work requires the provider's approval. Notified bodies must publicly list their subsidiaries.
4. Records about subcontractor or subsidiary qualifications and their work must be kept available to the notifying authority for five years after the subcontracting ends.

Article 34: Notified Bodies' Operational Duties
1. Notified bodies must check that high-risk AI systems meet conformity standards using the procedures in Article 43.
2. [Text continues]

<p>bodies shall participate in coordination activities as referred to in Article 38. They shall also take part <br /> directly, or be represented in, European standardisation organisations, or ensure that they are aware and up to date in <br /> respect of relevant standards.<br /> Article 32<br /> Presumption of conformity with requirements relating to notified bodies<br /> Where a conformity assessment body demonstrates its conformity with the criteria laid down in the relevant harmonised <br /> standards or parts thereof, the references of which have been published in the Official Journal of the European Union, it shall <br /> be presumed to comply with the requirements set out in Article 31 in so far as the applicable harmonised standards cover <br /> those requirements.<br /> EN<br /> OJ L, 12.7.2024<br /> 72/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>Article 33<br /> Subsidiaries of notified bodies and subcontracting<br /> 1.<br /> Where a notified body subcontracts specific tasks connected with the conformity assessment or has recourse to <br /> a subsidiary, it shall ensure that the subcontractor or the subsidiary meets the requirements laid down in Article 31, and <br /> shall inform the notifying authority accordingly.<br /> 2.<br /> Notified bodies shall take full responsibility for the tasks performed by any subcontractors or subsidiaries.<br /> 3.<br /> Activities may be subcontracted or carried out by a subsidiary only with the agreement of the provider. Notified <br /> bodies shall make a list of their subsidiaries publicly available.<br /> 4.<br /> The relevant documents concerning the assessment of the qualifications of the subcontractor or the subsidiary and <br /> the work carried out by them under this Regulation shall be kept at the disposal of the notifying authority for a period of <br /> five years from the termination date of the subcontracting.<br /> Article 34<br /> Operational obligations of notified bodies<br /> 1.<br /> Notified bodies shall verify the conformity of high-risk AI systems in accordance with the conformity assessment <br /> procedures set out in Article 43.<br /> 2.</p>
Show original text

Article 34: What Notified Bodies Must Do

  1. Notified bodies must check that high-risk AI systems follow the rules in Article 43.

  2. Notified bodies should not create unnecessary work for companies. They must consider the company's size, industry, structure, and how complex the AI system is. This is especially important for small businesses to keep costs and paperwork low. However, notified bodies must still maintain the required quality and safety standards.

  3. Notified bodies must provide all relevant documents (including company documents) to the authority in Article 28 when asked. This helps the authority review, approve, and monitor the notified body's work.

Article 35: ID Numbers and Public Lists

  1. The Commission gives each notified body one identification number, even if it works under multiple EU laws.

  2. The Commission publishes a public list of all notified bodies under this regulation, showing their ID numbers and what they are approved to do. The Commission keeps this list current.

Article 36: Reporting Changes

  1. When a notified body changes, the authority must tell the Commission and other Member States using the electronic system in Article 30(2).

  2. The rules in Articles 29 and 30 apply when a notified body wants to expand what it is approved to do.

<p>termination date of the subcontracting.<br /> Article 34<br /> Operational obligations of notified bodies<br /> 1.<br /> Notified bodies shall verify the conformity of high-risk AI systems in accordance with the conformity assessment <br /> procedures set out in Article 43.<br /> 2.<br /> Notified bodies shall avoid unnecessary burdens for providers when performing their activities, and take due account <br /> of the size of the provider, the sector in which it operates, its structure and the degree of complexity of the high-risk AI <br /> system concerned, in particular in view of minimising administrative burdens and compliance costs for micro- and small <br /> enterprises within the meaning of Recommendation 2003/361/EC. The notified body shall, nevertheless, respect the degree <br /> of rigour and the level of protection required for the compliance of the high-risk AI system with the requirements of this <br /> Regulation.<br /> 3.<br /> Notified bodies shall make available and submit upon request all relevant documentation, including the providers’ <br /> documentation, to the notifying authority referred to in Article 28 to allow that authority to conduct its assessment, <br /> designation, notification and monitoring activities, and to facilitate the assessment outlined in this Section.<br /> Article 35<br /> Identification numbers and lists of notified bodies<br /> 1.<br /> The Commission shall assign a single identification number to each notified body, even where a body is notified under <br /> more than one Union act.<br /> 2.<br /> The Commission shall make publicly available the list of the bodies notified under this Regulation, including their <br /> identification numbers and the activities for which they have been notified. The Commission shall ensure that the list is kept <br /> up to date.<br /> Article 36<br /> Changes to notifications<br /> 1.<br /> The notifying authority shall notify the Commission and the other Member States of any relevant changes to the <br /> notification of a notified body via the electronic notification tool referred to in Article 30(2).<br /> 2.<br /> The procedures laid down in Articles 29 and 30 shall apply to extensions of the scope of the notification.</p>
Show original text

Notified bodies must report any changes to their notification using the electronic tool described in Article 30(2). When expanding the scope of notification, the procedures in Articles 29 and 30 apply. For other types of changes, paragraphs 3 to 9 apply instead. If a notified body stops performing conformity assessments, it must notify the relevant authority and affected providers as soon as possible, or at least one year in advance if planned. The body's certificates can remain valid for up to nine months after it stops operating, provided another notified body agrees in writing to take over responsibility for the high-risk AI systems covered by those certificates. The new notified body must complete a full assessment of these systems within the nine-month period before issuing new certificates. Once a notified body ceases operations, the notifying authority must withdraw its designation. If a notifying authority has good reason to believe a notified body no longer meets the requirements in Article 31 or is not fulfilling its obligations, it must promptly and thoroughly investigate. The authority must inform the notified body of the concerns and allow it to respond.

<p>of any relevant changes to the <br /> notification of a notified body via the electronic notification tool referred to in Article 30(2).<br /> 2.<br /> The procedures laid down in Articles 29 and 30 shall apply to extensions of the scope of the notification.<br /> For changes to the notification other than extensions of its scope, the procedures laid down in paragraphs (3) to (9) shall <br /> apply.<br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 73/144</p> <p>3.<br /> Where a notified body decides to cease its conformity assessment activities, it shall inform the notifying authority and <br /> the providers concerned as soon as possible and, in the case of a planned cessation, at least one year before ceasing its <br /> activities. The certificates of the notified body may remain valid for a period of nine months after cessation of the notified <br /> body’s activities, on condition that another notified body has confirmed in writing that it will assume responsibilities for the <br /> high-risk AI systems covered by those certificates. The latter notified body shall complete a full assessment of the high-risk <br /> AI systems affected by the end of that nine-month-period before issuing new certificates for those systems. Where the <br /> notified body has ceased its activity, the notifying authority shall withdraw the designation.<br /> 4.<br /> Where a notifying authority has sufficient reason to consider that a notified body no longer meets the requirements <br /> laid down in Article 31, or that it is failing to fulfil its obligations, the notifying authority shall without delay investigate the <br /> matter with the utmost diligence. In that context, it shall inform the notified body concerned about the objections raised <br /> and give it the possibility to make its views known.</p>
Show original text

When a notified body fails to meet its obligations, the notifying authority must investigate immediately and thoroughly. It must inform the notified body of the concerns and allow it to respond. If the notified body no longer meets the requirements in Article 31 or is not fulfilling its duties, the notifying authority must restrict, suspend, or withdraw its designation based on how serious the failure is. It must then immediately notify the Commission and other Member States.

If a notified body's designation is suspended, restricted, or withdrawn, it must inform affected providers within 10 days.

When a designation is restricted, suspended, or withdrawn, the notifying authority must keep the notified body's files and make them available to other Member States and market surveillance authorities upon request.

When a designation is restricted, suspended, or withdrawn, the notifying authority must: (a) assess how this affects certificates the notified body issued; (b) send a report to the Commission and other Member States within three months; (c) require the notified body to suspend or withdraw any incorrectly issued certificates within a reasonable timeframe to ensure high-risk AI systems on the market remain compliant; (d) inform the Commission and Member States about which certificates must be suspended or withdrawn; (e) provide the relevant national authorities in the provider's home Member State with all information about suspended or withdrawn certificates so they can take appropriate action.

<p>it is failing to fulfil its obligations, the notifying authority shall without delay investigate the <br /> matter with the utmost diligence. In that context, it shall inform the notified body concerned about the objections raised <br /> and give it the possibility to make its views known. If the notifying authority comes to the conclusion that the notified body <br /> no longer meets the requirements laid down in Article 31 or that it is failing to fulfil its obligations, it shall restrict, suspend <br /> or withdraw the designation as appropriate, depending on the seriousness of the failure to meet those requirements or fulfil <br /> those obligations. It shall immediately inform the Commission and the other Member States accordingly.<br /> 5.<br /> Where its designation has been suspended, restricted, or fully or partially withdrawn, the notified body shall inform <br /> the providers concerned within 10 days.<br /> 6.<br /> In the event of the restriction, suspension or withdrawal of a designation, the notifying authority shall take <br /> appropriate steps to ensure that the files of the notified body concerned are kept, and to make them available to notifying <br /> authorities in other Member States and to market surveillance authorities at their request.<br /> 7.<br /> In the event of the restriction, suspension or withdrawal of a designation, the notifying authority shall:<br /> (a) assess the impact on the certificates issued by the notified body;<br /> (b) submit a report on its findings to the Commission and the other Member States within three months of having notified <br /> the changes to the designation;<br /> (c) require the notified body to suspend or withdraw, within a reasonable period of time determined by the authority, any <br /> certificates which were unduly issued, in order to ensure the continuing conformity of high-risk AI systems on the <br /> market;<br /> (d) inform the Commission and the Member States about certificates the suspension or withdrawal of which it has required;<br /> (e) provide the national competent authorities of the Member State in which the provider has its registered place of <br /> business with all relevant information about the certificates of which it has required the suspension or withdrawal; that <br /> authority shall take the appropriate measures,</p>
Show original text

Providers must inform the national authorities in their Member State about any certificates they have suspended or withdrawn. These authorities will take necessary steps to protect health, safety, and fundamental rights.

Certificates generally remain valid during a suspension or restriction in two cases:

(a) Within one month, the notifying authority confirms there is no risk to health, safety, or fundamental rights from the suspension or restriction, and provides a timeline for fixing the issues.

(b) The notifying authority confirms that no new certificates will be issued, changed, or reissued during the suspension or restriction. The authority must also state whether the notified body can continue monitoring existing certificates. If the notified body cannot do this, the system provider must inform its national authorities within three months that another qualified notified body will temporarily take over monitoring and responsibility for the certificates during the suspension or restriction.

Exception: Certificates that were wrongly issued do not remain valid.

<p>required;<br /> (e) provide the national competent authorities of the Member State in which the provider has its registered place of <br /> business with all relevant information about the certificates of which it has required the suspension or withdrawal; that <br /> authority shall take the appropriate measures, where necessary, to avoid a potential risk to health, safety or fundamental <br /> rights.<br /> 8.<br /> With the exception of certificates unduly issued, and where a designation has been suspended or restricted, the <br /> certificates shall remain valid in one of the following circumstances:<br /> (a) the notifying authority has confirmed, within one month of the suspension or restriction, that there is no risk to health, <br /> safety or fundamental rights in relation to certificates affected by the suspension or restriction, and the notifying <br /> authority has outlined a timeline for actions to remedy the suspension or restriction; or<br /> (b) the notifying authority has confirmed that no certificates relevant to the suspension will be issued, amended or re-issued <br /> during the course of the suspension or restriction, and states whether the notified body has the capability of continuing <br /> to monitor and remain responsible for existing certificates issued for the period of the suspension or restriction; in the <br /> event that the notifying authority determines that the notified body does not have the capability to support existing <br /> certificates issued, the provider of the system covered by the certificate shall confirm in writing to the national <br /> competent authorities of the Member State in which it has its registered place of business, within three months of the <br /> suspension or restriction, that another qualified notified body is temporarily assuming the functions of the notified <br /> body to monitor and remain responsible for the certificates during the period of suspension or restriction.<br /> EN<br /> OJ L, 12.7.2024<br /> 74/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>9.</p>
Show original text

Certificates can remain valid for 9 months after a notified body's designation is withdrawn, provided that: (a) the national competent authority in the Member State where the AI system provider is registered confirms there is no risk to health, safety, or fundamental rights; and (b) another notified body agrees in writing to take over responsibility and complete its assessment within 12 months. The national competent authority can extend this temporary validity for up to three additional months at a time, with a maximum total extension of 12 months. The authority or new notified body must immediately notify the Commission, other Member States, and other notified bodies of any changes. The Commission has the authority to investigate any notified body if there are concerns about its competence or whether it still meets the required standards. When requested, the notifying authority must provide the Commission with all relevant information about the notified body's notification and continued competence.

<p>certificates during the period of suspension or restriction.<br /> EN<br /> OJ L, 12.7.2024<br /> 74/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>9.<br /> With the exception of certificates unduly issued, and where a designation has been withdrawn, the certificates shall <br /> remain valid for a period of nine months under the following circumstances:<br /> (a) the national competent authority of the Member State in which the provider of the high-risk AI system covered by the <br /> certificate has its registered place of business has confirmed that there is no risk to health, safety or fundamental rights <br /> associated with the high-risk AI systems concerned; and<br /> (b) another notified body has confirmed in writing that it will assume immediate responsibility for those AI systems and <br /> completes its assessment within 12 months of the withdrawal of the designation.<br /> In the circumstances referred to in the first subparagraph, the national competent authority of the Member State in which <br /> the provider of the system covered by the certificate has its place of business may extend the provisional validity of the <br /> certificates for additional periods of three months, which shall not exceed 12 months in total.<br /> The national competent authority or the notified body assuming the functions of the notified body affected by the change <br /> of designation shall immediately inform the Commission, the other Member States and the other notified bodies thereof.<br /> Article 37<br /> Challenge to the competence of notified bodies<br /> 1.<br /> The Commission shall, where necessary, investigate all cases where there are reasons to doubt the competence of <br /> a notified body or the continued fulfilment by a notified body of the requirements laid down in Article 31 and of its <br /> applicable responsibilities.<br /> 2.<br /> The notifying authority shall provide the Commission, on request, with all relevant information relating to the <br /> notification or the maintenance of the competence of the notified body concerned.<br /> 3.</p>
Show original text

The notifying authority must provide the Commission with all relevant information about the notification and the notified body's qualifications when requested. The Commission will keep all sensitive information confidential. If the Commission finds that a notified body no longer meets the required standards, it will notify the Member State and ask it to take corrective action, including suspending or withdrawing the notification if needed. If the Member State does not take action, the Commission can suspend, restrict, or withdraw the designation through an implementing act. The Commission must coordinate and ensure cooperation between notified bodies that assess high-risk AI systems through a sectoral group. Each Member State must ensure that its notified bodies participate in this group, either directly or through representatives. The Commission will also facilitate the sharing of knowledge and best practices among all Member States.

<p>laid down in Article 31 and of its <br /> applicable responsibilities.<br /> 2.<br /> The notifying authority shall provide the Commission, on request, with all relevant information relating to the <br /> notification or the maintenance of the competence of the notified body concerned.<br /> 3.<br /> The Commission shall ensure that all sensitive information obtained in the course of its investigations pursuant to this <br /> Article is treated confidentially in accordance with Article 78.<br /> 4.<br /> Where the Commission ascertains that a notified body does not meet or no longer meets the requirements for its <br /> notification, it shall inform the notifying Member State accordingly and request it to take the necessary corrective measures, <br /> including the suspension or withdrawal of the notification if necessary. Where the Member State fails to take the necessary <br /> corrective measures, the Commission may, by means of an implementing act, suspend, restrict or withdraw the designation. <br /> That implementing act shall be adopted in accordance with the examination procedure referred to in Article 98(2).<br /> Article 38<br /> Coordination of notified bodies<br /> 1.<br /> The Commission shall ensure that, with regard to high-risk AI systems, appropriate coordination and cooperation <br /> between notified bodies active in the conformity assessment procedures pursuant to this Regulation are put in place and <br /> properly operated in the form of a sectoral group of notified bodies.<br /> 2.<br /> Each notifying authority shall ensure that the bodies notified by it participate in the work of a group referred to in <br /> paragraph 1, directly or through designated representatives.<br /> 3.<br /> The Commission shall provide for the exchange of knowledge and best practices between notifying authorities.<br /> OJ L, 12.7.</p>
Show original text

Notifying authorities must share knowledge and best practices with each other through the Commission. Conformity assessment bodies from third countries can be authorized to act as notified bodies under this regulation if they meet the same requirements as EU bodies or demonstrate equivalent compliance standards. High-risk AI systems and general-purpose AI models that follow harmonized standards published in the EU Official Journal are considered compliant with the requirements in Section 2 of this Chapter and relevant obligations in Chapter V, Sections 2 and 3. The Commission must promptly issue standardization requests covering all requirements in Section 2 and applicable obligations in Chapter V, Sections 2 and 3 of this regulation.

<p>by it participate in the work of a group referred to in <br /> paragraph 1, directly or through designated representatives.<br /> 3.<br /> The Commission shall provide for the exchange of knowledge and best practices between notifying authorities.<br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 75/144</p> <p>Article 39<br /> Conformity assessment bodies of third countries<br /> Conformity assessment bodies established under the law of a third country with which the Union has concluded an <br /> agreement may be authorised to carry out the activities of notified bodies under this Regulation, provided that they meet <br /> the requirements laid down in Article 31 or they ensure an equivalent level of compliance.<br /> SECTION 5<br /> Standards, conformity assessment, certificates, registration<br /> Article 40<br /> Harmonised standards and standardisation deliverables<br /> 1.<br /> High-risk AI systems or general-purpose AI models which are in conformity with harmonised standards or parts <br /> thereof the references of which have been published in the Official Journal of the European Union in accordance with <br /> Regulation (EU) No 1025/2012 shall be presumed to be in conformity with the requirements set out in Section 2 of this <br /> Chapter or, as applicable, with the obligations set out in of Chapter V, Sections 2 and 3, of this Regulation, to the extent that <br /> those standards cover those requirements or obligations.<br /> 2.<br /> In accordance with Article 10 of Regulation (EU) No 1025/2012, the Commission shall issue, without undue delay, <br /> standardisation requests covering all requirements set out in Section 2 of this Chapter and, as applicable, standardisation <br /> requests covering obligations set out in Chapter V, Sections 2 and 3, of this Regulation.</p>
Show original text

The Commission must quickly issue standardisation requests that cover all requirements from Section 2 of this Chapter and, where applicable, obligations from Chapter V, Sections 2 and 3. These requests should also ask for reports and documentation on how to improve AI systems' resource efficiency, such as reducing energy use and other resource consumption during the system's lifetime, and on developing energy-efficient general-purpose AI models. When preparing these requests, the Commission must consult the Board, relevant stakeholders, and the advisory forum. The Commission must tell European standardisation organisations that standards must be clear and consistent with existing standards in various sectors covered by EU harmonisation laws (listed in Annex I). The standards should ensure that high-risk AI systems and general-purpose AI models sold or used in the EU meet the requirements of this Regulation. The Commission will ask European standardisation organisations to show they made their best efforts to meet these goals, following Article 24 of Regulation (EU) No 1025/2012. Those involved in standardisation should aim to promote AI investment and innovation, increase legal certainty, boost EU market competitiveness and growth, strengthen global standardisation cooperation, consider existing international AI standards that align with EU values and rights, and improve multi-stakeholder governance with balanced representation and effective participation from all relevant stakeholders, following Articles 5, 6, and 7 of Regulation (EU) No [reference continues].

<p>issue, without undue delay, <br /> standardisation requests covering all requirements set out in Section 2 of this Chapter and, as applicable, standardisation <br /> requests covering obligations set out in Chapter V, Sections 2 and 3, of this Regulation. The standardisation request shall <br /> also ask for deliverables on reporting and documentation processes to improve AI systems’ resource performance, such as <br /> reducing the high-risk AI system’s consumption of energy and of other resources during its lifecycle, and on the <br /> energy-efficient development of general-purpose AI models. When preparing a standardisation request, the Commission <br /> shall consult the Board and relevant stakeholders, including the advisory forum.<br /> When issuing a standardisation request to European standardisation organisations, the Commission shall specify that <br /> standards have to be clear, consistent, including with the standards developed in the various sectors for products covered by <br /> the existing Union harmonisation legislation listed in Annex I, and aiming to ensure that high-risk AI systems or <br /> general-purpose AI models placed on the market or put into service in the Union meet the relevant requirements or <br /> obligations laid down in this Regulation.<br /> The Commission shall request the European standardisation organisations to provide evidence of their best efforts to fulfil <br /> the objectives referred to in the first and the second subparagraph of this paragraph in accordance with Article 24 of <br /> Regulation (EU) No 1025/2012.<br /> 3.<br /> The participants in the standardisation process shall seek to promote investment and innovation in AI, including <br /> through increasing legal certainty, as well as the competitiveness and growth of the Union market, to contribute to <br /> strengthening global cooperation on standardisation and taking into account existing international standards in the field of <br /> AI that are consistent with Union values, fundamental rights and interests, and to enhance multi-stakeholder governance <br /> ensuring a balanced representation of interests and the effective participation of all relevant stakeholders in accordance with <br /> Articles 5, 6, and 7 of Regulation (EU) No </p>
Show original text

The Commission can create common technical specifications for requirements in Section 2 of this Chapter, or for obligations in Sections 2 and 3 of Chapter V, but only if certain conditions are met. First, the Commission must have asked European standardisation organisations to develop a harmonised standard for these requirements. Second, one of the following must be true: no organisation accepted the request, the standards were not delivered on time, the standards do not adequately protect fundamental rights, or the standards do not meet the original request. Additionally, no harmonised standards for these requirements can already be officially published in the European Union.

<p>fundamental rights and interests, and to enhance multi-stakeholder governance <br /> ensuring a balanced representation of interests and the effective participation of all relevant stakeholders in accordance with <br /> Articles 5, 6, and 7 of Regulation (EU) No 1025/2012.<br /> Article 41<br /> Common specifications<br /> 1.<br /> The Commission may adopt, implementing acts establishing common specifications for the requirements set out in <br /> Section 2 of this Chapter or, as applicable, for the obligations set out in Sections 2 and 3 of Chapter V where the following <br /> conditions have been fulfilled:<br /> (a) the Commission has requested, pursuant to Article 10(1) of Regulation (EU) No 1025/2012, one or more European <br /> standardisation organisations to draft a harmonised standard for the requirements set out in Section 2 of this Chapter, <br /> or, as applicable, for the obligations set out in Sections 2 and 3 of Chapter V, and:<br /> (i) the request has not been accepted by any of the European standardisation organisations; or<br /> EN<br /> OJ L, 12.7.2024<br /> 76/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>(ii) the harmonised standards addressing that request are not delivered within the deadline set in accordance with <br /> Article 10(1) of Regulation (EU) No 1025/2012; or<br /> (iii) the relevant harmonised standards insufficiently address fundamental rights concerns; or<br /> (iv) the harmonised standards do not comply with the request; and<br /> (b) no reference to harmonised standards covering the requirements referred to in Section 2 of this Chapter or, as <br /> applicable, the obligations referred to in Sections 2 and 3 of Chapter V has been published in the Official Journal of the <br /> European Union in accordance with Regulation (EU) No 1025/2012, and no such reference is</p>
Show original text

When common specifications for high-risk AI systems are created, the Commission must consult an advisory forum (Article 67) and follow the examination procedure in Article 98(2). Before drafting these specifications, the Commission must inform the committee under Regulation (EU) No 1025/2012 that the required conditions are met. AI systems that follow these common specifications are considered to meet the requirements in Section 2 of this Chapter and the obligations in Sections 2 and 3 of Chapter V. If a European standardisation organisation creates a harmonised standard and proposes it for publication in the Official Journal of the European Union, the Commission will review it under Regulation (EU) No 1025/2012. Once the harmonised standard is published, the Commission will cancel the implementing acts from the common specifications that cover the same requirements.

<p>applicable, the obligations referred to in Sections 2 and 3 of Chapter V has been published in the Official Journal of the <br /> European Union in accordance with Regulation (EU) No 1025/2012, and no such reference is expected to be published <br /> within a reasonable period.<br /> When drafting the common specifications, the Commission shall consult the advisory forum referred to in Article 67.<br /> The implementing acts referred to in the first subparagraph of this paragraph shall be adopted in accordance with the <br /> examination procedure referred to in Article 98(2).<br /> 2.<br /> Before preparing a draft implementing act, the Commission shall inform the committee referred to in Article 22 of <br /> Regulation (EU) No 1025/2012 that it considers the conditions laid down in paragraph 1 of this Article to be fulfilled.<br /> 3.<br /> High-risk AI systems or general-purpose AI models which are in conformity with the common specifications referred <br /> to in paragraph 1, or parts of those specifications, shall be presumed to be in conformity with the requirements set out in <br /> Section 2 of this Chapter or, as applicable, to comply with the obligations referred to in Sections 2 and 3 of Chapter V, to <br /> the extent those common specifications cover those requirements or those obligations.<br /> 4.<br /> Where a harmonised standard is adopted by a European standardisation organisation and proposed to the <br /> Commission for the publication of its reference in the Official Journal of the European Union, the Commission shall assess the <br /> harmonised standard in accordance with Regulation (EU) No 1025/2012. When reference to a harmonised standard is <br /> published in the Official Journal of the European Union, the Commission shall repeal the implementing acts referred to in <br /> paragraph 1, or parts thereof which cover the same requirements set out in Section 2 of this Chapter or, as applicable, the <br /> same obligations set out in Sections 2 and 3 of Chapter V.<br /> 5.</p>
Show original text

Providers of high-risk AI systems and general-purpose AI models can either follow common specifications or use alternative technical solutions that meet the same requirements. If they choose alternatives, they must explain why their solutions are equally effective.

If a Member State believes a common specification doesn't fully meet the requirements, it should notify the Commission with details. The Commission will review this and may update the specification if needed.

High-risk AI systems are considered compliant with certain requirements if they have been trained and tested using data from the specific location, behavior, context, or function where they will be used.

High-risk AI systems that have received cybersecurity certification or a conformity statement under EU Regulation 2019/881 (and are listed in the Official Journal of the European Union) are considered to meet cybersecurity requirements, as long as the certification covers those specific requirements.

<p>acts referred to in <br /> paragraph 1, or parts thereof which cover the same requirements set out in Section 2 of this Chapter or, as applicable, the <br /> same obligations set out in Sections 2 and 3 of Chapter V.<br /> 5.<br /> Where providers of high-risk AI systems or general-purpose AI models do not comply with the common <br /> specifications referred to in paragraph 1, they shall duly justify that they have adopted technical solutions that meet the <br /> requirements referred to in Section 2 of this Chapter or, as applicable, comply with the obligations set out in Sections 2 and <br /> 3 of Chapter V to a level at least equivalent thereto.<br /> 6.<br /> Where a Member State considers that a common specification does not entirely meet the requirements set out in <br /> Section 2 or, as applicable, comply with obligations set out in Sections 2 and 3 of Chapter V, it shall inform the <br /> Commission thereof with a detailed explanation. The Commission shall assess that information and, if appropriate, amend <br /> the implementing act establishing the common specification concerned.<br /> Article 42<br /> Presumption of conformity with certain requirements<br /> 1.<br /> High-risk AI systems that have been trained and tested on data reflecting the specific geographical, behavioural, <br /> contextual or functional setting within which they are intended to be used shall be presumed to comply with the relevant <br /> requirements laid down in Article 10(4).<br /> 2.<br /> High-risk AI systems that have been certified or for which a statement of conformity has been issued under <br /> a cybersecurity scheme pursuant to Regulation (EU) 2019/881 and the references of which have been published in the <br /> Official Journal of the European Union shall be presumed to comply with the cybersecurity requirements set out in Article 15 <br /> of this Regulation in so far as the cybersecurity certificate or statement of conformity or parts thereof cover those <br /> requirements.<br /> OJ L, 12.7.</p>
Show original text

High-risk AI systems listed in Annex III must be checked to confirm they meet the requirements in Section 2. Providers have two options for this check: (1) Internal control as described in Annex VI, or (2) Quality management system assessment and technical documentation review by an approved testing body, as described in Annex VII. Providers must use the Annex VII procedure if: (a) no approved standards exist and no common specifications are available, (b) they did not use or only partially used the approved standard, (c) common specifications exist but were not applied, or (d) an approved standard was published with restrictions and only that restricted part applies. When using the Annex VII procedure, providers can select any approved testing body. If a cybersecurity certificate or conformity statement covers the cybersecurity requirements in Article 15, the European Union will consider those requirements met.

<p>the European Union shall be presumed to comply with the cybersecurity requirements set out in Article 15 <br /> of this Regulation in so far as the cybersecurity certificate or statement of conformity or parts thereof cover those <br /> requirements.<br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 77/144</p> <p>Article 43<br /> Conformity assessment<br /> 1.<br /> For high-risk AI systems listed in point 1 of Annex III, where, in demonstrating the compliance of a high-risk AI <br /> system with the requirements set out in Section 2, the provider has applied harmonised standards referred to in Article 40, <br /> or, where applicable, common specifications referred to in Article 41, the provider shall opt for one of the following <br /> conformity assessment procedures based on:<br /> (a) the internal control referred to in Annex VI; or<br /> (b) the assessment of the quality management system and the assessment of the technical documentation, with the <br /> involvement of a notified body, referred to in Annex VII.<br /> In demonstrating the compliance of a high-risk AI system with the requirements set out in Section 2, the provider shall <br /> follow the conformity assessment procedure set out in Annex VII where:<br /> (a) harmonised standards referred to in Article 40 do not exist, and common specifications referred to in Article 41 are not <br /> available;<br /> (b) the provider has not applied, or has applied only part of, the harmonised standard;<br /> (c) the common specifications referred to in point (a) exist, but the provider has not applied them;<br /> (d) one or more of the harmonised standards referred to in point (a) has been published with a restriction, and only on the <br /> part of the standard that was restricted.<br /> For the purposes of the conformity assessment procedure referred to in Annex VII, the provider may choose any of the <br /> notified bodies.</p>
Show original text

Point (a) has been published with restrictions applied only to the relevant part of the standard.

For high-risk AI systems undergoing conformity assessment (as detailed in Annex VII), providers can select any approved testing body. However, if the AI system will be used by law enforcement, immigration, asylum authorities, or EU institutions and agencies, the market surveillance authority (as specified in Article 74) must act as the testing body instead.

For high-risk AI systems listed in points 2-8 of Annex III, providers must use an internal control assessment procedure (Annex VI). This procedure does not require involvement of an external testing body.

For high-risk AI systems covered by existing EU laws (listed in Section A of Annex I), providers must follow the conformity assessment procedures required by those laws. The requirements in Section 2 of this Chapter must be included in this assessment, along with specific sections (4.3, 4.4, 4.5, and part of 4.6) from Annex VII.

Approved testing bodies that were certified under these existing EU laws can verify that high-risk AI systems meet the Section 2 requirements, provided these bodies were already assessed for compliance with the relevant requirements (Article 31) during their original certification process.

<p>point (a) has been published with a restriction, and only on the <br /> part of the standard that was restricted.<br /> For the purposes of the conformity assessment procedure referred to in Annex VII, the provider may choose any of the <br /> notified bodies. However, where the high-risk AI system is intended to be put into service by law enforcement, immigration <br /> or asylum authorities or by Union institutions, bodies, offices or agencies, the market surveillance authority referred to in <br /> Article 74(8) or (9), as applicable, shall act as a notified body.<br /> 2.<br /> For high-risk AI systems referred to in points 2 to 8 of Annex III, providers shall follow the conformity assessment <br /> procedure based on internal control as referred to in Annex VI, which does not provide for the involvement of a notified <br /> body.<br /> 3.<br /> For high-risk AI systems covered by the Union harmonisation legislation listed in Section A of Annex I, the provider <br /> shall follow the relevant conformity assessment procedure as required under those legal acts. The requirements set out in <br /> Section 2 of this Chapter shall apply to those high-risk AI systems and shall be part of that assessment. Points 4.3., 4.4., 4.5. <br /> and the fifth paragraph of point 4.6 of Annex VII shall also apply.<br /> For the purposes of that assessment, notified bodies which have been notified under those legal acts shall be entitled to <br /> control the conformity of the high-risk AI systems with the requirements set out in Section 2, provided that the compliance <br /> of those notified bodies with requirements laid down in Article 31(4), (5), (10) and (11) has been assessed in the context of <br /> the notification procedure under those legal acts.</p>
Show original text

Notified bodies must meet the requirements in Article 31(4), (5), (10), and (11), which are checked during the notification process. Manufacturers can skip third-party testing if they use all required harmonized standards, but only if they also apply harmonized standards or common specifications for all requirements in Section 2. High-risk AI systems must undergo a new conformity assessment if they are substantially modified, whether they will be sold or continue being used. However, pre-planned changes to learning AI systems that were documented during the initial assessment do not count as substantial modifications. The Commission has the power to update Annexes VI and VII through delegated acts under Article 97 to reflect technical progress.

<p>Section 2, provided that the compliance <br /> of those notified bodies with requirements laid down in Article 31(4), (5), (10) and (11) has been assessed in the context of <br /> the notification procedure under those legal acts.<br /> Where a legal act listed in Section A of Annex I enables the product manufacturer to opt out from a third-party conformity <br /> assessment, provided that that manufacturer has applied all harmonised standards covering all the relevant requirements, <br /> that manufacturer may use that option only if it has also applied harmonised standards or, where applicable, common <br /> specifications referred to in Article 41, covering all requirements set out in Section 2 of this Chapter.<br /> 4.<br /> High-risk AI systems that have already been subject to a conformity assessment procedure shall undergo a new <br /> conformity assessment procedure in the event of a substantial modification, regardless of whether the modified system is <br /> intended to be further distributed or continues to be used by the current deployer.<br /> For high-risk AI systems that continue to learn after being placed on the market or put into service, changes to the high-risk <br /> AI system and its performance that have been pre-determined by the provider at the moment of the initial conformity <br /> assessment and are part of the information contained in the technical documentation referred to in point 2(f) of Annex IV, <br /> shall not constitute a substantial modification.<br /> 5.<br /> The Commission is empowered to adopt delegated acts in accordance with Article 97 in order to amend Annexes VI <br /> and VII by updating them in light of technical progress.<br /> EN<br /> OJ L, 12.7.2024<br /> 78/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>6.</p>
Show original text

The Commission has the power to update the rules for high-risk AI systems (listed in Annex III, items 2-8) by requiring them to follow the conformity assessment procedure in Annex VII or parts of it. When making these updates, the Commission must consider whether the current assessment method in Annex VI effectively prevents or reduces risks to health, safety, and fundamental rights. The Commission must also check that there are enough qualified bodies available to conduct these assessments.

Certificates issued by approved bodies must be written in a language that local authorities in the member state can easily understand. Certificates are valid for up to 5 years for AI systems in Annex I and up to 4 years for AI systems in Annex III. Providers can request to extend certificates for additional periods of the same length, provided the AI system is reassessed using the required procedures. Any additions to a certificate remain valid as long as the main certificate is still valid.

<p>by updating them in light of technical progress.<br /> EN<br /> OJ L, 12.7.2024<br /> 78/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>6.<br /> The Commission is empowered to adopt delegated acts in accordance with Article 97 in order to amend paragraphs 1 <br /> and 2 of this Article in order to subject high-risk AI systems referred to in points 2 to 8 of Annex III to the conformity <br /> assessment procedure referred to in Annex VII or parts thereof. The Commission shall adopt such delegated acts taking into <br /> account the effectiveness of the conformity assessment procedure based on internal control referred to in Annex VI in <br /> preventing or minimising the risks to health and safety and protection of fundamental rights posed by such systems, as well <br /> as the availability of adequate capacities and resources among notified bodies.<br /> Article 44<br /> Certificates<br /> 1.<br /> Certificates issued by notified bodies in accordance with Annex VII shall be drawn-up in a language which can be <br /> easily understood by the relevant authorities in the Member State in which the notified body is established.<br /> 2.<br /> Certificates shall be valid for the period they indicate, which shall not exceed five years for AI systems covered by <br /> Annex I, and four years for AI systems covered by Annex III. At the request of the provider, the validity of a certificate may <br /> be extended for further periods, each not exceeding five years for AI systems covered by Annex I, and four years for AI <br /> systems covered by Annex III, based on a re-assessment in accordance with the applicable conformity assessment <br /> procedures. Any supplement to a certificate shall remain valid, provided that the certificate which it supplements is valid.<br /> 3.</p>
Show original text

AI systems listed in Annex III must be reassessed every four years to maintain their certificates. Any additions to a certificate remain valid as long as the main certificate is valid.

If a notified body discovers that an AI system no longer meets the required standards, it may suspend, withdraw, or restrict the certificate unless the system provider fixes the problems within a set timeframe. The notified body must explain its decision. Companies can appeal decisions made by notified bodies about certificates.

Notified bodies must inform their supervising authority about: certificates and approvals they issue, any certificates or approvals they refuse or withdraw, changes to their notification scope, requests from market surveillance authorities, and details of their assessment activities when asked.

Notified bodies must also inform each other about quality management approvals they have refused, suspended, or withdrawn, and about technical documentation certificates they have refused, withdrawn, suspended, or restricted. They can share information about approvals and certificates they have issued if asked.

<p>and four years for AI <br /> systems covered by Annex III, based on a re-assessment in accordance with the applicable conformity assessment <br /> procedures. Any supplement to a certificate shall remain valid, provided that the certificate which it supplements is valid.<br /> 3.<br /> Where a notified body finds that an AI system no longer meets the requirements set out in Section 2, it shall, taking <br /> account of the principle of proportionality, suspend or withdraw the certificate issued or impose restrictions on it, unless <br /> compliance with those requirements is ensured by appropriate corrective action taken by the provider of the system within <br /> an appropriate deadline set by the notified body. The notified body shall give reasons for its decision.<br /> An appeal procedure against decisions of the notified bodies, including on conformity certificates issued, shall be available.<br /> Article 45<br /> Information obligations of notified bodies<br /> 1.<br /> Notified bodies shall inform the notifying authority of the following:<br /> (a) any Union technical documentation assessment certificates, any supplements to those certificates, and any quality <br /> management system approvals issued in accordance with the requirements of Annex VII;<br /> (b) any refusal, restriction, suspension or withdrawal of a Union technical documentation assessment certificate or a quality <br /> management system approval issued in accordance with the requirements of Annex VII;<br /> (c) any circumstances affecting the scope of or conditions for notification;<br /> (d) any request for information which they have received from market surveillance authorities regarding conformity <br /> assessment activities;<br /> (e) on request, conformity assessment activities performed within the scope of their notification and any other activity <br /> performed, including cross-border activities and subcontracting.<br /> 2.<br /> Each notified body shall inform the other notified bodies of:<br /> (a) quality management system approvals which it has refused, suspended or withdrawn, and, upon request, of quality <br /> system approvals which it has issued;<br /> (b) Union technical documentation assessment certificates or any supplements thereto which it has refused, withdrawn, <br /> suspended or otherwise restricted, and, upon request, of the certificates and/or supplements thereto which it has issued.</p>
Show original text

Notified bodies must share information about their decisions on Union technical documentation assessment certificates. This includes certificates they have refused, withdrawn, suspended, or restricted, as well as those they have approved.

Notified bodies working on similar AI system assessments must inform each other about important findings, including negative results and positive results when requested.

Notified bodies must keep all information they receive confidential according to Article 78.

In exceptional circumstances, market surveillance authorities can allow high-risk AI systems to be sold or used in their country without completing the normal approval process. This is only permitted for serious reasons such as public security, protecting people's health and safety, environmental protection, or protecting critical infrastructure. This temporary permission lasts only while the required approval procedures are being completed, which must happen as quickly as possible.

In urgent situations involving public security threats or immediate danger to people's lives, law enforcement or civil protection authorities can use a specific high-risk AI system without prior authorization. However, they must request authorization during or immediately after using the system.

<p>approvals which it has issued;<br /> (b) Union technical documentation assessment certificates or any supplements thereto which it has refused, withdrawn, <br /> suspended or otherwise restricted, and, upon request, of the certificates and/or supplements thereto which it has issued.<br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 79/144</p> <p>3.<br /> Each notified body shall provide the other notified bodies carrying out similar conformity assessment activities <br /> covering the same types of AI systems with relevant information on issues relating to negative and, on request, positive <br /> conformity assessment results.<br /> 4.<br /> Notified bodies shall safeguard the confidentiality of the information that they obtain, in accordance with Article 78.<br /> Article 46<br /> Derogation from conformity assessment procedure<br /> 1.<br /> By way of derogation from Article 43 and upon a duly justified request, any market surveillance authority may <br /> authorise the placing on the market or the putting into service of specific high-risk AI systems within the territory of the <br /> Member State concerned, for exceptional reasons of public security or the protection of life and health of persons, <br /> environmental protection or the protection of key industrial and infrastructural assets. That authorisation shall be for <br /> a limited period while the necessary conformity assessment procedures are being carried out, taking into account the <br /> exceptional reasons justifying the derogation. The completion of those procedures shall be undertaken without undue delay.<br /> 2.<br /> In a duly justified situation of urgency for exceptional reasons of public security or in the case of specific, substantial <br /> and imminent threat to the life or physical safety of natural persons, law-enforcement authorities or civil protection <br /> authorities may put a specific high-risk AI system into service without the authorisation referred to in paragraph 1, <br /> provided that such authorisation is requested during or after the use without undue delay.</p>
Show original text

Law enforcement and civil protection authorities can use a high-risk AI system before getting official approval, as long as they request approval during or immediately after using it. If approval is denied, they must stop using the system right away and delete all results and outputs.

The market surveillance authority can only approve a high-risk AI system if it meets the requirements in Section 2. The authority must inform the Commission and other Member States about any approvals given. However, sensitive operational data from law enforcement does not need to be shared.

If no Member State or the Commission objects within 15 days of receiving approval information, the authorization is automatically considered valid.

If a Member State objects within 15 days, or if the Commission believes the authorization breaks EU law or the compliance conclusion is wrong, the Commission must immediately discuss this with the relevant Member State. The companies involved must be consulted and allowed to explain their position. After considering all views, the Commission will decide whether the authorization should be allowed.

<p>persons, law-enforcement authorities or civil protection <br /> authorities may put a specific high-risk AI system into service without the authorisation referred to in paragraph 1, <br /> provided that such authorisation is requested during or after the use without undue delay. If the authorisation referred to in <br /> paragraph 1 is refused, the use of the high-risk AI system shall be stopped with immediate effect and all the results and <br /> outputs of such use shall be immediately discarded.<br /> 3.<br /> The authorisation referred to in paragraph 1 shall be issued only if the market surveillance authority concludes that <br /> the high-risk AI system complies with the requirements of Section 2. The market surveillance authority shall inform the <br /> Commission and the other Member States of any authorisation issued pursuant to paragraphs 1 and 2. This obligation shall <br /> not cover sensitive operational data in relation to the activities of law-enforcement authorities.<br /> 4.<br /> Where, within 15 calendar days of receipt of the information referred to in paragraph 3, no objection has been raised <br /> by either a Member State or the Commission in respect of an authorisation issued by a market surveillance authority of <br /> a Member State in accordance with paragraph 1, that authorisation shall be deemed justified.<br /> 5.<br /> Where, within 15 calendar days of receipt of the notification referred to in paragraph 3, objections are raised by <br /> a Member State against an authorisation issued by a market surveillance authority of another Member State, or where the <br /> Commission considers the authorisation to be contrary to Union law, or the conclusion of the Member States regarding the <br /> compliance of the system as referred to in paragraph 3 to be unfounded, the Commission shall, without delay, enter into <br /> consultations with the relevant Member State. The operators concerned shall be consulted and have the possibility to <br /> present their views. Having regard thereto, the Commission shall decide whether the authorisation is justified.</p>
Show original text

The Commission must quickly consult with the relevant Member State. The operators involved must be consulted and given the chance to share their views. Based on this input, the Commission will decide if the authorization is justified. The Commission will inform the Member State and the operators of its decision.

If the Commission believes the authorization is not justified, the Member State's market surveillance authority must withdraw it.

For high-risk AI systems related to products covered by EU harmonization laws listed in Section A of Annex I, only the exceptions from conformity assessment in those EU laws apply.

Article 47: EU Declaration of Conformity

The provider must create a written, machine-readable EU declaration of conformity for each high-risk AI system. This declaration must be signed (physically or electronically) and kept available to national authorities for 10 years after the system is sold or put into use. The declaration must identify which high-risk AI system it covers. National authorities can request a copy at any time.

The declaration must confirm that the high-risk AI system meets the requirements in Section 2. It must include the information listed in Annex V and be translated into a language that national authorities in the Member States where the system is sold can easily understand.

<p>the Commission shall, without delay, enter into <br /> consultations with the relevant Member State. The operators concerned shall be consulted and have the possibility to <br /> present their views. Having regard thereto, the Commission shall decide whether the authorisation is justified. The <br /> Commission shall address its decision to the Member State concerned and to the relevant operators.<br /> 6.<br /> Where the Commission considers the authorisation unjustified, it shall be withdrawn by the market surveillance <br /> authority of the Member State concerned.<br /> 7.<br /> For high-risk AI systems related to products covered by Union harmonisation legislation listed in Section A of <br /> Annex I, only the derogations from the conformity assessment established in that Union harmonisation legislation shall <br /> apply.<br /> Article 47<br /> EU declaration of conformity<br /> 1.<br /> The provider shall draw up a written machine readable, physical or electronically signed EU declaration of conformity <br /> for each high-risk AI system, and keep it at the disposal of the national competent authorities for 10 years after the <br /> high-risk AI system has been placed on the market or put into service. The EU declaration of conformity shall identify the <br /> high-risk AI system for which it has been drawn up. A copy of the EU declaration of conformity shall be submitted to the <br /> relevant national competent authorities upon request.<br /> EN<br /> OJ L, 12.7.2024<br /> 80/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>2.<br /> The EU declaration of conformity shall state that the high-risk AI system concerned meets the requirements set out in <br /> Section 2. The EU declaration of conformity shall contain the information set out in Annex V, and shall be translated into <br /> a language that can be easily understood by the national competent authorities of the Member States in which the high-risk <br /> AI system is placed on the market or made available.<br /> 3.</p>
Show original text

High-risk AI systems must include information listed in Annex V and be translated into a language that the national authorities in Member States can easily understand.

If a high-risk AI system is covered by other EU laws that also require an EU declaration of conformity, only one combined declaration should be created. This single declaration must list all applicable EU laws.

The company providing the AI system is responsible for ensuring it meets all requirements. They must keep the declaration current and updated.

The European Commission has the power to update Annex V as technology advances, changing what information must be included in the EU declaration of conformity.

CE marking must follow the general rules in EU Regulation 765/2008. For AI systems provided online, a digital CE marking can be used if it is easily accessible through the system's interface, a machine-readable code, or other electronic means. For physical AI systems, the CE marking must be clearly visible, readable, and permanent. If this is not practical due to the system's nature, the marking can be placed on the packaging or accompanying documents instead.

<p>contain the information set out in Annex V, and shall be translated into <br /> a language that can be easily understood by the national competent authorities of the Member States in which the high-risk <br /> AI system is placed on the market or made available.<br /> 3.<br /> Where high-risk AI systems are subject to other Union harmonisation legislation which also requires an EU <br /> declaration of conformity, a single EU declaration of conformity shall be drawn up in respect of all Union law applicable to <br /> the high-risk AI system. The declaration shall contain all the information required to identify the Union harmonisation <br /> legislation to which the declaration relates.<br /> 4.<br /> By drawing up the EU declaration of conformity, the provider shall assume responsibility for compliance with the <br /> requirements set out in Section 2. The provider shall keep the EU declaration of conformity up-to-date as appropriate.<br /> 5.<br /> The Commission is empowered to adopt delegated acts in accordance with Article 97 in order to amend Annex V by <br /> updating the content of the EU declaration of conformity set out in that Annex, in order to introduce elements that become <br /> necessary in light of technical progress.<br /> Article 48<br /> CE marking<br /> 1.<br /> The CE marking shall be subject to the general principles set out in Article 30 of Regulation (EC) No 765/2008.<br /> 2.<br /> For high-risk AI systems provided digitally, a digital CE marking shall be used, only if it can easily be accessed via the <br /> interface from which that system is accessed or via an easily accessible machine-readable code or other electronic means.<br /> 3.<br /> The CE marking shall be affixed visibly, legibly and indelibly for high-risk AI systems. Where that is not possible or <br /> not warranted on account of the nature of the high-risk AI system, it shall be affixed to the packaging or to the <br /> accompanying documentation, as appropriate.<br /> 4.</p>
Show original text

High-risk AI systems must display a CE marking, which shows they meet safety requirements. This marking should be placed directly on the product when possible. If not practical, it can be placed on the packaging or documentation instead.

When a notified body (an approved testing organization) is involved in checking the system, their identification number must appear next to the CE marking. This number can be added by the notified body itself, the company selling the product, or their authorized representative. The identification number should also appear in any advertising that mentions the product meets CE marking requirements.

If an AI system must follow other EU laws that also require CE marking, the marking must indicate that the system meets all applicable laws.

Before selling or using a high-risk AI system listed in Annex III (with some exceptions), the company providing it or their authorized representative must register the system and themselves in the EU database. Similarly, if a company decides their AI system is not high-risk according to Article 6(3), they must still register themselves and the system in the same EU database.

<p>ibly for high-risk AI systems. Where that is not possible or <br /> not warranted on account of the nature of the high-risk AI system, it shall be affixed to the packaging or to the <br /> accompanying documentation, as appropriate.<br /> 4.<br /> Where applicable, the CE marking shall be followed by the identification number of the notified body responsible for <br /> the conformity assessment procedures set out in Article 43. The identification number of the notified body shall be affixed <br /> by the body itself or, under its instructions, by the provider or by the provider’s authorised representative. The identification <br /> number shall also be indicated in any promotional material which mentions that the high-risk AI system fulfils the <br /> requirements for CE marking.<br /> 5.<br /> Where high-risk AI systems are subject to other Union law which also provides for the affixing of the CE marking, the <br /> CE marking shall indicate that the high-risk AI system also fulfil the requirements of that other law.<br /> Article 49<br /> Registration<br /> 1.<br /> Before placing on the market or putting into service a high-risk AI system listed in Annex III, with the exception of <br /> high-risk AI systems referred to in point 2 of Annex III, the provider or, where applicable, the authorised representative <br /> shall register themselves and their system in the EU database referred to in Article 71.<br /> 2.<br /> Before placing on the market or putting into service an AI system for which the provider has concluded that it is not <br /> high-risk according to Article 6(3), that provider or, where applicable, the authorised representative shall register <br /> themselves and that system in the EU database referred to in Article 71.<br /> 3.</p>
Show original text

If a provider determines that their AI system is not high-risk under Article 6(3), they or their authorized representative must register themselves and the system in the EU database mentioned in Article 71.

Before using a high-risk AI system listed in Annex III (except those in point 2 of Annex III), public authorities, EU institutions, and their representatives must register themselves, select the system, and record its use in the EU database referred to in Article 71.

For high-risk AI systems listed in points 1, 6, and 7 of Annex III that are used in law enforcement, migration, asylum, and border control, the registration must be placed in a secure, non-public section of the EU database. This registration must include only specific information from: Section A (points 1-10, excluding points 6, 8, and 9) of Annex VIII; Section B (points 1-5, 8, and 9) of Annex VIII; Section C (points 1-3) of Annex VIII; and points 1, 2, 3, and 5 of Annex IX.

Only the European Commission and national authorities listed in Article 74(8) can access these restricted sections of the EU database.

<p>provider has concluded that it is not <br /> high-risk according to Article 6(3), that provider or, where applicable, the authorised representative shall register <br /> themselves and that system in the EU database referred to in Article 71.<br /> 3.<br /> Before putting into service or using a high-risk AI system listed in Annex III, with the exception of high-risk AI <br /> systems listed in point 2 of Annex III, deployers that are public authorities, Union institutions, bodies, offices or agencies or <br /> persons acting on their behalf shall register themselves, select the system and register its use in the EU database referred to <br /> in Article 71.<br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 81/144</p> <p>4.<br /> For high-risk AI systems referred to in points 1, 6 and 7 of Annex III, in the areas of law enforcement, migration, <br /> asylum and border control management, the registration referred to in paragraphs 1, 2 and 3 of this Article shall be in <br /> a secure non-public section of the EU database referred to in Article 71 and shall include only the following information, as <br /> applicable, referred to in:<br /> (a) Section A, points 1 to 10, of Annex VIII, with the exception of points 6, 8 and 9;<br /> (b) Section B, points 1 to 5, and points 8 and 9 of Annex VIII;<br /> (c) Section C, points 1 to 3, of Annex VIII;<br /> (d) points 1, 2, 3 and 5, of Annex IX.<br /> Only the Commission and national authorities referred to in Article 74(8) shall have access to the respective restricted <br /> sections of the EU database listed in the first subparagraph of this paragraph.<br /> 5.</p>
Show original text

Only the Commission and specific national authorities listed in Article 74(8) can access the restricted sections of the EU database. High-risk AI systems mentioned in Annex III must be registered with national authorities. Providers of AI systems that directly interact with people must inform those people that they are talking to an AI, unless it is obvious from context. This rule does not apply to AI systems used by law enforcement to detect, prevent, investigate, or prosecute crimes, unless the public can use the system to report crimes. Providers of AI systems that create artificial audio, images, videos, or text must mark these outputs in a way that computers can read and identify them as artificially made or changed. Providers must make sure their technical solutions work well, are compatible with other systems, and are reliable, considering the type of content, implementation costs, and current industry standards.

<p>, 3 and 5, of Annex IX.<br /> Only the Commission and national authorities referred to in Article 74(8) shall have access to the respective restricted <br /> sections of the EU database listed in the first subparagraph of this paragraph.<br /> 5.<br /> High-risk AI systems referred to in point 2 of Annex III shall be registered at national level.<br /> CHAPTER IV<br /> TRANSPARENCY OBLIGATIONS FOR PROVIDERS AND DEPLOYERS OF CERTAIN AI SYSTEMS<br /> Article 50<br /> Transparency obligations for providers and deployers of certain AI systems<br /> 1.<br /> Providers shall ensure that AI systems intended to interact directly with natural persons are designed and developed in <br /> such a way that the natural persons concerned are informed that they are interacting with an AI system, unless this is <br /> obvious from the point of view of a natural person who is reasonably well-informed, observant and circumspect, taking <br /> into account the circumstances and the context of use. This obligation shall not apply to AI systems authorised by law to <br /> detect, prevent, investigate or prosecute criminal offences, subject to appropriate safeguards for the rights and freedoms of <br /> third parties, unless those systems are available for the public to report a criminal offence.<br /> 2.<br /> Providers of AI systems, including general-purpose AI systems, generating synthetic audio, image, video or text <br /> content, shall ensure that the outputs of the AI system are marked in a machine-readable format and detectable as <br /> artificially generated or manipulated. Providers shall ensure their technical solutions are effective, interoperable, robust and <br /> reliable as far as this is technically feasible, taking into account the specificities and limitations of various types of content, <br /> the costs of implementation and the generally acknowledged state of the art, as may be reflected in relevant technical <br /> standards.</p>
Show original text

AI systems must be reliable based on what is technically possible, considering the type of content, implementation costs, and current industry standards. This requirement does not apply to AI systems that assist with basic editing, do not significantly change the user's data, or are legally authorized to detect or prosecute crimes.

Companies using emotion recognition or biometric categorization systems must inform people that these systems are being used and must follow data protection laws (EU Regulations 2016/679, 2018/1725, and Directive 2016/680). This requirement does not apply to systems legally authorized to detect, prevent, or investigate crimes, as long as they protect people's rights and follow EU law.

Companies using AI systems that create or alter images, audio, or videos (deepfakes) must clearly state that the content is artificially generated or manipulated. This requirement does not apply when the use is legally authorized for detecting or prosecuting crimes. For artistic, creative, satirical, or fictional works, companies only need to disclose that the content is artificially generated or manipulated in a way that does not interfere with viewing or enjoying the work.

<p>reliable as far as this is technically feasible, taking into account the specificities and limitations of various types of content, <br /> the costs of implementation and the generally acknowledged state of the art, as may be reflected in relevant technical <br /> standards. This obligation shall not apply to the extent the AI systems perform an assistive function for standard editing or <br /> do not substantially alter the input data provided by the deployer or the semantics thereof, or where authorised by law to <br /> detect, prevent, investigate or prosecute criminal offences.<br /> 3.<br /> Deployers of an emotion recognition system or a biometric categorisation system shall inform the natural persons <br /> exposed thereto of the operation of the system, and shall process the personal data in accordance with Regulations (EU) <br /> 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680, as applicable. This obligation shall not apply to AI systems <br /> used for biometric categorisation and emotion recognition, which are permitted by law to detect, prevent or investigate <br /> criminal offences, subject to appropriate safeguards for the rights and freedoms of third parties, and in accordance with <br /> Union law.<br /> 4.<br /> Deployers of an AI system that generates or manipulates image, audio or video content constituting a deep fake, shall <br /> disclose that the content has been artificially generated or manipulated. This obligation shall not apply where the use is <br /> authorised by law to detect, prevent, investigate or prosecute criminal offence. Where the content forms part of an evidently <br /> artistic, creative, satirical, fictional or analogous work or programme, the transparency obligations set out in this paragraph <br /> are limited to disclosure of the existence of such generated or manipulated content in an appropriate manner that does not <br /> hamper the display or enjoyment of the work.</p>
Show original text

For creative works like satire or fiction, companies only need to disclose that AI-generated or manipulated content exists in a way that doesn't interfere with enjoying the work. When AI creates or alters text meant to inform the public about important issues, the company must tell people the text was artificially created or changed. However, this requirement doesn't apply if the use is legally authorized for detecting or investigating crimes, or if a human has reviewed and approved the content before publication. Companies must provide this information to people clearly and noticeably when they first interact with or see the content, following accessibility standards. These rules don't replace other transparency requirements in EU or national law. The AI Office will help develop industry guidelines at the EU level to make it easier to detect and label artificially generated or manipulated content. The Commission can officially approve these guidelines through a formal process.

<p>satirical, fictional or analogous work or programme, the transparency obligations set out in this paragraph <br /> are limited to disclosure of the existence of such generated or manipulated content in an appropriate manner that does not <br /> hamper the display or enjoyment of the work.<br /> Deployers of an AI system that generates or manipulates text which is published with the purpose of informing the public <br /> on matters of public interest shall disclose that the text has been artificially generated or manipulated. This obligation shall <br /> not apply where the use is authorised by law to detect, prevent, investigate or prosecute criminal offences or where the <br /> AI-generated content has undergone a process of human review or editorial control and where a natural or legal person <br /> holds editorial responsibility for the publication of the content.<br /> EN<br /> OJ L, 12.7.2024<br /> 82/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>5.<br /> The information referred to in paragraphs 1 to 4 shall be provided to the natural persons concerned in a clear and <br /> distinguishable manner at the latest at the time of the first interaction or exposure. The information shall conform to the <br /> applicable accessibility requirements.<br /> 6.<br /> Paragraphs 1 to 4 shall not affect the requirements and obligations set out in Chapter III, and shall be without <br /> prejudice to other transparency obligations laid down in Union or national law for deployers of AI systems.<br /> 7.<br /> The AI Office shall encourage and facilitate the drawing up of codes of practice at Union level to facilitate the effective <br /> implementation of the obligations regarding the detection and labelling of artificially generated or manipulated content. <br /> The Commission may adopt implementing acts to approve those codes of practice in accordance with the procedure laid <br /> down in Article 56 (6).</p>
Show original text

The Commission can approve codes of practice that help companies detect and label artificially created or altered content. If a code of practice is not good enough, the Commission can create common rules instead. This section covers general-purpose AI models—large AI systems designed for many different uses. An AI model is classified as high-risk if it has significant capabilities, measured using technical tools and benchmarks. It is automatically considered high-risk if it used more than 10 to the power of 25 floating point operations during training. The Commission can update these risk thresholds and measurement methods as technology improves to keep standards current.

<p>to facilitate the effective <br /> implementation of the obligations regarding the detection and labelling of artificially generated or manipulated content. <br /> The Commission may adopt implementing acts to approve those codes of practice in accordance with the procedure laid <br /> down in Article 56 (6). If it deems the code is not adequate, the Commission may adopt an implementing act specifying <br /> common rules for the implementation of those obligations in accordance with the examination procedure laid down in <br /> Article 98(2).<br /> CHAPTER V<br /> GENERAL-PURPOSE AI MODELS<br /> SECTION 1<br /> Classification rules<br /> Article 51<br /> Classification of general-purpose AI models as general-purpose AI models with systemic risk<br /> 1.<br /> A general-purpose AI model shall be classified as a general-purpose AI model with systemic risk if it meets any of the <br /> following conditions:<br /> (a) it has high impact capabilities evaluated on the basis of appropriate technical tools and methodologies, including <br /> indicators and benchmarks;<br /> (b) based on a decision of the Commission, ex officio or following a qualified alert from the scientific panel, it has <br /> capabilities or an impact equivalent to those set out in point (a) having regard to the criteria set out in Annex XIII.<br /> 2.<br /> A general-purpose AI model shall be presumed to have high impact capabilities pursuant to paragraph 1, point (a), <br /> when the cumulative amount of computation used for its training measured in floating point operations is greater than <br /> 1025.<br /> 3.<br /> The Commission shall adopt delegated acts in accordance with Article 97 to amend the thresholds listed in <br /> paragraphs 1 and 2 of this Article, as well as to supplement benchmarks and indicators in light of evolving technological <br /> developments, such as algorithmic improvements or increased hardware efficiency, when necessary, for these thresholds to <br /> reflect the state of the art.<br /> Article 52<br /> Procedure<br /> 1.</p>
Show original text

Article 51 requires benchmarks and indicators to be updated as technology improves to ensure they reflect current standards. Article 52 outlines the notification and classification process for high-risk AI models. When a general-purpose AI model meets the criteria in Article 51(1)(a), the provider must notify the European Commission within two weeks and provide evidence of compliance. If the Commission discovers a high-risk model that was not reported, it can designate it as such. Providers may submit arguments explaining why their model, despite meeting the criteria, should not be classified as high-risk due to its specific characteristics. If the Commission finds these arguments insufficient or unconvincing, it will reject them and classify the model as high-risk.

<p>as well as to supplement benchmarks and indicators in light of evolving technological <br /> developments, such as algorithmic improvements or increased hardware efficiency, when necessary, for these thresholds to <br /> reflect the state of the art.<br /> Article 52<br /> Procedure<br /> 1.<br /> Where a general-purpose AI model meets the condition referred to in Article 51(1), point (a), the relevant provider <br /> shall notify the Commission without delay and in any event within two weeks after that requirement is met or it becomes <br /> known that it will be met. That notification shall include the information necessary to demonstrate that the relevant <br /> requirement has been met. If the Commission becomes aware of a general-purpose AI model presenting systemic risks of <br /> which it has not been notified, it may decide to designate it as a model with systemic risk.<br /> 2.<br /> The provider of a general-purpose AI model that meets the condition referred to in Article 51(1), point (a), may <br /> present, with its notification, sufficiently substantiated arguments to demonstrate that, exceptionally, although it meets that <br /> requirement, the general-purpose AI model does not present, due to its specific characteristics, systemic risks and therefore <br /> should not be classified as a general-purpose AI model with systemic risk.<br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 83/144</p> <p>3.<br /> Where the Commission concludes that the arguments submitted pursuant to paragraph 2 are not sufficiently <br /> substantiated and the relevant provider was not able to demonstrate that the general-purpose AI model does not present, <br /> due to its specific characteristics, systemic risks, it shall reject those arguments, and the general-purpose AI model shall be <br /> considered to be a general-purpose AI model with systemic risk.<br /> 4.</p>
Show original text

If a general-purpose AI model does not create systemic risks due to its specific features, those arguments will be rejected and the model will still be classified as a general-purpose AI model with systemic risk. The Commission can identify general-purpose AI models as presenting systemic risks on its own or based on alerts from the scientific panel, using criteria listed in Annex XIII. The Commission can update these criteria through delegated acts. If a provider disagrees with this classification, they can request the Commission to reconsider whether their model still poses systemic risks. The request must include new, detailed, and objective reasons since the original decision. Providers can request reconsideration at least six months after the initial classification, and again at least six months after any decision to maintain the classification. The Commission must publish and regularly update a public list of general-purpose AI models classified as presenting systemic risks, while protecting intellectual property rights and confidential business information according to EU and national laws.

<p>that the general-purpose AI model does not present, <br /> due to its specific characteristics, systemic risks, it shall reject those arguments, and the general-purpose AI model shall be <br /> considered to be a general-purpose AI model with systemic risk.<br /> 4.<br /> The Commission may designate a general-purpose AI model as presenting systemic risks, ex officio or following <br /> a qualified alert from the scientific panel pursuant to Article 90(1), point (a), on the basis of criteria set out in Annex XIII.<br /> The Commission is empowered to adopt delegated acts in accordance with Article 97 in order to amend Annex XIII by <br /> specifying and updating the criteria set out in that Annex.<br /> 5.<br /> Upon a reasoned request of a provider whose model has been designated as a general-purpose AI model with systemic <br /> risk pursuant to paragraph 4, the Commission shall take the request into account and may decide to reassess whether the <br /> general-purpose AI model can still be considered to present systemic risks on the basis of the criteria set out in Annex XIII. <br /> Such a request shall contain objective, detailed and new reasons that have arisen since the designation decision. Providers <br /> may request reassessment at the earliest six months after the designation decision. Where the Commission, following its <br /> reassessment, decides to maintain the designation as a general-purpose AI model with systemic risk, providers may request <br /> reassessment at the earliest six months after that decision.<br /> 6.<br /> The Commission shall ensure that a list of general-purpose AI models with systemic risk is published and shall keep <br /> that list up to date, without prejudice to the need to observe and protect intellectual property rights and confidential <br /> business information or trade secrets in accordance with Union and national law.<br /> SECTION 2<br /> Obligations for providers of general-purpose AI models<br /> Article 53<br /> Obligations for providers of general-purpose AI models<br /> 1.</p>
Show original text

Providers of general-purpose AI models must fulfill four main obligations: First, they must create and maintain technical documentation about how the model was trained, tested, and evaluated, and provide this to the AI Office and national authorities upon request. Second, they must prepare and share information with companies that want to use their AI model, explaining what the model can and cannot do, while protecting intellectual property and trade secrets. Third, they must establish a policy to follow EU copyright laws and respect any copyright reservations. Fourth, they must create and publish a detailed public summary describing what data was used to train the model, following a template provided by the AI Office.

<p>and confidential <br /> business information or trade secrets in accordance with Union and national law.<br /> SECTION 2<br /> Obligations for providers of general-purpose AI models<br /> Article 53<br /> Obligations for providers of general-purpose AI models<br /> 1.<br /> Providers of general-purpose AI models shall:<br /> (a) draw up and keep up-to-date the technical documentation of the model, including its training and testing process and <br /> the results of its evaluation, which shall contain, at a minimum, the information set out in Annex XI for the purpose of <br /> providing it, upon request, to the AI Office and the national competent authorities;<br /> (b) draw up, keep up-to-date and make available information and documentation to providers of AI systems who intend to <br /> integrate the general-purpose AI model into their AI systems. Without prejudice to the need to observe and protect <br /> intellectual property rights and confidential business information or trade secrets in accordance with Union and <br /> national law, the information and documentation shall:<br /> (i) enable providers of AI systems to have a good understanding of the capabilities and limitations of the <br /> general-purpose AI model and to comply with their obligations pursuant to this Regulation; and<br /> (ii) contain, at a minimum, the elements set out in Annex XII;<br /> (c) put in place a policy to comply with Union law on copyright and related rights, and in particular to identify and comply <br /> with, including through state-of-the-art technologies, a reservation of rights expressed pursuant to Article 4(3) of <br /> Directive (EU) 2019/790;<br /> (d) draw up and make publicly available a sufficiently detailed summary about the content used for training of the <br /> general-purpose AI model, according to a template provided by the AI Office.<br /> EN<br /> OJ L, 12.7.2024<br /> 84/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>2.</p>
Show original text

Providers of AI models that are released under a free and open-source license (allowing access, usage, modification, and distribution) with publicly available parameters, architecture information, and usage details do not need to follow the requirements in paragraph 1, points (a) and (b). However, this exception does not apply to general-purpose AI models that pose systemic risks. Providers of general-purpose AI models must work with the Commission and national authorities as needed. They can use codes of practice (as defined in Article 56) to show they meet the requirements in paragraph 1 until an official European standard is published. Following European standards means providers are considered compliant. Those who do not follow an approved code of practice or standard must show other acceptable ways of meeting the requirements for the Commission to review. The Commission can create detailed rules for measuring and calculating compliance with Annex XI, particularly sections 2(d) and (e), to ensure documentation is comparable and verifiable. The Commission can also update Annexes XI and XII as technology changes.

<p>to a template provided by the AI Office.<br /> EN<br /> OJ L, 12.7.2024<br /> 84/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>2.<br /> The obligations set out in paragraph 1, points (a) and (b), shall not apply to providers of AI models that are released <br /> under a free and open-source licence that allows for the access, usage, modification, and distribution of the model, and <br /> whose parameters, including the weights, the information on the model architecture, and the information on model usage, <br /> are made publicly available. This exception shall not apply to general-purpose AI models with systemic risks.<br /> 3.<br /> Providers of general-purpose AI models shall cooperate as necessary with the Commission and the national <br /> competent authorities in the exercise of their competences and powers pursuant to this Regulation.<br /> 4.<br /> Providers of general-purpose AI models may rely on codes of practice within the meaning of Article 56 to <br /> demonstrate compliance with the obligations set out in paragraph 1 of this Article, until a harmonised standard is <br /> published. Compliance with European harmonised standards grants providers the presumption of conformity to the extent <br /> that those standards cover those obligations. Providers of general-purpose AI models who do not adhere to an approved <br /> code of practice or do not comply with a European harmonised standard shall demonstrate alternative adequate means of <br /> compliance for assessment by the Commission.<br /> 5.<br /> For the purpose of facilitating compliance with Annex XI, in particular points 2 (d) and (e) thereof, the Commission is <br /> empowered to adopt delegated acts in accordance with Article 97 to detail measurement and calculation methodologies <br /> with a view to allowing for comparable and verifiable documentation.<br /> 6.<br /> The Commission is empowered to adopt delegated acts in accordance with Article 97(2) to amend Annexes XI and XII <br /> in light of evolving technological developments.<br /> 7.</p>
Show original text

The Commission can update the technical requirements (Annexes XI and XII) as technology advances. All information collected, including trade secrets, must be kept confidential under Article 78. Providers of general-purpose AI models based outside the EU must appoint an authorized representative located in the EU before selling their models in the EU market. The provider must give their representative the authority to perform specific tasks. The authorized representative must: (a) verify that the provider has completed all required technical documentation and met all obligations; (b) keep copies of the technical documentation available to the AI Office and national authorities for 10 years after the model is sold, along with the provider's contact information; (c) provide the AI Office with all necessary information and documentation when requested to prove the provider is following the rules; and (d) work with the AI Office and authorities when they investigate the general-purpose AI model.

<p>methodologies <br /> with a view to allowing for comparable and verifiable documentation.<br /> 6.<br /> The Commission is empowered to adopt delegated acts in accordance with Article 97(2) to amend Annexes XI and XII <br /> in light of evolving technological developments.<br /> 7.<br /> Any information or documentation obtained pursuant to this Article, including trade secrets, shall be treated in <br /> accordance with the confidentiality obligations set out in Article 78.<br /> Article 54<br /> Authorised representatives of providers of general-purpose AI models<br /> 1.<br /> Prior to placing a general-purpose AI model on the Union market, providers established in third countries shall, by <br /> written mandate, appoint an authorised representative which is established in the Union.<br /> 2.<br /> The provider shall enable its authorised representative to perform the tasks specified in the mandate received from the <br /> provider.<br /> 3.<br /> The authorised representative shall perform the tasks specified in the mandate received from the provider. It shall <br /> provide a copy of the mandate to the AI Office upon request, in one of the official languages of the institutions of the <br /> Union. For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the <br /> following tasks:<br /> (a) verify that the technical documentation specified in Annex XI has been drawn up and all obligations referred to in <br /> Article 53 and, where applicable, Article 55 have been fulfilled by the provider;<br /> (b) keep a copy of the technical documentation specified in Annex XI at the disposal of the AI Office and national <br /> competent authorities, for a period of 10 years after the general-purpose AI model has been placed on the market, and <br /> the contact details of the provider that appointed the authorised representative;<br /> (c) provide the AI Office, upon a reasoned request, with all the information and documentation, including that referred to <br /> in point (b), necessary to demonstrate compliance with the obligations in this Chapter;<br /> (d) cooperate with the AI Office and competent authorities, upon a reasoned request, in any action they take in relation to <br /> the general-purpose AI model</p>
Show original text

Providers of general-purpose AI models must cooperate with the AI Office and relevant authorities when requested, including providing documentation to show they follow the rules in this chapter. An authorized representative can be appointed to handle communications with the AI Office and authorities on compliance matters instead of or alongside the provider. This representative has the power to speak for the provider on all issues related to following this regulation. If the representative believes the provider is breaking the rules, they must end the agreement and immediately notify the AI Office about the termination and why. However, providers who release their models under a free and open-source license that allows access, use, modification, and sharing do not need to follow these requirements, unless their model poses systemic risks. In that case, even open-source models must comply.

<p>that referred to <br /> in point (b), necessary to demonstrate compliance with the obligations in this Chapter;<br /> (d) cooperate with the AI Office and competent authorities, upon a reasoned request, in any action they take in relation to <br /> the general-purpose AI model, including when the model is integrated into AI systems placed on the market or put into <br /> service in the Union.<br /> 4.<br /> The mandate shall empower the authorised representative to be addressed, in addition to or instead of the provider, <br /> by the AI Office or the competent authorities, on all issues related to ensuring compliance with this Regulation.<br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 85/144</p> <p>5.<br /> The authorised representative shall terminate the mandate if it considers or has reason to consider the provider to be <br /> acting contrary to its obligations pursuant to this Regulation. In such a case, it shall also immediately inform the AI Office <br /> about the termination of the mandate and the reasons therefor.<br /> 6.<br /> The obligation set out in this Article shall not apply to providers of general-purpose AI models that are released under <br /> a free and open-source licence that allows for the access, usage, modification, and distribution of the model, and whose <br /> parameters, including the weights, the information on the model architecture, and the information on model usage, are <br /> made publicly available, unless the general-purpose AI models present systemic risks.<br /> SECTION 3<br /> Obligations of providers of general-purpose AI models with systemic risk<br /> Article 55<br /> Obligations of providers of general-purpose AI models with systemic risk<br /> 1.</p>
Show original text

SECTION 3: Requirements for Providers of High-Risk General-Purpose AI Models

Article 55: What Providers Must Do

  1. Providers of general-purpose AI models that pose systemic risks must:

(a) Test their models using standard, up-to-date methods. This includes deliberately trying to break the model (adversarial testing) to find and fix potential systemic risks.

(b) Identify and reduce systemic risks at the EU level. These risks can come from developing, selling, or using the AI model.

(c) Track, document, and quickly report serious problems and solutions to the AI Office and relevant national authorities.

(d) Protect the AI model and its physical infrastructure from cyber attacks.

  1. Until official EU standards are published, providers can follow approved practice codes to show they meet these requirements. Once EU standards exist, following them means providers automatically comply. If providers do not follow an approved code or standard, they must show the Commission other ways they meet these requirements.

  2. All information and documents from this process, including business secrets, must be kept confidential according to Article 78.

<p>unless the general-purpose AI models present systemic risks.<br /> SECTION 3<br /> Obligations of providers of general-purpose AI models with systemic risk<br /> Article 55<br /> Obligations of providers of general-purpose AI models with systemic risk<br /> 1.<br /> In addition to the obligations listed in Articles 53 and 54, providers of general-purpose AI models with systemic risk <br /> shall:<br /> (a) perform model evaluation in accordance with standardised protocols and tools reflecting the state of the art, including <br /> conducting and documenting adversarial testing of the model with a view to identifying and mitigating systemic risks;<br /> (b) assess and mitigate possible systemic risks at Union level, including their sources, that may stem from the development, <br /> the placing on the market, or the use of general-purpose AI models with systemic risk;<br /> (c) keep track of, document, and report, without undue delay, to the AI Office and, as appropriate, to national competent <br /> authorities, relevant information about serious incidents and possible corrective measures to address them;<br /> (d) ensure an adequate level of cybersecurity protection for the general-purpose AI model with systemic risk and the <br /> physical infrastructure of the model.<br /> 2.<br /> Providers of general-purpose AI models with systemic risk may rely on codes of practice within the meaning of <br /> Article 56 to demonstrate compliance with the obligations set out in paragraph 1 of this Article, until a harmonised <br /> standard is published. Compliance with European harmonised standards grants providers the presumption of conformity to <br /> the extent that those standards cover those obligations. Providers of general-purpose AI models with systemic risks who do <br /> not adhere to an approved code of practice or do not comply with a European harmonised standard shall demonstrate <br /> alternative adequate means of compliance for assessment by the Commission.<br /> 3.<br /> Any information or documentation obtained pursuant to this Article, including trade secrets, shall be treated in <br /> accordance with the confidentiality obligations set out in Article 78.</p>
Show original text

Companies must show they have other acceptable ways to follow the rules, which the Commission will review. Any information or trade secrets shared under this section must be kept confidential according to Article 78. The AI Office will create and support industry codes of practice at the European Union level to help enforce this regulation, considering international standards. The AI Office and Board will ensure these codes cover the requirements in Articles 53 and 55, including: keeping information about AI training data current as technology changes; providing clear summaries of training data; identifying systemic risks across the EU and their causes; and establishing procedures to assess and manage these risks proportionally based on their severity and likelihood. The AI Office can invite all general-purpose AI model providers and national authorities to help develop these codes of practice.

<p>standard shall demonstrate <br /> alternative adequate means of compliance for assessment by the Commission.<br /> 3.<br /> Any information or documentation obtained pursuant to this Article, including trade secrets, shall be treated in <br /> accordance with the confidentiality obligations set out in Article 78.<br /> SECTION 4<br /> Codes of practice<br /> Article 56<br /> Codes of practice<br /> 1.<br /> The AI Office shall encourage and facilitate the drawing up of codes of practice at Union level in order to contribute <br /> to the proper application of this Regulation, taking into account international approaches.<br /> 2.<br /> The AI Office and the Board shall aim to ensure that the codes of practice cover at least the obligations provided for in <br /> Articles 53 and 55, including the following issues:<br /> EN<br /> OJ L, 12.7.2024<br /> 86/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>(a) the means to ensure that the information referred to in Article 53(1), points (a) and (b), is kept up to date in light of <br /> market and technological developments;<br /> (b) the adequate level of detail for the summary about the content used for training;<br /> (c) the identification of the type and nature of the systemic risks at Union level, including their sources, where appropriate;<br /> (d) the measures, procedures and modalities for the assessment and management of the systemic risks at Union level, <br /> including the documentation thereof, which shall be proportionate to the risks, take into consideration their severity <br /> and probability and take into account the specific challenges of tackling those risks in light of the possible ways in <br /> which such risks may emerge and materialise along the AI value chain.<br /> 3.<br /> The AI Office may invite all providers of general-purpose AI models, as well as relevant national competent <br /> authorities, to participate in the drawing-up of codes of practice.</p>
Show original text

The AI Office can invite all general-purpose AI model providers and relevant national authorities to help create codes of practice. Civil society groups, industry representatives, academics, and other stakeholders like downstream providers and independent experts can also participate.

The AI Office and Board will ensure these codes of practice have clear objectives and include specific commitments or measures with key performance indicators to track progress. They will consider the needs and interests of all parties involved, including affected people across the EU.

Participants must regularly report to the AI Office on how they are following the codes of practice, what actions they have taken, and the results. The reporting requirements will be adjusted based on the size and capacity of each participant.

The AI Office and Board will regularly check whether participants are meeting the codes' objectives and following this Regulation. They will also verify that the codes cover the required obligations and will publish their findings on whether the codes are adequate.

The Commission can officially approve a code of practice and make it valid throughout the EU through an implementing act, following the examination procedure in Article 98(2).

The AI Office can ask all general-purpose AI model providers to follow these codes of practice.

<p>which such risks may emerge and materialise along the AI value chain.<br /> 3.<br /> The AI Office may invite all providers of general-purpose AI models, as well as relevant national competent <br /> authorities, to participate in the drawing-up of codes of practice. Civil society organisations, industry, academia and other <br /> relevant stakeholders, such as downstream providers and independent experts, may support the process.<br /> 4.<br /> The AI Office and the Board shall aim to ensure that the codes of practice clearly set out their specific objectives and <br /> contain commitments or measures, including key performance indicators as appropriate, to ensure the achievement of <br /> those objectives, and that they take due account of the needs and interests of all interested parties, including affected <br /> persons, at Union level.<br /> 5.<br /> The AI Office shall aim to ensure that participants to the codes of practice report regularly to the AI Office on the <br /> implementation of the commitments and the measures taken and their outcomes, including as measured against the key <br /> performance indicators as appropriate. Key performance indicators and reporting commitments shall reflect differences in <br /> size and capacity between various participants.<br /> 6.<br /> The AI Office and the Board shall regularly monitor and evaluate the achievement of the objectives of the codes of <br /> practice by the participants and their contribution to the proper application of this Regulation. The AI Office and the Board <br /> shall assess whether the codes of practice cover the obligations provided for in Articles 53 and 55, and shall regularly <br /> monitor and evaluate the achievement of their objectives. They shall publish their assessment of the adequacy of the codes <br /> of practice.<br /> The Commission may, by way of an implementing act, approve a code of practice and give it a general validity within the <br /> Union. That implementing act shall be adopted in accordance with the examination procedure referred to in Article 98(2).<br /> 7.<br /> The AI Office may invite all providers of general-purpose AI models to adhere to the codes of practice.</p>
Show original text

The AI Office can invite all providers of general-purpose AI models to follow codes of practice. Providers whose models do not pose systemic risks only need to follow Article 53 requirements, unless they choose to follow the full code. The AI Office will help review and update these codes as new standards emerge. Codes of practice must be completed by May 2, 2025. If a code is not finished or deemed inadequate by August 2, 2025, the Commission can create common rules for Articles 53 and 55 requirements instead. Each Member State must establish at least one AI regulatory sandbox (a testing environment) by August 2, 2026. Multiple Member States can work together to create a shared sandbox.

<p>general validity within the <br /> Union. That implementing act shall be adopted in accordance with the examination procedure referred to in Article 98(2).<br /> 7.<br /> The AI Office may invite all providers of general-purpose AI models to adhere to the codes of practice. For providers <br /> of general-purpose AI models not presenting systemic risks this adherence may be limited to the obligations provided for in <br /> Article 53, unless they declare explicitly their interest to join the full code.<br /> 8.<br /> The AI Office shall, as appropriate, also encourage and facilitate the review and adaptation of the codes of practice, in <br /> particular in light of emerging standards. The AI Office shall assist in the assessment of available standards.<br /> 9.<br /> Codes of practice shall be ready at the latest by 2 May 2025. The AI Office shall take the necessary steps, including <br /> inviting providers pursuant to paragraph 7.<br /> If, by 2 August 2025, a code of practice cannot be finalised, or if the AI Office deems it is not adequate following its <br /> assessment under paragraph 6 of this Article, the Commission may provide, by means of implementing acts, common rules <br /> for the implementation of the obligations provided for in Articles 53 and 55, including the issues set out in paragraph 2 of <br /> this Article. Those implementing acts shall be adopted in accordance with the examination procedure referred to in Article <br /> 98(2).<br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 87/144</p> <p>CHAPTER VI<br /> MEASURES IN SUPPORT OF INNOVATION<br /> Article 57<br /> AI regulatory sandboxes<br /> 1.<br /> Member States shall ensure that their competent authorities establish at least one AI regulatory sandbox at national <br /> level, which shall be operational by 2 August 2026. That sandbox may also be established jointly with the competent <br /> authorities of other Member States.</p>
Show original text

Each Member State must set up at least one AI regulatory sandbox (a testing environment for AI innovation) at the national level by August 2, 2026. Member States can establish this sandbox together with other Member States if they choose. The Commission can offer technical support, advice, and tools to help set up and run these sandboxes.

Member States can meet this requirement by joining an existing sandbox if it provides adequate national coverage.

Member States may also create additional sandboxes at regional or local levels, or work together with other Member States to establish them.

The European Data Protection Supervisor can create a separate AI sandbox for EU institutions, bodies, offices, and agencies, and will have the same responsibilities as national authorities.

Member States must ensure their authorities have enough resources to effectively and promptly fulfill these requirements. National authorities should cooperate with other relevant authorities and may involve other organizations in the AI field. This does not affect other sandboxes already established under EU or national law. Member States must ensure proper cooperation between authorities overseeing these other sandboxes and the national authorities.

AI regulatory sandboxes must provide a controlled testing environment that encourages innovation and allows companies to develop, train, test, and validate new AI systems for a limited time before releasing them to the market. This testing can include real-world conditions under supervision, based on an agreement between the AI provider and the competent authority.

<p>Member States shall ensure that their competent authorities establish at least one AI regulatory sandbox at national <br /> level, which shall be operational by 2 August 2026. That sandbox may also be established jointly with the competent <br /> authorities of other Member States. The Commission may provide technical support, advice and tools for the establishment <br /> and operation of AI regulatory sandboxes.<br /> The obligation under the first subparagraph may also be fulfilled by participating in an existing sandbox in so far as that <br /> participation provides an equivalent level of national coverage for the participating Member States.<br /> 2.<br /> Additional AI regulatory sandboxes at regional or local level, or established jointly with the competent authorities of <br /> other Member States may also be established.<br /> 3.<br /> The European Data Protection Supervisor may also establish an AI regulatory sandbox for Union institutions, bodies, <br /> offices and agencies, and may exercise the roles and the tasks of national competent authorities in accordance with this <br /> Chapter.<br /> 4.<br /> Member States shall ensure that the competent authorities referred to in paragraphs 1 and 2 allocate sufficient <br /> resources to comply with this Article effectively and in a timely manner. Where appropriate, national competent authorities <br /> shall cooperate with other relevant authorities, and may allow for the involvement of other actors within the AI ecosystem. <br /> This Article shall not affect other regulatory sandboxes established under Union or national law. Member States shall ensure <br /> an appropriate level of cooperation between the authorities supervising those other sandboxes and the national competent <br /> authorities.<br /> 5.<br /> AI regulatory sandboxes established under paragraph 1 shall provide for a controlled environment that fosters <br /> innovation and facilitates the development, training, testing and validation of innovative AI systems for a limited time <br /> before their being placed on the market or put into service pursuant to a specific sandbox plan agreed between the <br /> providers or prospective providers and the competent authority. Such sandboxes may include testing in real world <br /> conditions supervised therein.<br /> 6.</p>
Show original text

AI companies can test their systems in a controlled environment (sandbox) for a limited time before releasing them to the market. These tests can happen in real-world conditions under supervision.

The government authority overseeing the sandbox will provide guidance and support to identify potential risks, especially those affecting fundamental rights, health, and safety. They will help companies test their systems and check if mitigation measures work properly according to the regulations.

The authority will also guide companies on what is expected of them and how to meet all legal requirements. When a company finishes testing, the authority will provide written proof of successful activities and a detailed exit report showing what was tested, the results, and what was learned. Companies can use this documentation to prove they follow the regulations during the official approval process. Market surveillance authorities and testing bodies will view these reports positively and use them to speed up approval procedures.

With the company's permission, the European Commission and the AI Board can access the exit reports and use them for their work. If both the company and the national authority agree, the exit report can be shared publicly on a central information platform.

<p>a limited time <br /> before their being placed on the market or put into service pursuant to a specific sandbox plan agreed between the <br /> providers or prospective providers and the competent authority. Such sandboxes may include testing in real world <br /> conditions supervised therein.<br /> 6.<br /> Competent authorities shall provide, as appropriate, guidance, supervision and support within the AI regulatory <br /> sandbox with a view to identifying risks, in particular to fundamental rights, health and safety, testing, mitigation measures, <br /> and their effectiveness in relation to the obligations and requirements of this Regulation and, where relevant, other Union <br /> and national law supervised within the sandbox.<br /> 7.<br /> Competent authorities shall provide providers and prospective providers participating in the AI regulatory sandbox <br /> with guidance on regulatory expectations and how to fulfil the requirements and obligations set out in this Regulation.<br /> Upon request of the provider or prospective provider of the AI system, the competent authority shall provide a written <br /> proof of the activities successfully carried out in the sandbox. The competent authority shall also provide an exit report <br /> detailing the activities carried out in the sandbox and the related results and learning outcomes. Providers may use such <br /> documentation to demonstrate their compliance with this Regulation through the conformity assessment process or <br /> relevant market surveillance activities. In this regard, the exit reports and the written proof provided by the national <br /> competent authority shall be taken positively into account by market surveillance authorities and notified bodies, with <br /> a view to accelerating conformity assessment procedures to a reasonable extent.<br /> 8.<br /> Subject to the confidentiality provisions in Article 78, and with the agreement of the provider or prospective provider, <br /> the Commission and the Board shall be authorised to access the exit reports and shall take them into account, as <br /> appropriate, when exercising their tasks under this Regulation. If both the provider or prospective provider and the national <br /> competent authority explicitly agree, the exit report may be made publicly available through the single information <br /> platform referred to in this Article.<br /> 9.</p>
Show original text

AI regulatory sandboxes are controlled testing environments where companies can develop and test new AI systems while working with government authorities. These sandboxes have five main goals: (1) help companies understand and follow AI regulations, (2) allow authorities to share best practices and learn from each other, (3) encourage innovation and help build a strong AI industry, (4) provide real-world evidence to improve future regulations, and (5) make it easier for companies—especially small businesses and startups—to bring AI products to the European market. When testing involves personal data or affects other areas under different authorities' control, those authorities must be included in the sandbox process. Authorities keep their full power to supervise and make corrections, and can stop or suspend testing at any time if serious risks to health, safety, or people's rights are found and cannot be fixed. If an exit report is created when a company leaves the sandbox, it can be made public if both the company and the authority agree.

<p>appropriate, when exercising their tasks under this Regulation. If both the provider or prospective provider and the national <br /> competent authority explicitly agree, the exit report may be made publicly available through the single information <br /> platform referred to in this Article.<br /> 9.<br /> The establishment of AI regulatory sandboxes shall aim to contribute to the following objectives:<br /> (a) improving legal certainty to achieve regulatory compliance with this Regulation or, where relevant, other applicable <br /> Union and national law;<br /> EN<br /> OJ L, 12.7.2024<br /> 88/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>(b) supporting the sharing of best practices through cooperation with the authorities involved in the AI regulatory <br /> sandbox;<br /> (c) fostering innovation and competitiveness and facilitating the development of an AI ecosystem;<br /> (d) contributing to evidence-based regulatory learning;<br /> (e) facilitating and accelerating access to the Union market for AI systems, in particular when provided by SMEs, including <br /> start-ups.<br /> 10.<br /> National competent authorities shall ensure that, to the extent the innovative AI systems involve the processing of <br /> personal data or otherwise fall under the supervisory remit of other national authorities or competent authorities providing <br /> or supporting access to data, the national data protection authorities and those other national or competent authorities are <br /> associated with the operation of the AI regulatory sandbox and involved in the supervision of those aspects to the extent of <br /> their respective tasks and powers.<br /> 11.<br /> The AI regulatory sandboxes shall not affect the supervisory or corrective powers of the competent authorities <br /> supervising the sandboxes, including at regional or local level. Any significant risks to health and safety and fundamental <br /> rights identified during the development and testing of such AI systems shall result in an adequate mitigation. National <br /> competent authorities shall have the power to temporarily or permanently suspend the testing process, or the participation <br /> in the sandbox if no effective mitigation is possible, and shall inform the AI Office of such decision.</p>
Show original text

National authorities can temporarily or permanently stop testing or sandbox participation if risks cannot be properly managed. They must inform the AI Office of this decision. These authorities have the power to supervise sandboxes within the law and can use their judgment when implementing rules for specific AI sandbox projects. Their goal is to support AI innovation in the EU.

Companies participating in AI sandboxes remain responsible under EU and national law for any harm caused to others during testing. However, if participants follow their approved plan, meet participation requirements, and honestly follow the national authority's guidance, they will not face financial penalties for breaking this regulation. If other authorities helped supervise the AI system and gave compliance guidance, they also will not impose penalties under their laws.

AI sandboxes must be designed to allow cooperation between national authorities across borders when needed.

National authorities must work together and coordinate through the Board.

National authorities must tell the AI Office and Board when they create a sandbox and can request their support and advice. The AI Office will publish and maintain a public list of planned and active sandboxes to encourage more participation and cross-border cooperation.

<p>systems shall result in an adequate mitigation. National <br /> competent authorities shall have the power to temporarily or permanently suspend the testing process, or the participation <br /> in the sandbox if no effective mitigation is possible, and shall inform the AI Office of such decision. National competent <br /> authorities shall exercise their supervisory powers within the limits of the relevant law, using their discretionary powers <br /> when implementing legal provisions in respect of a specific AI regulatory sandbox project, with the objective of supporting <br /> innovation in AI in the Union.<br /> 12.<br /> Providers and prospective providers participating in the AI regulatory sandbox shall remain liable under applicable <br /> Union and national liability law for any damage inflicted on third parties as a result of the experimentation taking place in <br /> the sandbox. However, provided that the prospective providers observe the specific plan and the terms and conditions for <br /> their participation and follow in good faith the guidance given by the national competent authority, no administrative fines <br /> shall be imposed by the authorities for infringements of this Regulation. Where other competent authorities responsible for <br /> other Union and national law were actively involved in the supervision of the AI system in the sandbox and provided <br /> guidance for compliance, no administrative fines shall be imposed regarding that law.<br /> 13.<br /> The AI regulatory sandboxes shall be designed and implemented in such a way that, where relevant, they facilitate <br /> cross-border cooperation between national competent authorities.<br /> 14.<br /> National competent authorities shall coordinate their activities and cooperate within the framework of the Board.<br /> 15.<br /> National competent authorities shall inform the AI Office and the Board of the establishment of a sandbox, and may <br /> ask them for support and guidance. The AI Office shall make publicly available a list of planned and existing sandboxes and <br /> keep it up to date in order to encourage more interaction in the AI regulatory sandboxes and cross-border cooperation.<br /> 16.</p>
Show original text

The AI Office will publish and maintain a list of all planned and existing AI regulatory sandboxes to promote greater participation and cooperation across borders. National authorities must submit yearly reports to the AI Office and Board, starting one year after a sandbox is established and continuing annually until it closes, plus a final report. These reports should cover progress, results, best practices, problems, lessons learned, and recommendations for improving the sandboxes and related regulations. National authorities must share these reports or summaries publicly online. The Commission will consider these reports when implementing this Regulation. The Commission will create a single online platform where stakeholders can access information about AI regulatory sandboxes, contact competent authorities, and request guidance on whether their AI-based products, services, and business models comply with regulations. The Commission will work closely with national authorities to coordinate this effort.

<p>may <br /> ask them for support and guidance. The AI Office shall make publicly available a list of planned and existing sandboxes and <br /> keep it up to date in order to encourage more interaction in the AI regulatory sandboxes and cross-border cooperation.<br /> 16.<br /> National competent authorities shall submit annual reports to the AI Office and to the Board, from one year after <br /> the establishment of the AI regulatory sandbox and every year thereafter until its termination, and a final report. Those <br /> reports shall provide information on the progress and results of the implementation of those sandboxes, including best <br /> practices, incidents, lessons learnt and recommendations on their setup and, where relevant, on the application and possible <br /> revision of this Regulation, including its delegated and implementing acts, and on the application of other Union law <br /> supervised by the competent authorities within the sandbox. The national competent authorities shall make those annual <br /> reports or abstracts thereof available to the public, online. The Commission shall, where appropriate, take the annual <br /> reports into account when exercising its tasks under this Regulation.<br /> 17.<br /> The Commission shall develop a single and dedicated interface containing all relevant information related to AI <br /> regulatory sandboxes to allow stakeholders to interact with AI regulatory sandboxes and to raise enquiries with competent <br /> authorities, and to seek non-binding guidance on the conformity of innovative products, services, business models <br /> embedding AI technologies, in accordance with Article 62(1), point (c). The Commission shall proactively coordinate with <br /> national competent authorities, where relevant.<br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 89/144</p> <p>Article 58<br /> Detailed arrangements for, and functioning of, AI regulatory sandboxes<br /> 1.</p>
Show original text

Article 58 outlines how AI regulatory sandboxes should work across the European Union. The European Commission will create detailed rules to prevent different countries from having different systems. These rules will cover who can join, how to apply and exit, and what conditions participants must follow. The Commission will ensure that: sandboxes are open to any AI company or startup that meets fair and transparent requirements, with decisions made within three months; access is broad and equal, allowing companies to apply alone or with partners; national authorities have flexibility in running their sandboxes; participation is free for small businesses and startups, though authorities can recover exceptional costs fairly; and sandboxes help companies learn how to comply with regulations.

<p>.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 89/144</p> <p>Article 58<br /> Detailed arrangements for, and functioning of, AI regulatory sandboxes<br /> 1.<br /> In order to avoid fragmentation across the Union, the Commission shall adopt implementing acts specifying the <br /> detailed arrangements for the establishment, development, implementation, operation and supervision of the AI regulatory <br /> sandboxes. The implementing acts shall include common principles on the following issues:<br /> (a) eligibility and selection criteria for participation in the AI regulatory sandbox;<br /> (b) procedures for the application, participation, monitoring, exiting from and termination of the AI regulatory sandbox, <br /> including the sandbox plan and the exit report;<br /> (c) the terms and conditions applicable to the participants.<br /> Those implementing acts shall be adopted in accordance with the examination procedure referred to in Article 98(2).<br /> 2.<br /> The implementing acts referred to in paragraph 1 shall ensure:<br /> (a) that AI regulatory sandboxes are open to any applying provider or prospective provider of an AI system who fulfils <br /> eligibility and selection criteria, which shall be transparent and fair, and that national competent authorities inform <br /> applicants of their decision within three months of the application;<br /> (b) that AI regulatory sandboxes allow broad and equal access and keep up with demand for participation; providers and <br /> prospective providers may also submit applications in partnerships with deployers and other relevant third parties;<br /> (c) that the detailed arrangements for, and conditions concerning AI regulatory sandboxes support, to the best extent <br /> possible, flexibility for national competent authorities to establish and operate their AI regulatory sandboxes;<br /> (d) that access to the AI regulatory sandboxes is free of charge for SMEs, including start-ups, without prejudice to <br /> exceptional costs that national competent authorities may recover in a fair and proportionate manner;<br /> (e) that they facilitate providers and prospective providers, by means of the learning outcomes of the AI regulatory <br /> sandboxes, in complying</p>
Show original text

AI regulatory sandboxes should have the following features: (e) They help companies meet regulatory requirements and follow voluntary conduct codes. (f) They involve key players in the AI field, including testing organizations, research labs, small businesses, startups, and innovation centers, to encourage cooperation between public and private sectors. (g) The application process and rules are simple and clear so small companies and startups can easily participate. These rules should be the same across all EU countries to avoid confusion. If a company joins a sandbox in one country or through the European Data Protection Supervisor, that participation is recognized everywhere in the EU with the same legal benefits. (h) Companies can participate for a set time based on their project's complexity, and this period can be extended by the national authority. (i) The sandboxes provide tools and resources to test and evaluate AI systems, measuring things like accuracy, reliability, and security, as well as protecting people's rights and society.

<p>start-ups, without prejudice to <br /> exceptional costs that national competent authorities may recover in a fair and proportionate manner;<br /> (e) that they facilitate providers and prospective providers, by means of the learning outcomes of the AI regulatory <br /> sandboxes, in complying with conformity assessment obligations under this Regulation and the voluntary application <br /> of the codes of conduct referred to in Article 95;<br /> (f) that AI regulatory sandboxes facilitate the involvement of other relevant actors within the AI ecosystem, such as notified <br /> bodies and standardisation organisations, SMEs, including start-ups, enterprises, innovators, testing and experimenta­<br /> tion facilities, research and experimentation labs and European Digital Innovation Hubs, centres of excellence, <br /> individual researchers, in order to allow and facilitate cooperation with the public and private sectors;<br /> (g) that procedures, processes and administrative requirements for application, selection, participation and exiting the AI <br /> regulatory sandbox are simple, easily intelligible, and clearly communicated in order to facilitate the participation of <br /> SMEs, including start-ups, with limited legal and administrative capacities and are streamlined across the Union, in order <br /> to avoid fragmentation and that participation in an AI regulatory sandbox established by a Member State, or by the <br /> European Data Protection Supervisor is mutually and uniformly recognised and carries the same legal effects across the <br /> Union;<br /> (h) that participation in the AI regulatory sandbox is limited to a period that is appropriate to the complexity and scale of <br /> the project and that may be extended by the national competent authority;<br /> (i) that AI regulatory sandboxes facilitate the development of tools and infrastructure for testing, benchmarking, assessing <br /> and explaining dimensions of AI systems relevant for regulatory learning, such as accuracy, robustness and <br /> cybersecurity, as well as measures to mitigate risks to fundamental rights and society at large.<br /> 3.</p>
Show original text

AI regulatory sandboxes provide tools and facilities to test and evaluate AI systems. These tools measure important factors like accuracy, robustness, and cybersecurity. They also help identify and reduce risks to people's rights and society.

Small companies and startups in these sandboxes receive support services. This includes guidance on following AI regulations, help with standards and certification, access to testing facilities, and support from Digital Innovation Hubs and research centers.

When national authorities approve real-world testing in a sandbox, they must set clear terms and safety rules with participants. These rules protect people's rights, health, and safety. Authorities in different countries should work together to ensure consistent practices across Europe.

Inside the sandbox, personal data collected for other purposes can be reused to develop and test certain AI systems. This is allowed only when the AI system serves an important public interest, such as public safety, public health, disease detection, diagnosis, prevention, treatment, or improving healthcare systems.

<p>and infrastructure for testing, benchmarking, assessing <br /> and explaining dimensions of AI systems relevant for regulatory learning, such as accuracy, robustness and <br /> cybersecurity, as well as measures to mitigate risks to fundamental rights and society at large.<br /> 3.<br /> Prospective providers in the AI regulatory sandboxes, in particular SMEs and start-ups, shall be directed, where <br /> relevant, to pre-deployment services such as guidance on the implementation of this Regulation, to other value-adding <br /> services such as help with standardisation documents and certification, testing and experimentation facilities, European <br /> Digital Innovation Hubs and centres of excellence.<br /> EN<br /> OJ L, 12.7.2024<br /> 90/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>4.<br /> Where national competent authorities consider authorising testing in real world conditions supervised within the <br /> framework of an AI regulatory sandbox to be established under this Article, they shall specifically agree the terms and <br /> conditions of such testing and, in particular, the appropriate safeguards with the participants, with a view to protecting <br /> fundamental rights, health and safety. Where appropriate, they shall cooperate with other national competent authorities <br /> with a view to ensuring consistent practices across the Union.<br /> Article 59<br /> Further processing of personal data for developing certain AI systems in the public interest in the AI regulatory <br /> sandbox<br /> 1.<br /> In the AI regulatory sandbox, personal data lawfully collected for other purposes may be processed solely for the <br /> purpose of developing, training and testing certain AI systems in the sandbox when all of the following conditions are met:<br /> (a) AI systems shall be developed for safeguarding substantial public interest by a public authority or another natural or <br /> legal person and in one or more of the following areas:<br /> (i) public safety and public health, including disease detection, diagnosis prevention, control and treatment and <br /> improvement of health care systems;<br /> (ii) a high level of protection and improvement of the quality of</p>
Show original text

A data sandbox can be used for one or more of these purposes: (i) public safety and health, including detecting, diagnosing, preventing, controlling and treating diseases, and improving healthcare systems; (ii) protecting and improving the environment, preserving biodiversity, preventing pollution, supporting green energy transition, and addressing climate change; (iii) sustainable energy; (iv) safe and reliable transport systems, infrastructure and networks; (v) efficient and effective government administration and services. To qualify, the sandbox must meet these requirements: (b) Personal data must be necessary because anonymized or synthetic data cannot effectively meet the requirements; (c) There must be systems to monitor for risks to people's rights and quick response procedures to reduce risks or stop processing if needed; (d) Personal data must be kept in a separate, isolated, protected environment controlled by the provider, with access limited to authorized staff only; (e) Original data can only be shared following EU data protection laws, and new data created in the sandbox cannot leave it; (f) The sandbox processing cannot make decisions affecting people or interfere with their data protection rights; (g) Personal data must be protected with appropriate security measures and deleted when the sandbox participation ends.

<p>in one or more of the following areas:<br /> (i) public safety and public health, including disease detection, diagnosis prevention, control and treatment and <br /> improvement of health care systems;<br /> (ii) a high level of protection and improvement of the quality of the environment, protection of biodiversity, protection <br /> against pollution, green transition measures, climate change mitigation and adaptation measures;<br /> (iii) energy sustainability;<br /> (iv) safety and resilience of transport systems and mobility, critical infrastructure and networks;<br /> (v) efficiency and quality of public administration and public services;<br /> (b) the data processed are necessary for complying with one or more of the requirements referred to in Chapter III, <br /> Section 2 where those requirements cannot effectively be fulfilled by processing anonymised, synthetic or other <br /> non-personal data;<br /> (c) there are effective monitoring mechanisms to identify if any high risks to the rights and freedoms of the data subjects, as <br /> referred to in Article 35 of Regulation (EU) 2016/679 and in Article 39 of Regulation (EU) 2018/1725, may arise <br /> during the sandbox experimentation, as well as response mechanisms to promptly mitigate those risks and, where <br /> necessary, stop the processing;<br /> (d) any personal data to be processed in the context of the sandbox are in a functionally separate, isolated and protected <br /> data processing environment under the control of the prospective provider and only authorised persons have access to <br /> those data;<br /> (e) providers can further share the originally collected data only in accordance with Union data protection law; any <br /> personal data created in the sandbox cannot be shared outside the sandbox;<br /> (f) any processing of personal data in the context of the sandbox neither leads to measures or decisions affecting the data <br /> subjects nor does it affect the application of their rights laid down in Union law on the protection of personal data;<br /> (g) any personal data processed in the context of the sandbox are protected by means of appropriate technical and <br /> organisational measures and deleted once the participation in the sandbox has</p>
Show original text

Personal data used in AI testing sandboxes must be protected with strong security measures and deleted when testing ends or the retention period expires. Records of all data processing during sandbox testing must be kept for the duration of participation, unless other laws require otherwise. Companies must document the complete process of how the AI system was trained, tested, and validated, including all results. A brief summary of the AI project, its goals, and expected outcomes must be published on the competent authority's website, except for sensitive information related to law enforcement, border control, immigration, or asylum activities. For criminal investigations, crime prevention, prosecution, or public security purposes, personal data processing in AI sandboxes must follow specific national or EU laws and meet the same requirements listed above. All data processing must comply with EU personal data protection laws.

<p>application of their rights laid down in Union law on the protection of personal data;<br /> (g) any personal data processed in the context of the sandbox are protected by means of appropriate technical and <br /> organisational measures and deleted once the participation in the sandbox has terminated or the personal data has <br /> reached the end of its retention period;<br /> (h) the logs of the processing of personal data in the context of the sandbox are kept for the duration of the participation in <br /> the sandbox, unless provided otherwise by Union or national law;<br /> (i) a complete and detailed description of the process and rationale behind the training, testing and validation of the AI <br /> system is kept together with the testing results as part of the technical documentation referred to in Annex IV;<br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 91/144</p> <p>(j) a short summary of the AI project developed in the sandbox, its objectives and expected results is published on the <br /> website of the competent authorities; this obligation shall not cover sensitive operational data in relation to the activities <br /> of law enforcement, border control, immigration or asylum authorities.<br /> 2.<br /> For the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of <br /> criminal penalties, including safeguarding against and preventing threats to public security, under the control and <br /> responsibility of law enforcement authorities, the processing of personal data in AI regulatory sandboxes shall be based on <br /> a specific Union or national law and subject to the same cumulative conditions as referred to in paragraph 1.<br /> 3.</p>
Show original text

Law enforcement authorities control personal data processing in AI regulatory sandboxes. This processing must follow specific Union or national laws and meet the same conditions as paragraph 1.

Paragraph 1 does not affect Union or national laws that restrict personal data processing to only specific purposes, or laws that allow personal data processing for developing, testing, or training new AI systems. All processing must comply with Union data protection laws.

Article 60: Testing High-Risk AI Systems in Real-World Conditions Outside Sandboxes

Companies that provide or plan to provide high-risk AI systems listed in Annex III can test these systems in real-world conditions outside AI regulatory sandboxes. They must follow this Article and create a real-world testing plan. This does not override the prohibitions in Article 5. The Commission will provide detailed requirements for the testing plan through implementing acts.

This does not affect Union or national laws about testing high-risk AI systems related to products covered by Union harmonisation legislation listed in Annex I.

Providers or prospective providers can test high-risk AI systems from Annex III in real-world conditions at any time before selling or using the AI system. They can conduct testing alone or with one or more deployers or prospective deployers.

Testing high-risk AI systems in real-world conditions must comply with any ethical review required by Union or national law.

<p>under the control and <br /> responsibility of law enforcement authorities, the processing of personal data in AI regulatory sandboxes shall be based on <br /> a specific Union or national law and subject to the same cumulative conditions as referred to in paragraph 1.<br /> 3.<br /> Paragraph 1 is without prejudice to Union or national law which excludes processing of personal data for other <br /> purposes than those explicitly mentioned in that law, as well as to Union or national law laying down the basis for the <br /> processing of personal data which is necessary for the purpose of developing, testing or training of innovative AI systems or <br /> any other legal basis, in compliance with Union law on the protection of personal data.<br /> Article 60<br /> Testing of high-risk AI systems in real world conditions outside AI regulatory sandboxes<br /> 1.<br /> Testing of high-risk AI systems in real world conditions outside AI regulatory sandboxes may be conducted by <br /> providers or prospective providers of high-risk AI systems listed in Annex III, in accordance with this Article and the <br /> real-world testing plan referred to in this Article, without prejudice to the prohibitions under Article 5.<br /> The Commission shall, by means of implementing acts, specify the detailed elements of the real-world testing plan. Those <br /> implementing acts shall be adopted in accordance with the examination procedure referred to in Article 98(2).<br /> This paragraph shall be without prejudice to Union or national law on the testing in real world conditions of high-risk AI <br /> systems related to products covered by Union harmonisation legislation listed in Annex I.<br /> 2.<br /> Providers or prospective providers may conduct testing of high-risk AI systems referred to in Annex III in real world <br /> conditions at any time before the placing on the market or the putting into service of the AI system on their own or in <br /> partnership with one or more deployers or prospective deployers.<br /> 3.<br /> The testing of high-risk AI systems in real world conditions under this Article shall be without prejudice to any ethical <br /> review that is required by Union or national law.<br /> 4.</p>
Show original text

Companies or potential companies can test high-risk AI systems in real-world conditions, but only if they follow these steps: First, they must create a testing plan and submit it to the market surveillance authority in the country where testing will happen. Second, that authority must approve both the plan and the testing. If the authority doesn't respond within 30 days, the testing is automatically approved (unless national law requires explicit approval). Third, the company must register the testing with a unique identification number. Most companies register in a public EU database with details from Annex IX. However, companies testing high-risk AI systems used for law enforcement, migration, asylum, or border control must register in a secure, non-public section of the EU database instead. All testing must also follow any ethical reviews required by EU or national law.

<p>partnership with one or more deployers or prospective deployers.<br /> 3.<br /> The testing of high-risk AI systems in real world conditions under this Article shall be without prejudice to any ethical <br /> review that is required by Union or national law.<br /> 4.<br /> Providers or prospective providers may conduct the testing in real world conditions only where all of the following <br /> conditions are met:<br /> (a) the provider or prospective provider has drawn up a real-world testing plan and submitted it to the market surveillance <br /> authority in the Member State where the testing in real world conditions is to be conducted;<br /> (b) the market surveillance authority in the Member State where the testing in real world conditions is to be conducted has <br /> approved the testing in real world conditions and the real-world testing plan; where the market surveillance authority <br /> has not provided an answer within 30 days, the testing in real world conditions and the real-world testing plan shall be <br /> understood to have been approved; where national law does not provide for a tacit approval, the testing in real world <br /> conditions shall remain subject to an authorisation;<br /> (c) the provider or prospective provider, with the exception of providers or prospective providers of high-risk AI systems <br /> referred to in points 1, 6 and 7 of Annex III in the areas of law enforcement, migration, asylum and border control <br /> management, and high-risk AI systems referred to in point 2 of Annex III has registered the testing in real world <br /> conditions in accordance with Article 71(4) with a Union-wide unique single identification number and with the <br /> information specified in Annex IX; the provider or prospective provider of high-risk AI systems referred to in points 1, <br /> 6 and 7 of Annex III in the areas of law enforcement, migration, asylum and border control management, has registered <br /> the testing in real-world conditions in the secure non-public section of the EU database according to Article 49(4), point <br /> (d), with a Union-wide unique single identification number and with</p>
Show original text

Companies testing high-risk AI systems in real conditions must follow these rules: They must register their testing in the EU's secure database with a unique identification number. The company conducting the test must be based in the EU or have a legal representative there. Any data collected during testing can only be shared with countries outside the EU if proper legal protections are in place. Testing cannot last longer than six months, though it can be extended for another six months if the company notifies the market surveillance authority and explains why. If vulnerable people (children or people with disabilities) are involved in testing, they must be properly protected. When companies work with other organizations to conduct testing, those partners must be fully informed about all relevant details and given proper instructions on how to use the AI system. The company and its partners must sign an agreement that clearly defines each party's roles and responsibilities.

<p>, migration, asylum and border control management, has registered <br /> the testing in real-world conditions in the secure non-public section of the EU database according to Article 49(4), point <br /> (d), with a Union-wide unique single identification number and with the information specified therein; the provider or <br /> prospective provider of high-risk AI systems referred to in point 2 of Annex III has registered the testing in real-world <br /> conditions in accordance with Article 49(5);<br /> EN<br /> OJ L, 12.7.2024<br /> 92/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>(d) the provider or prospective provider conducting the testing in real world conditions is established in the Union or has <br /> appointed a legal representative who is established in the Union;<br /> (e) data collected and processed for the purpose of the testing in real world conditions shall be transferred to third <br /> countries only provided that appropriate and applicable safeguards under Union law are implemented;<br /> (f) the testing in real world conditions does not last longer than necessary to achieve its objectives and in any case not <br /> longer than six months, which may be extended for an additional period of six months, subject to prior notification by <br /> the provider or prospective provider to the market surveillance authority, accompanied by an explanation of the need <br /> for such an extension;<br /> (g) the subjects of the testing in real world conditions who are persons belonging to vulnerable groups due to their age or <br /> disability, are appropriately protected;<br /> (h) where a provider or prospective provider organises the testing in real world conditions in cooperation with one or more <br /> deployers or prospective deployers, the latter have been informed of all aspects of the testing that are relevant to their <br /> decision to participate, and given the relevant instructions for use of the AI system referred to in Article 13; the <br /> provider or prospective provider and the deployer or prospective deployer shall conclude an agreement specifying their <br /> roles and responsibilities with</p>
Show original text

When testing AI systems in real-world conditions, the following rules must be followed:

Agreement and Consent: The AI provider and the organization using the AI must sign an agreement that clearly defines their roles and responsibilities. People being tested must give informed consent (permission) before participating, unless it's a law enforcement test. In law enforcement cases where asking for consent would interfere with testing, the test cannot harm the subjects, and their personal data must be deleted afterward.

Oversight and Control: A qualified person from both the provider and the deploying organization must actively supervise the testing. These supervisors must have proper training, authority, and expertise. The AI system's decisions must be able to be reversed or ignored if needed.

Right to Withdraw: Test subjects or their legal representatives can stop participating at any time without penalty or explanation. They can also request that their personal data be deleted immediately and permanently. Withdrawing consent does not undo work already completed.

Government Monitoring: Member States must give their market surveillance authorities the power to require providers to share information, conduct surprise inspections (in person or remotely), and check that real-world testing is being done correctly and that high-risk AI systems are being properly managed.

<p>to their <br /> decision to participate, and given the relevant instructions for use of the AI system referred to in Article 13; the <br /> provider or prospective provider and the deployer or prospective deployer shall conclude an agreement specifying their <br /> roles and responsibilities with a view to ensuring compliance with the provisions for testing in real world conditions <br /> under this Regulation and under other applicable Union and national law;<br /> (i) the subjects of the testing in real world conditions have given informed consent in accordance with Article 61, or in the <br /> case of law enforcement, where the seeking of informed consent would prevent the AI system from being tested, the <br /> testing itself and the outcome of the testing in the real world conditions shall not have any negative effect on the <br /> subjects, and their personal data shall be deleted after the test is performed;<br /> (j) the testing in real world conditions is effectively overseen by the provider or prospective provider, as well as by <br /> deployers or prospective deployers through persons who are suitably qualified in the relevant field and have the <br /> necessary capacity, training and authority to perform their tasks;<br /> (k) the predictions, recommendations or decisions of the AI system can be effectively reversed and disregarded.<br /> 5.<br /> Any subjects of the testing in real world conditions, or their legally designated representative, as appropriate, may, <br /> without any resulting detriment and without having to provide any justification, withdraw from the testing at any time by <br /> revoking their informed consent and may request the immediate and permanent deletion of their personal data. The <br /> withdrawal of the informed consent shall not affect the activities already carried out.<br /> 6.<br /> In accordance with Article 75, Member States shall confer on their market surveillance authorities the powers of <br /> requiring providers and prospective providers to provide information, of carrying out unannounced remote or on-site <br /> inspections, and of performing checks on the conduct of the testing in real world conditions and the related high-risk AI <br /> systems.</p>
Show original text

Market surveillance authorities have the power to require providers and prospective providers to submit information, conduct unannounced inspections (either remote or on-site), and verify that testing is conducted safely in real-world conditions with high-risk AI systems.

If a serious incident occurs during real-world testing, it must be reported to the national market surveillance authority as required by Article 73. The provider or prospective provider must immediately take steps to reduce the risk, or stop the testing until the risk is addressed, or end it completely. They must also create a plan to quickly remove the AI system from use if testing is stopped.

Providers or prospective providers must inform the national market surveillance authority in their Member State when they suspend, stop, or complete real-world testing, and must share the final results.

Providers or prospective providers are legally responsible for any harm caused during their real-world testing under EU and national laws.

Article 61 addresses informed consent for testing in real-world conditions outside AI regulatory sandboxes.

<p>powers of <br /> requiring providers and prospective providers to provide information, of carrying out unannounced remote or on-site <br /> inspections, and of performing checks on the conduct of the testing in real world conditions and the related high-risk AI <br /> systems. Market surveillance authorities shall use those powers to ensure the safe development of testing in real world <br /> conditions.<br /> 7.<br /> Any serious incident identified in the course of the testing in real world conditions shall be reported to the national <br /> market surveillance authority in accordance with Article 73. The provider or prospective provider shall adopt immediate <br /> mitigation measures or, failing that, shall suspend the testing in real world conditions until such mitigation takes place, or <br /> otherwise terminate it. The provider or prospective provider shall establish a procedure for the prompt recall of the AI <br /> system upon such termination of the testing in real world conditions.<br /> 8.<br /> Providers or prospective providers shall notify the national market surveillance authority in the Member State where <br /> the testing in real world conditions is to be conducted of the suspension or termination of the testing in real world <br /> conditions and of the final outcomes.<br /> 9.<br /> The provider or prospective provider shall be liable under applicable Union and national liability law for any damage <br /> caused in the course of their testing in real world conditions.<br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 93/144</p> <p>Article 61<br /> Informed consent to participate in testing in real world conditions outside AI regulatory sandboxes<br /> 1.</p>
Show original text

Article 61: Getting Permission from People to Test AI in Real Situations

  1. Before people participate in testing an AI system in real-world conditions (outside of controlled testing areas), they must give their permission. They must first receive clear, simple information about:

(a) What the test is trying to do and any problems it might cause them
(b) How the test will work and how long they will be involved
(c) Their rights, especially that they can refuse to participate or quit at any time without penalty or needing to explain why
(d) How they can ask the AI system to change or ignore its predictions, suggestions, or decisions
(e) The unique identification number for this test (from Article 60) and contact information for the AI provider or their representative if they need more details

  1. The permission must be dated, written down, and a copy given to the participants or their legal representative.

Article 62: Help for Companies Creating and Using AI, Including Small Businesses and Startups

  1. [Text continues...]
<p>EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 93/144</p> <p>Article 61<br /> Informed consent to participate in testing in real world conditions outside AI regulatory sandboxes<br /> 1.<br /> For the purpose of testing in real world conditions under Article 60, freely-given informed consent shall be obtained <br /> from the subjects of testing prior to their participation in such testing and after their having been duly informed with <br /> concise, clear, relevant, and understandable information regarding:<br /> (a) the nature and objectives of the testing in real world conditions and the possible inconvenience that may be linked to <br /> their participation;<br /> (b) the conditions under which the testing in real world conditions is to be conducted, including the expected duration of <br /> the subject or subjects’ participation;<br /> (c) their rights, and the guarantees regarding their participation, in particular their right to refuse to participate in, and the <br /> right to withdraw from, testing in real world conditions at any time without any resulting detriment and without having <br /> to provide any justification;<br /> (d) the arrangements for requesting the reversal or the disregarding of the predictions, recommendations or decisions of <br /> the AI system;<br /> (e) the Union-wide unique single identification number of the testing in real world conditions in accordance with Article <br /> 60(4) point (c), and the contact details of the provider or its legal representative from whom further information can be <br /> obtained.<br /> 2.<br /> The informed consent shall be dated and documented and a copy shall be given to the subjects of testing or their legal <br /> representative.<br /> Article 62<br /> Measures for providers and deployers, in particular SMEs, including start-ups<br /> 1.</p>
Show original text

Informed consent must be dated, documented, and a copy given to the test subjects or their legal representative.

Article 62: Support for Small Businesses and Startups

  1. Member States must:
    (a) Give priority access to AI regulatory sandboxes to small and medium-sized enterprises (SMEs) and startups with offices in the EU, as long as they meet the requirements. Other SMEs and startups can also access these sandboxes if they qualify.
    (b) Organize training and awareness programs about this Regulation designed specifically for SMEs, startups, deployers, and local authorities.
    (c) Create communication channels to help SMEs, startups, deployers, and local authorities understand and follow this Regulation, including information about AI regulatory sandboxes.
    (d) Help SMEs and other stakeholders participate in developing industry standards.

  2. When setting fees for compliance checks under Article 43, the costs should be reduced based on the size and market position of SMEs and startups.

  3. The AI Office must:
    (a) Provide standard templates for areas covered by this Regulation as requested by the Board.
    (b) Create and maintain a single, easy-to-use information platform about this Regulation for all businesses across the EU.

<p>.<br /> The informed consent shall be dated and documented and a copy shall be given to the subjects of testing or their legal <br /> representative.<br /> Article 62<br /> Measures for providers and deployers, in particular SMEs, including start-ups<br /> 1.<br /> Member States shall undertake the following actions:<br /> (a) provide SMEs, including start-ups, having a registered office or a branch in the Union, with priority access to the AI <br /> regulatory sandboxes, to the extent that they fulfil the eligibility conditions and selection criteria; the priority access <br /> shall not preclude other SMEs, including start-ups, other than those referred to in this paragraph from access to the AI <br /> regulatory sandbox, provided that they also fulfil the eligibility conditions and selection criteria;<br /> (b) organise specific awareness raising and training activities on the application of this Regulation tailored to the needs of <br /> SMEs including start-ups, deployers and, as appropriate, local public authorities;<br /> (c) utilise existing dedicated channels and where appropriate, establish new ones for communication with SMEs including <br /> start-ups, deployers, other innovators and, as appropriate, local public authorities to provide advice and respond to <br /> queries about the implementation of this Regulation, including as regards participation in AI regulatory sandboxes;<br /> (d) facilitate the participation of SMEs and other relevant stakeholders in the standardisation development process.<br /> 2.<br /> The specific interests and needs of the SME providers, including start-ups, shall be taken into account when setting the <br /> fees for conformity assessment under Article 43, reducing those fees proportionately to their size, market size and other <br /> relevant indicators.<br /> 3.<br /> The AI Office shall undertake the following actions:<br /> (a) provide standardised templates for areas covered by this Regulation, as specified by the Board in its request;<br /> (b) develop and maintain a single information platform providing easy to use information in relation to this Regulation for <br /> all operators across the Union;<br /> EN<br /> OJ L, 12.7.</p>
Show original text

The regulation requires several key actions: First, operators across the European Union must have access to a single, easy-to-use information platform about this regulation. Second, awareness campaigns must be organized to inform operators about their obligations under this regulation. Third, best practices in public procurement procedures for AI systems should be evaluated and promoted for consistency across the Union.

Small businesses (microenterprises) have some flexibility. According to Recommendation 2003/361/EC, microenterprises without partner or linked enterprises can follow simplified versions of the quality management system requirements described in Article 17. The Commission will create guidelines explaining which quality management elements can be simplified for small businesses, while still maintaining protection standards and compliance requirements for high-risk AI systems.

However, this simplified approach does not excuse microenterprises from meeting other requirements in this regulation, including those in Articles 9, 10, 11, 12, 13, 14, 15, 72, and 73.

For governance, the Commission will build expertise and capabilities in artificial intelligence through an AI Office. Member States must support the AI Office's work as outlined in this regulation. The regulation also establishes a European Artificial Intelligence Board.

<p>this Regulation, as specified by the Board in its request;<br /> (b) develop and maintain a single information platform providing easy to use information in relation to this Regulation for <br /> all operators across the Union;<br /> EN<br /> OJ L, 12.7.2024<br /> 94/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>(c) organise appropriate communication campaigns to raise awareness about the obligations arising from this Regulation;<br /> (d) evaluate and promote the convergence of best practices in public procurement procedures in relation to AI systems.<br /> Article 63<br /> Derogations for specific operators<br /> 1.<br /> Microenterprises within the meaning of Recommendation 2003/361/EC may comply with certain elements of the <br /> quality management system required by Article 17 of this Regulation in a simplified manner, provided that they do not <br /> have partner enterprises or linked enterprises within the meaning of that Recommendation. For that purpose, the <br /> Commission shall develop guidelines on the elements of the quality management system which may be complied with in <br /> a simplified manner considering the needs of microenterprises, without affecting the level of protection or the need for <br /> compliance with the requirements in respect of high-risk AI systems.<br /> 2.<br /> Paragraph 1 of this Article shall not be interpreted as exempting those operators from fulfilling any other <br /> requirements or obligations laid down in this Regulation, including those established in Articles 9, 10, 11, 12, 13, 14, 15, <br /> 72 and 73.<br /> CHAPTER VII<br /> GOVERNANCE<br /> SECTION 1<br /> Governance at Union level<br /> Article 64<br /> AI Office<br /> 1.<br /> The Commission shall develop Union expertise and capabilities in the field of AI through the AI Office.<br /> 2.<br /> Member States shall facilitate the tasks entrusted to the AI Office, as reflected in this Regulation.<br /> Article 65<br /> Establishment and structure of the European Artificial Intelligence Board<br /> 1.</p>
Show original text

The AI Office will share its artificial intelligence expertise and capabilities with Member States. Member States must support the work assigned to the AI Office under this regulation. A European Artificial Intelligence Board is being created to oversee AI matters. The Board will have one representative from each Member State, serving three-year terms that can be renewed once. The European Data Protection Supervisor will attend as an observer, and the AI Office will participate in meetings but cannot vote. Other national and EU authorities or experts may be invited to specific meetings if relevant. Each Member State's representative must have the necessary skills and authority to actively participate in the Board's work. They will serve as the main contact point for the Board and, if needed, for stakeholders. These representatives must also help coordinate between their country's different authorities responsible for implementing this regulation, including gathering necessary data and information. The Board's rules of procedure must be approved by a two-thirds majority vote of the Member State representatives.

<p>expertise and capabilities in the field of AI through the AI Office.<br /> 2.<br /> Member States shall facilitate the tasks entrusted to the AI Office, as reflected in this Regulation.<br /> Article 65<br /> Establishment and structure of the European Artificial Intelligence Board<br /> 1.<br /> A European Artificial Intelligence Board (the ‘Board’) is hereby established.<br /> 2.<br /> The Board shall be composed of one representative per Member State. The European Data Protection Supervisor shall <br /> participate as observer. The AI Office shall also attend the Board’s meetings, without taking part in the votes. Other national <br /> and Union authorities, bodies or experts may be invited to the meetings by the Board on a case by case basis, where the <br /> issues discussed are of relevance for them.<br /> 3.<br /> Each representative shall be designated by their Member State for a period of three years, renewable once.<br /> 4.<br /> Member States shall ensure that their representatives on the Board:<br /> (a) have the relevant competences and powers in their Member State so as to contribute actively to the achievement of the <br /> Board’s tasks referred to in Article 66;<br /> (b) are designated as a single contact point vis-à-vis the Board and, where appropriate, taking into account Member States’ <br /> needs, as a single contact point for stakeholders;<br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 95/144</p> <p>(c) are empowered to facilitate consistency and coordination between national competent authorities in their Member State <br /> as regards the implementation of this Regulation, including through the collection of relevant data and information for <br /> the purpose of fulfilling their tasks on the Board.<br /> 5.<br /> The designated representatives of the Member States shall adopt the Board’s rules of procedure by a two-thirds <br /> majority.</p>
Show original text

The Board's representatives from Member States must create rules of procedure using a two-thirds majority vote. These rules should cover how to select the Chair, how long the Chair serves, voting procedures, and how the Board and its sub-groups operate.

The Board will have two permanent sub-groups: one for market surveillance authorities and one for notified bodies. These sub-groups allow different authorities to share information and cooperate. The market surveillance sub-group will also serve as the administrative cooperation group under EU Regulation 2019/1020. The Board can create additional temporary sub-groups to address specific issues. Advisory forum representatives may attend these sub-groups as observers when appropriate.

The Board must operate fairly and impartially. A Member State representative will chair the Board. The AI Office will handle administrative support, schedule meetings when requested by the Chair, and prepare meeting agendas based on the Board's responsibilities.

The Board's main role is to advise and help the Commission and Member States ensure this Regulation is applied consistently and effectively.

<p>of this Regulation, including through the collection of relevant data and information for <br /> the purpose of fulfilling their tasks on the Board.<br /> 5.<br /> The designated representatives of the Member States shall adopt the Board’s rules of procedure by a two-thirds <br /> majority. The rules of procedure shall, in particular, lay down procedures for the selection process, the duration of the <br /> mandate of, and specifications of the tasks of, the Chair, detailed arrangements for voting, and the organisation of the <br /> Board’s activities and those of its sub-groups.<br /> 6.<br /> The Board shall establish two standing sub-groups to provide a platform for cooperation and exchange among market <br /> surveillance authorities and notifying authorities about issues related to market surveillance and notified bodies respectively.<br /> The standing sub-group for market surveillance should act as the administrative cooperation group (ADCO) for this <br /> Regulation within the meaning of Article 30 of Regulation (EU) 2019/1020.<br /> The Board may establish other standing or temporary sub-groups as appropriate for the purpose of examining specific <br /> issues. Where appropriate, representatives of the advisory forum referred to in Article 67 may be invited to such sub-groups <br /> or to specific meetings of those subgroups as observers.<br /> 7.<br /> The Board shall be organised and operated so as to safeguard the objectivity and impartiality of its activities.<br /> 8.<br /> The Board shall be chaired by one of the representatives of the Member States. The AI Office shall provide the <br /> secretariat for the Board, convene the meetings upon request of the Chair, and prepare the agenda in accordance with the <br /> tasks of the Board pursuant to this Regulation and its rules of procedure.<br /> Article 66<br /> Tasks of the Board<br /> The Board shall advise and assist the Commission and the Member States in order to facilitate the consistent and effective <br /> application of this Regulation.</p>
Show original text

Article 66 describes the tasks of the Board. The Board helps the Commission and Member States apply this Regulation consistently and effectively. Specifically, the Board can: (a) coordinate work among national authorities responsible for enforcing this Regulation and support joint market surveillance activities; (b) collect and share technical knowledge and best practices between Member States; (c) advise on how to implement this Regulation, especially regarding rules for general-purpose AI models; (d) help standardize administrative practices across Member States, including procedures for conformity assessment, AI regulatory sandboxes, and real-world testing; (e) provide recommendations and written opinions on matters related to implementing this Regulation, including: (i) development of codes of conduct and practice, and Commission guidelines; (ii) evaluation and review of this Regulation, serious incident reports, the EU database, preparation of legal acts, and alignment with EU harmonisation laws; (iii) technical specifications and standards for the requirements in Chapter III, Section 2.

<p>tasks of the Board pursuant to this Regulation and its rules of procedure.<br /> Article 66<br /> Tasks of the Board<br /> The Board shall advise and assist the Commission and the Member States in order to facilitate the consistent and effective <br /> application of this Regulation. To that end, the Board may in particular:<br /> (a)<br /> contribute to the coordination among national competent authorities responsible for the application of this Regulation <br /> and, in cooperation with and subject to the agreement of the market surveillance authorities concerned, support joint <br /> activities of market surveillance authorities referred to in Article 74(11);<br /> (b) collect and share technical and regulatory expertise and best practices among Member States;<br /> (c)<br /> provide advice on the implementation of this Regulation, in particular as regards the enforcement of rules on <br /> general-purpose AI models;<br /> (d) contribute to the harmonisation of administrative practices in the Member States, including in relation to the <br /> derogation from the conformity assessment procedures referred to in Article 46, the functioning of AI regulatory <br /> sandboxes, and testing in real world conditions referred to in Articles 57, 59 and 60;<br /> (e)<br /> at the request of the Commission or on its own initiative, issue recommendations and written opinions on any relevant <br /> matters related to the implementation of this Regulation and to its consistent and effective application, including:<br /> (i) on the development and application of codes of conduct and codes of practice pursuant to this Regulation, as well <br /> as of the Commission’s guidelines;<br /> (ii) the evaluation and review of this Regulation pursuant to Article 112, including as regards the serious incident <br /> reports referred to in Article 73, and the functioning of the EU database referred to in Article 71, the preparation <br /> of the delegated or implementing acts, and as regards possible alignments of this Regulation with the Union <br /> harmonisation legislation listed in Annex I;<br /> (iii) on technical specifications or existing standards regarding the requirements set out in Chapter III, Section 2;<br /> EN<br /> OJ L, 12.7.</p>
Show original text

The Board should: (iii) provide advice on technical specifications and existing standards related to Chapter III, Section 2 requirements; (iv) advise on the use of harmonised standards or common specifications mentioned in Articles 40 and 41; (v) monitor trends such as AI competitiveness in Europe, AI adoption across the EU, and digital skills development; (vi) track changes in how AI value chains are structured and their impact on accountability; (vii) assess whether Annex III needs updating under Article 7 and whether Article 5 should be revised under Article 112, based on current evidence and technological advances; (f) help the Commission increase public understanding of AI, including its benefits, risks, safeguards, and users' rights and responsibilities; (g) help businesses and authorities develop shared standards and common understanding of key concepts in this Regulation, including creating benchmarks; (h) work with other EU institutions, agencies, expert groups, and networks in areas like product safety, cybersecurity, competition, digital services, financial services, consumer protection, data protection, and fundamental rights; (i) support cooperation with competent authorities and international organisations in other countries; (j) help national authorities and the Commission build the organisational and technical skills needed to apply this Regulation, including contributing to assessments.

<p>alignments of this Regulation with the Union <br /> harmonisation legislation listed in Annex I;<br /> (iii) on technical specifications or existing standards regarding the requirements set out in Chapter III, Section 2;<br /> EN<br /> OJ L, 12.7.2024<br /> 96/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>(iv) on the use of harmonised standards or common specifications referred to in Articles 40 and 41;<br /> (v) trends, such as European global competitiveness in AI, the uptake of AI in the Union, and the development of <br /> digital skills;<br /> (vi) trends on the evolving typology of AI value chains, in particular on the resulting implications in terms of <br /> accountability;<br /> (vii) on the potential need for amendment to Annex III in accordance with Article 7, and on the potential need for <br /> possible revision of Article 5 pursuant to Article 112, taking into account relevant available evidence and the <br /> latest developments in technology;<br /> (f)<br /> support the Commission in promoting AI literacy, public awareness and understanding of the benefits, risks, <br /> safeguards and rights and obligations in relation to the use of AI systems;<br /> (g) facilitate the development of common criteria and a shared understanding among market operators and competent <br /> authorities of the relevant concepts provided for in this Regulation, including by contributing to the development of <br /> benchmarks;<br /> (h) cooperate, as appropriate, with other Union institutions, bodies, offices and agencies, as well as relevant Union expert <br /> groups and networks, in particular in the fields of product safety, cybersecurity, competition, digital and media services, <br /> financial services, consumer protection, data and fundamental rights protection;<br /> (i)<br /> contribute to effective cooperation with the competent authorities of third countries and with international <br /> organisations;<br /> (j)<br /> assist national competent authorities and the Commission in developing the organisational and technical expertise <br /> required for the implementation of this Regulation, including by contributing to the assessment of</p>
Show original text

The AI Office will work with authorities in other countries and international organizations. It will help national authorities and the Commission develop the skills and systems needed to enforce this regulation, including assessing training needs for staff. The AI Office will support the establishment of AI regulatory sandboxes and help them share information. It will provide advice on guidance documents and international AI matters. It will give opinions to the Commission about alerts regarding general-purpose AI models and receive feedback from Member States about their experiences monitoring and enforcing AI systems.

Article 67 establishes an advisory forum to provide technical advice to the Board and Commission on implementing this regulation. The forum's members must represent a balanced mix of stakeholders, including businesses, startups, small companies, civil society organizations, and universities. The membership must balance commercial and non-commercial interests, and among commercial members, must balance small companies with larger businesses. The Commission will appoint forum members from stakeholders with recognized AI expertise, following these criteria.

<p>to effective cooperation with the competent authorities of third countries and with international <br /> organisations;<br /> (j)<br /> assist national competent authorities and the Commission in developing the organisational and technical expertise <br /> required for the implementation of this Regulation, including by contributing to the assessment of training needs for <br /> staff of Member States involved in implementing this Regulation;<br /> (k) assist the AI Office in supporting national competent authorities in the establishment and development of AI <br /> regulatory sandboxes, and facilitate cooperation and information-sharing among AI regulatory sandboxes;<br /> (l)<br /> contribute to, and provide relevant advice on, the development of guidance documents;<br /> (m) advise the Commission in relation to international matters on AI;<br /> (n) provide opinions to the Commission on the qualified alerts regarding general-purpose AI models;<br /> (o) receive opinions by the Member States on qualified alerts regarding general-purpose AI models, and on national <br /> experiences and practices on the monitoring and enforcement of AI systems, in particular systems integrating the <br /> general-purpose AI models.<br /> Article 67<br /> Advisory forum<br /> 1.<br /> An advisory forum shall be established to provide technical expertise and advise the Board and the Commission, and <br /> to contribute to their tasks under this Regulation.<br /> 2.<br /> The membership of the advisory forum shall represent a balanced selection of stakeholders, including industry, <br /> start-ups, SMEs, civil society and academia. The membership of the advisory forum shall be balanced with regard to <br /> commercial and non-commercial interests and, within the category of commercial interests, with regard to SMEs and other <br /> undertakings.<br /> 3.<br /> The Commission shall appoint the members of the advisory forum, in accordance with the criteria set out in <br /> paragraph 2, from amongst stakeholders with recognised expertise in the field of AI.<br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 97/144</p> <p>4.</p>
Show original text

An advisory forum has been established with the following structure and responsibilities:

Membership and Leadership:
- Members serve two-year terms, which can be extended for up to four additional years
- Five organizations are permanent members: the Fundamental Rights Agency, ENISA, the European Committee for Standardization (CEN), the European Committee for Electrotechnical Standardization (CENELEC), and the European Telecommunications Standards Institute (ETSI)
- The forum elects two co-chairs from its members based on specific criteria. Co-chairs serve two-year terms and can be reelected once

Operations:
- The forum meets at least twice yearly
- It can invite experts and other stakeholders to attend meetings
- It creates its own rules of procedure

Responsibilities:
- The forum can prepare opinions, recommendations, and written contributions when requested by the Board or the Commission
- It can create temporary or permanent sub-groups to examine specific issues related to the regulation
- It must prepare and publicly release an annual activity report

Scientific Panel:
The Commission will establish a scientific panel of independent experts through an implementing act to support enforcement of this regulation. This act will follow the examination procedure outlined in Article 98(2).

<p>with recognised expertise in the field of AI.<br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 97/144</p> <p>4.<br /> The term of office of the members of the advisory forum shall be two years, which may be extended by up to no more <br /> than four years.<br /> 5.<br /> The Fundamental Rights Agency, ENISA, the European Committee for Standardization (CEN), the European <br /> Committee for Electrotechnical Standardization (CENELEC), and the European Telecommunications Standards Institute <br /> (ETSI) shall be permanent members of the advisory forum.<br /> 6.<br /> The advisory forum shall draw up its rules of procedure. It shall elect two co-chairs from among its members, in <br /> accordance with criteria set out in paragraph 2. The term of office of the co-chairs shall be two years, renewable once.<br /> 7.<br /> The advisory forum shall hold meetings at least twice a year. The advisory forum may invite experts and other <br /> stakeholders to its meetings.<br /> 8.<br /> The advisory forum may prepare opinions, recommendations and written contributions at the request of the Board or <br /> the Commission.<br /> 9.<br /> The advisory forum may establish standing or temporary sub-groups as appropriate for the purpose of examining <br /> specific questions related to the objectives of this Regulation.<br /> 10.<br /> The advisory forum shall prepare an annual report on its activities. That report shall be made publicly available.<br /> Article 68<br /> Scientific panel of independent experts<br /> 1.<br /> The Commission shall, by means of an implementing act, make provisions on the establishment of a scientific panel <br /> of independent experts (the ‘scientific panel’) intended to support the enforcement activities under this Regulation. That <br /> implementing act shall be adopted in accordance with the examination procedure referred to in Article 98(2).<br /> 2.</p>
Show original text

A scientific panel of independent experts will be created to help enforce this AI regulation. The Commission will establish this panel through an implementing act using the examination procedure outlined in Article 98(2). The panel will consist of experts chosen by the Commission based on their current scientific or technical knowledge of AI. All panel members must: (a) have specialized expertise and competence in AI, (b) be independent from AI system or model providers, and (c) be able to work diligently, accurately, and objectively. The Commission, working with the Board, will decide how many experts are needed and will aim for fair representation by gender and geography. The scientific panel will advise and support the AI Office, particularly by: (a) helping implement and enforce this regulation for general-purpose AI models and systems, including by alerting the AI Office to potential systemic risks of general-purpose AI models at the EU level, helping develop evaluation tools and methods for AI capabilities through benchmarks, advising on classifying general-purpose AI models as systemic risks, and advising on classifying various general-purpose AI models and systems.

<p>establishment of a scientific panel <br /> of independent experts (the ‘scientific panel’) intended to support the enforcement activities under this Regulation. That <br /> implementing act shall be adopted in accordance with the examination procedure referred to in Article 98(2).<br /> 2.<br /> The scientific panel shall consist of experts selected by the Commission on the basis of up-to-date scientific or <br /> technical expertise in the field of AI necessary for the tasks set out in paragraph 3, and shall be able to demonstrate meeting <br /> all of the following conditions:<br /> (a) having particular expertise and competence and scientific or technical expertise in the field of AI;<br /> (b) independence from any provider of AI systems or general-purpose AI models;<br /> (c) an ability to carry out activities diligently, accurately and objectively.<br /> The Commission, in consultation with the Board, shall determine the number of experts on the panel in accordance with <br /> the required needs and shall ensure fair gender and geographical representation.<br /> 3.<br /> The scientific panel shall advise and support the AI Office, in particular with regard to the following tasks:<br /> (a) supporting the implementation and enforcement of this Regulation as regards general-purpose AI models and systems, <br /> in particular by:<br /> (i) alerting the AI Office of possible systemic risks at Union level of general-purpose AI models, in accordance with <br /> Article 90;<br /> (ii) contributing to the development of tools and methodologies for evaluating capabilities of general-purpose AI <br /> models and systems, including through benchmarks;<br /> (iii) providing advice on the classification of general-purpose AI models with systemic risk;<br /> (iv) providing advice on the classification of various general-purpose AI models and systems;<br /> EN<br /> OJ L, 12.7.</p>
Show original text

The scientific panel supports AI regulation through several key activities: evaluating AI systems using benchmarks, advising on which general-purpose AI models pose systemic risks, classifying different AI models and systems, and developing tools and templates. The panel also helps market surveillance authorities monitor AI products across borders and assists the AI Office during safety procedures. Panel experts must work impartially and objectively, keep information confidential, and avoid conflicts of interest. Each expert must publicly declare their interests, and the AI Office will establish systems to prevent conflicts. The AI Office will create detailed rules about how the panel issues alerts and requests assistance. Member States can ask panel experts to help enforce AI regulations, though they may need to pay fees for this support.

<p>systems, including through benchmarks;<br /> (iii) providing advice on the classification of general-purpose AI models with systemic risk;<br /> (iv) providing advice on the classification of various general-purpose AI models and systems;<br /> EN<br /> OJ L, 12.7.2024<br /> 98/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>(v) contributing to the development of tools and templates;<br /> (b) supporting the work of market surveillance authorities, at their request;<br /> (c) supporting cross-border market surveillance activities as referred to in Article 74(11), without prejudice to the powers <br /> of market surveillance authorities;<br /> (d) supporting the AI Office in carrying out its duties in the context of the Union safeguard procedure pursuant to <br /> Article 81.<br /> 4.<br /> The experts on the scientific panel shall perform their tasks with impartiality and objectivity, and shall ensure the <br /> confidentiality of information and data obtained in carrying out their tasks and activities. They shall neither seek nor take <br /> instructions from anyone when exercising their tasks under paragraph 3. Each expert shall draw up a declaration of <br /> interests, which shall be made publicly available. The AI Office shall establish systems and procedures to actively manage <br /> and prevent potential conflicts of interest.<br /> 5.<br /> The implementing act referred to in paragraph 1 shall include provisions on the conditions, procedures and detailed <br /> arrangements for the scientific panel and its members to issue alerts, and to request the assistance of the AI Office for the <br /> performance of the tasks of the scientific panel.<br /> Article 69<br /> Access to the pool of experts by the Member States<br /> 1.<br /> Member States may call upon experts of the scientific panel to support their enforcement activities under this <br /> Regulation.<br /> 2.<br /> The Member States may be required to pay fees for the advice and support provided by the experts.</p>
Show original text

EXPERTS FROM MEMBER STATES

  1. Member States can ask scientific panel experts to help them enforce this Regulation.

  2. Member States may need to pay fees for expert advice and support. An implementing act (referenced in Article 68(1)) will set the fee structure, cost levels, and what costs can be recovered. This will consider: proper implementation of the Regulation, cost-effectiveness, and ensuring all Member States can access experts.

  3. The Commission will help Member States access experts quickly when needed. It will also coordinate support from the Union AI testing service (Article 84) and experts under this Article to work efficiently and provide maximum value.

NATIONAL COMPETENT AUTHORITIES

Article 70: Appointing National Competent Authorities and Contact Points

  1. Each Member State must establish or designate at least two national competent authorities: a notifying authority and a market surveillance authority. These authorities must work independently, fairly, and without bias to ensure objectivity and proper implementation of this Regulation. Their members must avoid actions that conflict with their duties. One or more authorities can perform these tasks, depending on each Member State's organizational structure.

  2. Member States must tell the Commission which authorities are the notifying and market surveillance authorities, what they do, and any changes to this information. By August 2, 2025, Member States must publicly provide contact information for competent authorities and contact points through electronic means.

<p>experts by the Member States<br /> 1.<br /> Member States may call upon experts of the scientific panel to support their enforcement activities under this <br /> Regulation.<br /> 2.<br /> The Member States may be required to pay fees for the advice and support provided by the experts. The structure and <br /> the level of fees as well as the scale and structure of recoverable costs shall be set out in the implementing act referred to in <br /> Article 68(1), taking into account the objectives of the adequate implementation of this Regulation, cost-effectiveness and <br /> the necessity of ensuring effective access to experts for all Member States.<br /> 3.<br /> The Commission shall facilitate timely access to the experts by the Member States, as needed, and ensure that the <br /> combination of support activities carried out by Union AI testing support pursuant to Article 84 and experts pursuant to <br /> this Article is efficiently organised and provides the best possible added value.<br /> SECTION 2<br /> National competent authorities<br /> Article 70<br /> Designation of national competent authorities and single points of contact<br /> 1.<br /> Each Member State shall establish or designate as national competent authorities at least one notifying authority and <br /> at least one market surveillance authority for the purposes of this Regulation. Those national competent authorities shall <br /> exercise their powers independently, impartially and without bias so as to safeguard the objectivity of their activities and <br /> tasks, and to ensure the application and implementation of this Regulation. The members of those authorities shall refrain <br /> from any action incompatible with their duties. Provided that those principles are observed, such activities and tasks may be <br /> performed by one or more designated authorities, in accordance with the organisational needs of the Member State.<br /> 2.<br /> Member States shall communicate to the Commission the identity of the notifying authorities and the market <br /> surveillance authorities and the tasks of those authorities, as well as any subsequent changes thereto. Member States shall <br /> make publicly available information on how competent authorities and single points of contact can be contacted, through <br /> electronic communication means by 2 August 2025.</p>
Show original text

Member States must make contact information for their competent authorities and single points of contact publicly available online by August 2, 2025. Each Member State must designate one market surveillance authority as its single point of contact for this Regulation and inform the Commission of this choice. The Commission will publish a list of all single points of contact.

Member States must ensure their competent authorities have sufficient funding, staff, and infrastructure to carry out their duties effectively. Staff must have expertise in AI technologies, data management, data protection, cybersecurity, fundamental rights, health and safety risks, and relevant standards and laws. Member States must review and update these resource and competence requirements every year.

National competent authorities must implement appropriate cybersecurity measures. When performing their duties, they must follow confidentiality rules outlined in Article 78.

By August 2, 2025, and every two years after that, Member States must report to the Commission about the financial and human resources available to their competent authorities and whether these resources are adequate. The Commission will share this information with the Board for discussion and recommendations.

The Commission will help national competent authorities share knowledge and experience with each other.

<p>the tasks of those authorities, as well as any subsequent changes thereto. Member States shall <br /> make publicly available information on how competent authorities and single points of contact can be contacted, through <br /> electronic communication means by 2 August 2025. Member States shall designate a market surveillance authority to act as <br /> the single point of contact for this Regulation, and shall notify the Commission of the identity of the single point of contact. <br /> The Commission shall make a list of the single points of contact publicly available.<br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 99/144</p> <p>3.<br /> Member States shall ensure that their national competent authorities are provided with adequate technical, financial <br /> and human resources, and with infrastructure to fulfil their tasks effectively under this Regulation. In particular, the national <br /> competent authorities shall have a sufficient number of personnel permanently available whose competences and expertise <br /> shall include an in-depth understanding of AI technologies, data and data computing, personal data protection, <br /> cybersecurity, fundamental rights, health and safety risks and knowledge of existing standards and legal requirements. <br /> Member States shall assess and, if necessary, update competence and resource requirements referred to in this paragraph on <br /> an annual basis.<br /> 4.<br /> National competent authorities shall take appropriate measures to ensure an adequate level of cybersecurity.<br /> 5.<br /> When performing their tasks, the national competent authorities shall act in accordance with the confidentiality <br /> obligations set out in Article 78.<br /> 6.<br /> By 2 August 2025, and once every two years thereafter, Member States shall report to the Commission on the status <br /> of the financial and human resources of the national competent authorities, with an assessment of their adequacy. The <br /> Commission shall transmit that information to the Board for discussion and possible recommendations.<br /> 7.<br /> The Commission shall facilitate the exchange of experience between national competent authorities.<br /> 8.</p>
Show original text

The Commission will share information about national authorities' resources and staffing with the Board for review and recommendations. The Commission will help national authorities share experiences with each other. National authorities can provide guidance to businesses, especially small companies and startups, on following this regulation. When guidance involves other EU laws, relevant authorities must be consulted. For EU institutions and agencies covered by this regulation, the European Data Protection Supervisor will oversee compliance. The Commission will create and maintain an EU database with Member States containing information about high-risk AI systems listed in Annex III that are registered according to Articles 49 and 60, as well as non-high-risk AI systems registered under Article 6(4) and Article 49. When setting up the database, the Commission will consult experts, and when updating it, will consult the Board. Providers or their authorized representatives must enter the data from Sections A and B of Annex VIII into the EU database.

<p>and human resources of the national competent authorities, with an assessment of their adequacy. The <br /> Commission shall transmit that information to the Board for discussion and possible recommendations.<br /> 7.<br /> The Commission shall facilitate the exchange of experience between national competent authorities.<br /> 8.<br /> National competent authorities may provide guidance and advice on the implementation of this Regulation, in <br /> particular to SMEs including start-ups, taking into account the guidance and advice of the Board and the Commission, as <br /> appropriate. Whenever national competent authorities intend to provide guidance and advice with regard to an AI system <br /> in areas covered by other Union law, the national competent authorities under that Union law shall be consulted, as <br /> appropriate.<br /> 9.<br /> Where Union institutions, bodies, offices or agencies fall within the scope of this Regulation, the European Data <br /> Protection Supervisor shall act as the competent authority for their supervision.<br /> CHAPTER VIII<br /> EU DATABASE FOR HIGH-RISK AI SYSTEMS<br /> Article 71<br /> EU database for high-risk AI systems listed in Annex III<br /> 1.<br /> The Commission shall, in collaboration with the Member States, set up and maintain an EU database containing <br /> information referred to in paragraphs 2 and 3 of this Article concerning high-risk AI systems referred to in Article 6(2) <br /> which are registered in accordance with Articles 49 and 60 and AI systems that are not considered as high-risk pursuant to <br /> Article 6(3) and which are registered in accordance with Article 6(4) and Article 49. When setting the functional <br /> specifications of such database, the Commission shall consult the relevant experts, and when updating the functional <br /> specifications of such database, the Commission shall consult the Board.<br /> 2.<br /> The data listed in Sections A and B of Annex VIII shall be entered into the EU database by the provider or, where <br /> applicable, by the authorised representative.<br /> 3.</p>
Show original text

The Commission must consult the Board when setting up the database. Providers or their authorized representatives enter data from Sections A and B of Annex VIII into the EU database. Public authorities and their representatives enter data from Section C of Annex VIII, following Articles 49(3) and (4). Most information in the EU database is public and easy to access, with user-friendly navigation and machine-readable formats. However, information registered under Article 60 is only available to market surveillance authorities and the Commission, unless the provider agrees to make it public. The database only contains personal data when necessary, including names and contact details of authorized representatives from providers or deployers. The Commission manages the database and provides technical and administrative support to providers and deployers. The database must meet all accessibility requirements.

<p>specifications of such database, the Commission shall consult the Board.<br /> 2.<br /> The data listed in Sections A and B of Annex VIII shall be entered into the EU database by the provider or, where <br /> applicable, by the authorised representative.<br /> 3.<br /> The data listed in Section C of Annex VIII shall be entered into the EU database by the deployer who is, or who acts on <br /> behalf of, a public authority, agency or body, in accordance with Article 49(3) and (4).<br /> 4.<br /> With the exception of the section referred to in Article 49(4) and Article 60(4), point (c), the information contained in <br /> the EU database registered in accordance with Article 49 shall be accessible and publicly available in a user-friendly manner. <br /> The information should be easily navigable and machine-readable. The information registered in accordance with Article 60 <br /> shall be accessible only to market surveillance authorities and the Commission, unless the prospective provider or provider <br /> has given consent for also making the information accessible the public.<br /> 5.<br /> The EU database shall contain personal data only in so far as necessary for collecting and processing information in <br /> accordance with this Regulation. That information shall include the names and contact details of natural persons who are <br /> responsible for registering the system and have the legal authority to represent the provider or the deployer, as applicable.<br /> EN<br /> OJ L, 12.7.2024<br /> 100/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>6.<br /> The Commission shall be the controller of the EU database. It shall make available to providers, prospective providers <br /> and deployers adequate technical and administrative support. The EU database shall comply with the applicable accessibility <br /> requirements.</p>
Show original text

The Commission will manage the EU database and provide technical and administrative help to providers, potential providers, and users. The database must meet accessibility standards.

CHAPTER IX: POST-MARKET MONITORING, INFORMATION SHARING AND MARKET SURVEILLANCE

SECTION 1: Post-Market Monitoring

Article 72: How Providers Must Monitor High-Risk AI Systems After Release

  1. Providers must create and document a system to monitor their high-risk AI systems after they are released to the market. The monitoring system should match the type of AI technology and the level of risk involved.

  2. The monitoring system must actively collect, document, and analyze data about how high-risk AI systems perform throughout their lifetime. This data can come from users or other sources. The goal is to ensure the AI systems continue to meet the requirements in Chapter III, Section 2. When relevant, providers should also analyze how their AI systems interact with other AI systems. However, providers do not need to monitor sensitive operational data from law-enforcement users.

  3. The monitoring system must be based on a written post-market monitoring plan. This plan is part of the technical documentation in Annex IV. By February 2, 2026, the Commission will create a template and list of required elements for this plan. The Commission will follow the examination procedure described in Article 98(2).

  4. [Text continues]

<p>/1689/oj</p> <p>6.<br /> The Commission shall be the controller of the EU database. It shall make available to providers, prospective providers <br /> and deployers adequate technical and administrative support. The EU database shall comply with the applicable accessibility <br /> requirements.<br /> CHAPTER IX<br /> POST-MARKET MONITORING, INFORMATION SHARING AND MARKET SURVEILLANCE<br /> SECTION 1<br /> Post-market monitoring<br /> Article 72<br /> Post-market monitoring by providers and post-market monitoring plan for high-risk AI systems<br /> 1.<br /> Providers shall establish and document a post-market monitoring system in a manner that is proportionate to the <br /> nature of the AI technologies and the risks of the high-risk AI system.<br /> 2.<br /> The post-market monitoring system shall actively and systematically collect, document and analyse relevant data <br /> which may be provided by deployers or which may be collected through other sources on the performance of high-risk AI <br /> systems throughout their lifetime, and which allow the provider to evaluate the continuous compliance of AI systems with <br /> the requirements set out in Chapter III, Section 2. Where relevant, post-market monitoring shall include an analysis of the <br /> interaction with other AI systems. This obligation shall not cover sensitive operational data of deployers which are <br /> law-enforcement authorities.<br /> 3.<br /> The post-market monitoring system shall be based on a post-market monitoring plan. The post-market monitoring <br /> plan shall be part of the technical documentation referred to in Annex IV. The Commission shall adopt an implementing act <br /> laying down detailed provisions establishing a template for the post-market monitoring plan and the list of elements to be <br /> included in the plan by 2 February 2026. That implementing act shall be adopted in accordance with the examination <br /> procedure referred to in Article 98(2).<br /> 4.</p>
Show original text

High-risk AI system providers must create a post-market monitoring plan by February 2, 2026, following the examination procedure in Article 98(2). For high-risk AI systems already covered by existing EU laws with established monitoring systems, providers can choose to integrate the required monitoring elements into their existing plans instead of creating new ones, as long as the protection level remains equivalent. This also applies to high-risk AI systems used by financial institutions that already follow EU financial services regulations. Providers of high-risk AI systems sold in the EU must immediately report any serious incidents to the market surveillance authorities in the Member State where the incident occurred. The report must be submitted as soon as the provider confirms a causal link between the AI system and the incident, or within 15 days of becoming aware of the incident, whichever comes first. The reporting timeline should reflect how severe the incident is.

<p>for the post-market monitoring plan and the list of elements to be <br /> included in the plan by 2 February 2026. That implementing act shall be adopted in accordance with the examination <br /> procedure referred to in Article 98(2).<br /> 4.<br /> For high-risk AI systems covered by the Union harmonisation legislation listed in Section A of Annex I, where <br /> a post-market monitoring system and plan are already established under that legislation, in order to ensure consistency, <br /> avoid duplications and minimise additional burdens, providers shall have a choice of integrating, as appropriate, the <br /> necessary elements described in paragraphs 1, 2 and 3 using the template referred in paragraph 3 into systems and plans <br /> already existing under that legislation, provided that it achieves an equivalent level of protection.<br /> The first subparagraph of this paragraph shall also apply to high-risk AI systems referred to in point 5 of Annex III placed <br /> on the market or put into service by financial institutions that are subject to requirements under Union financial services <br /> law regarding their internal governance, arrangements or processes.<br /> SECTION 2<br /> Sharing of information on serious incidents<br /> Article 73<br /> Reporting of serious incidents<br /> 1.<br /> Providers of high-risk AI systems placed on the Union market shall report any serious incident to the market <br /> surveillance authorities of the Member States where that incident occurred.<br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 101/144</p> <p>2.<br /> The report referred to in paragraph 1 shall be made immediately after the provider has established a causal link <br /> between the AI system and the serious incident or the reasonable likelihood of such a link, and, in any event, not later than <br /> 15 days after the provider or, where applicable, the deployer, becomes aware of the serious incident.<br /> The period for the reporting referred to in the first subparagraph shall take account of the severity of the serious incident.<br /> 3.</p>
Show original text

Providers or deployers must report serious incidents to authorities within 15 days of becoming aware of them. The reporting timeline should consider how severe the incident is. However, for widespread violations or certain serious incidents, the report must be submitted immediately or within 2 days of discovery. If someone dies and the provider or deployer believes the AI system caused it, they must report immediately but no later than 10 days after learning about the incident. If needed to meet deadlines, providers can submit an incomplete initial report followed by a complete one later. After reporting, the provider must promptly investigate the incident and the AI system involved, including assessing risks and taking corrective action. The provider must work with authorities and relevant inspection bodies during investigations. Before making any changes to the AI system that could affect the investigation, the provider must first inform the authorities.

<p>, not later than <br /> 15 days after the provider or, where applicable, the deployer, becomes aware of the serious incident.<br /> The period for the reporting referred to in the first subparagraph shall take account of the severity of the serious incident.<br /> 3.<br /> Notwithstanding paragraph 2 of this Article, in the event of a widespread infringement or a serious incident as <br /> defined in Article 3, point (49)(b), the report referred to in paragraph 1 of this Article shall be provided immediately, and <br /> not later than two days after the provider or, where applicable, the deployer becomes aware of that incident.<br /> 4.<br /> Notwithstanding paragraph 2, in the event of the death of a person, the report shall be provided immediately after the <br /> provider or the deployer has established, or as soon as it suspects, a causal relationship between the high-risk AI system and <br /> the serious incident, but not later than 10 days after the date on which the provider or, where applicable, the deployer <br /> becomes aware of the serious incident.<br /> 5.<br /> Where necessary to ensure timely reporting, the provider or, where applicable, the deployer, may submit an initial <br /> report that is incomplete, followed by a complete report.<br /> 6.<br /> Following the reporting of a serious incident pursuant to paragraph 1, the provider shall, without delay, perform the <br /> necessary investigations in relation to the serious incident and the AI system concerned. This shall include a risk assessment <br /> of the incident, and corrective action.<br /> The provider shall cooperate with the competent authorities, and where relevant with the notified body concerned, during <br /> the investigations referred to in the first subparagraph, and shall not perform any investigation which involves altering the <br /> AI system concerned in a way which may affect any subsequent evaluation of the causes of the incident, prior to informing <br /> the competent authorities of such action.<br /> 7.</p>
Show original text

Investigators must not make changes to the AI system that could affect how the incident is evaluated, unless they first tell the authorities about these changes.

When a serious incident is reported, the market surveillance authority must inform the national public authorities listed in Article 77(1). The Commission will create guidance to help organizations follow these rules by August 2, 2025, and will review it regularly.

The market surveillance authority must take action within seven days of receiving the incident report, following the procedures in Regulation (EU) 2019/1020.

For high-risk AI systems listed in Annex III that are sold or used by providers already required to report similar incidents under other EU laws, they only need to report the most serious incidents.

For high-risk AI systems that are safety parts of medical devices or are themselves medical devices covered by EU Regulations 2017/745 and 2017/746, they only need to report the most serious incidents. These reports must go to the national authority chosen by the Member State where the incident occurred.

<p>referred to in the first subparagraph, and shall not perform any investigation which involves altering the <br /> AI system concerned in a way which may affect any subsequent evaluation of the causes of the incident, prior to informing <br /> the competent authorities of such action.<br /> 7.<br /> Upon receiving a notification related to a serious incident referred to in Article 3, point (49)(c), the relevant market <br /> surveillance authority shall inform the national public authorities or bodies referred to in Article 77(1). The Commission <br /> shall develop dedicated guidance to facilitate compliance with the obligations set out in paragraph 1 of this Article. That <br /> guidance shall be issued by 2 August 2025, and shall be assessed regularly.<br /> 8.<br /> The market surveillance authority shall take appropriate measures, as provided for in Article 19 of Regulation (EU) <br /> 2019/1020, within seven days from the date it received the notification referred to in paragraph 1 of this Article, and shall <br /> follow the notification procedures as provided in that Regulation.<br /> 9.<br /> For high-risk AI systems referred to in Annex III that are placed on the market or put into service by providers that are <br /> subject to Union legislative instruments laying down reporting obligations equivalent to those set out in this Regulation, the <br /> notification of serious incidents shall be limited to those referred to in Article 3, point (49)(c).<br /> 10.<br /> For high-risk AI systems which are safety components of devices, or are themselves devices, covered by Regulations <br /> (EU) 2017/745 and (EU) 2017/746, the notification of serious incidents shall be limited to those referred to in Article 3, <br /> point (49)(c) of this Regulation, and shall be made to the national competent authority chosen for that purpose by the <br /> Member States where the incident occurred.<br /> 11.</p>
Show original text

Serious incidents must be reported to the national authority designated by each Member State where the incident took place. These incidents are defined in Article 3, point (49)(c) of this Regulation. National authorities must immediately inform the Commission about any serious incident, regardless of whether they have taken action, following the procedures in Article 20 of Regulation (EU) 2019/1020. AI systems in the EU market are subject to Regulation (EU) 2019/1020 for enforcement purposes. All operators and AI systems covered by this Regulation are included in the enforcement framework. Market surveillance authorities must submit annual reports to the Commission and national competition authorities about any information discovered during surveillance that could be relevant to EU competition law. They must also report annually on prohibited practices found during the year and the actions taken in response.

<p>serious incidents shall be limited to those referred to in Article 3, <br /> point (49)(c) of this Regulation, and shall be made to the national competent authority chosen for that purpose by the <br /> Member States where the incident occurred.<br /> 11.<br /> National competent authorities shall immediately notify the Commission of any serious incident, whether or not <br /> they have taken action on it, in accordance with Article 20 of Regulation (EU) 2019/1020.<br /> SECTION 3<br /> Enforcement<br /> Article 74<br /> Market surveillance and control of AI systems in the Union market<br /> 1.<br /> Regulation (EU) 2019/1020 shall apply to AI systems covered by this Regulation. For the purposes of the effective <br /> enforcement of this Regulation:<br /> EN<br /> OJ L, 12.7.2024<br /> 102/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>(a) any reference to an economic operator under Regulation (EU) 2019/1020 shall be understood as including all operators <br /> identified in Article 2(1) of this Regulation;<br /> (b) any reference to a product under Regulation (EU) 2019/1020 shall be understood as including all AI systems falling <br /> within the scope of this Regulation.<br /> 2.<br /> As part of their reporting obligations under Article 34(4) of Regulation (EU) 2019/1020, the market surveillance <br /> authorities shall report annually to the Commission and relevant national competition authorities any information <br /> identified in the course of market surveillance activities that may be of potential interest for the application of Union law on <br /> competition rules. They shall also annually report to the Commission about the use of prohibited practices that occurred <br /> during that year and about the measures taken.<br /> 3.</p>
Show original text

Market surveillance authorities must report annually to the Commission about prohibited practices used during the year and the actions taken in response. For high-risk AI systems related to products covered by EU harmonisation laws listed in Annex I Section A, the responsible market surveillance authority is the one designated under those laws. Member States may designate a different authority in appropriate cases, but must ensure coordination with the relevant sectoral authorities. The procedures in Articles 79-83 do not apply to AI systems related to products covered by EU harmonisation laws in Annex I Section A if those laws already have equivalent procedures with the same objectives—in these cases, the sectoral procedures apply instead. Market surveillance authorities may use remote powers under Article 14(4) points (d) and (j) of Regulation (EU) 2019/1020 to effectively enforce this Regulation, as appropriate.

<p>course of market surveillance activities that may be of potential interest for the application of Union law on <br /> competition rules. They shall also annually report to the Commission about the use of prohibited practices that occurred <br /> during that year and about the measures taken.<br /> 3.<br /> For high-risk AI systems related to products covered by the Union harmonisation legislation listed in Section A of <br /> Annex I, the market surveillance authority for the purposes of this Regulation shall be the authority responsible for market <br /> surveillance activities designated under those legal acts.<br /> By derogation from the first subparagraph, and in appropriate circumstances, Member States may designate another <br /> relevant authority to act as a market surveillance authority, provided they ensure coordination with the relevant sectoral <br /> market surveillance authorities responsible for the enforcement of the Union harmonisation legislation listed in Annex I.<br /> 4.<br /> The procedures referred to in Articles 79 to 83 of this Regulation shall not apply to AI systems related to products <br /> covered by the Union harmonisation legislation listed in section A of Annex I, where such legal acts already provide for <br /> procedures ensuring an equivalent level of protection and having the same objective. In such cases, the relevant sectoral <br /> procedures shall apply instead.<br /> 5.<br /> Without prejudice to the powers of market surveillance authorities under Article 14 of Regulation (EU) 2019/1020, <br /> for the purpose of ensuring the effective enforcement of this Regulation, market surveillance authorities may exercise the <br /> powers referred to in Article 14(4), points (d) and (j), of that Regulation remotely, as appropriate.<br /> 6.</p>
Show original text

Market surveillance authorities can use remote enforcement powers to effectively enforce this Regulation. For high-risk AI systems used by financial institutions regulated under EU financial services law, the market surveillance authority is the national financial supervisor responsible for those institutions, as long as the AI system is directly connected to providing financial services. In some cases, Member States may designate a different authority as the market surveillance authority if coordination is ensured. National authorities supervising credit institutions under Directive 2013/36/EU that participate in the Single Supervisory Mechanism must report any relevant information found during market surveillance activities to the European Central Bank without delay. For high-risk AI systems used for law enforcement, border management, justice, and democracy (listed in Annex III), Member States must designate either the data protection supervisory authorities under EU Regulation 2016/679 or Directive 2016/680, or another designated authority as the market surveillance authority.

<p>0, <br /> for the purpose of ensuring the effective enforcement of this Regulation, market surveillance authorities may exercise the <br /> powers referred to in Article 14(4), points (d) and (j), of that Regulation remotely, as appropriate.<br /> 6.<br /> For high-risk AI systems placed on the market, put into service, or used by financial institutions regulated by Union <br /> financial services law, the market surveillance authority for the purposes of this Regulation shall be the relevant national <br /> authority responsible for the financial supervision of those institutions under that legislation in so far as the placing on the <br /> market, putting into service, or the use of the AI system is in direct connection with the provision of those financial <br /> services.<br /> 7.<br /> By way of derogation from paragraph 6, in appropriate circumstances, and provided that coordination is ensured, <br /> another relevant authority may be identified by the Member State as market surveillance authority for the purposes of this <br /> Regulation.<br /> National market surveillance authorities supervising regulated credit institutions regulated under Directive 2013/36/EU, <br /> which are participating in the Single Supervisory Mechanism established by Regulation (EU) No 1024/2013, should report, <br /> without delay, to the European Central Bank any information identified in the course of their market surveillance activities <br /> that may be of potential interest for the prudential supervisory tasks of the European Central Bank specified in that <br /> Regulation.<br /> 8.<br /> For high-risk AI systems listed in point 1 of Annex III to this Regulation, in so far as the systems are used for law <br /> enforcement purposes, border management and justice and democracy, and for high-risk AI systems listed in points 6, 7 <br /> and 8 of Annex III to this Regulation, Member States shall designate as market surveillance authorities for the purposes of <br /> this Regulation either the competent data protection supervisory authorities under Regulation (EU) 2016/679 or Directive <br /> (EU) 2016/680, or any other authority designated pursuant to</p>
Show original text

Market surveillance authorities will be designated to oversee this Regulation. These authorities can be either data protection supervisory authorities under EU Regulation 2016/679 or Directive 2016/680, or other authorities designated under the same conditions in Articles 41-44 of Directive 2016/680. Market surveillance activities must not interfere with the independence or judicial functions of courts.

For EU institutions, bodies, offices, and agencies covered by this Regulation, the European Data Protection Supervisor serves as the market surveillance authority, except for the Court of Justice of the European Union when acting in its judicial capacity.

Member States must help coordinate between market surveillance authorities designated under this Regulation and other national authorities or bodies that oversee EU harmonisation legislation listed in Annex I or other relevant EU laws. This coordination is particularly important for high-risk AI systems listed in Annex III.

Market surveillance authorities and the Commission can propose joint activities, including joint investigations, to promote compliance, identify violations, raise awareness, or provide guidance on this Regulation. These activities focus on specific categories of high-risk AI systems that pose serious risks across two or more Member States, in accordance with Article 9 of Regulation 2019/1020. The AI Office will provide coordination support for joint investigations.

<p>shall designate as market surveillance authorities for the purposes of <br /> this Regulation either the competent data protection supervisory authorities under Regulation (EU) 2016/679 or Directive <br /> (EU) 2016/680, or any other authority designated pursuant to the same conditions laid down in Articles 41 to 44 of <br /> Directive (EU) 2016/680. Market surveillance activities shall in no way affect the independence of judicial authorities, or <br /> otherwise interfere with their activities when acting in their judicial capacity.<br /> 9.<br /> Where Union institutions, bodies, offices or agencies fall within the scope of this Regulation, the European Data <br /> Protection Supervisor shall act as their market surveillance authority, except in relation to the Court of Justice of the <br /> European Union acting in its judicial capacity.<br /> 10.<br /> Member States shall facilitate coordination between market surveillance authorities designated under this Regulation <br /> and other relevant national authorities or bodies which supervise the application of Union harmonisation legislation listed <br /> in Annex I, or in other Union law, that might be relevant for the high-risk AI systems referred to in Annex III.<br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 103/144</p> <p>11.<br /> Market surveillance authorities and the Commission shall be able to propose joint activities, including joint <br /> investigations, to be conducted by either market surveillance authorities or market surveillance authorities jointly with the <br /> Commission, that have the aim of promoting compliance, identifying non-compliance, raising awareness or providing <br /> guidance in relation to this Regulation with respect to specific categories of high-risk AI systems that are found to present <br /> a serious risk across two or more Member States in accordance with Article 9 of Regulation (EU) 2019/1020. The AI Office <br /> shall provide coordination support for joint investigations.<br /> 12.</p>
Show original text

High-risk AI systems that pose serious risks across two or more Member States will be investigated jointly, with the AI Office providing coordination support. Market surveillance authorities must be given full access to all documentation, training data, validation data, and testing data used to develop high-risk AI systems. This access can be provided through remote means like APIs or other technical tools if needed. Authorities can request access to the source code of high-risk AI systems, but only when: (1) the source code is necessary to check if the system meets the required standards in Chapter III, Section 2, and (2) other testing, auditing, and verification methods using the provider's data and documentation have been tried but were insufficient. All information obtained by market surveillance authorities must be kept confidential according to Article 78. Regarding general-purpose AI systems: when an AI system is based on a general-purpose AI model and both are developed by the same provider, the AI Office has the power to monitor and ensure the system follows this Regulation's requirements. The AI Office will have all the same authority as a market surveillance authority under this Section and Regulation (EU) 2019/1020.

<p>-risk AI systems that are found to present <br /> a serious risk across two or more Member States in accordance with Article 9 of Regulation (EU) 2019/1020. The AI Office <br /> shall provide coordination support for joint investigations.<br /> 12.<br /> Without prejudice to the powers provided for under Regulation (EU) 2019/1020, and where relevant and limited to <br /> what is necessary to fulfil their tasks, the market surveillance authorities shall be granted full access by providers to the <br /> documentation as well as the training, validation and testing data sets used for the development of high-risk AI systems, <br /> including, where appropriate and subject to security safeguards, through application programming interfaces (API) or other <br /> relevant technical means and tools enabling remote access.<br /> 13.<br /> Market surveillance authorities shall be granted access to the source code of the high-risk AI system upon a reasoned <br /> request and only when both of the following conditions are fulfilled:<br /> (a) access to source code is necessary to assess the conformity of a high-risk AI system with the requirements set out in <br /> Chapter III, Section 2; and<br /> (b) testing or auditing procedures and verifications based on the data and documentation provided by the provider have <br /> been exhausted or proved insufficient.<br /> 14.<br /> Any information or documentation obtained by market surveillance authorities shall be treated in accordance with <br /> the confidentiality obligations set out in Article 78.<br /> Article 75<br /> Mutual assistance, market surveillance and control of general-purpose AI systems<br /> 1.<br /> Where an AI system is based on a general-purpose AI model, and the model and the system are developed by the <br /> same provider, the AI Office shall have powers to monitor and supervise compliance of that AI system with obligations <br /> under this Regulation. To carry out its monitoring and supervision tasks, the AI Office shall have all the powers of a market <br /> surveillance authority provided for in this Section and Regulation (EU) 2019/1020.<br /> 2.</p>
Show original text

The AI Office has all the powers of a market surveillance authority to monitor and supervise compliance with this Regulation, as outlined in Regulation (EU) 2019/1020.

When market surveillance authorities have good reason to believe that general-purpose AI systems used by deployers for high-risk purposes do not meet the requirements of this Regulation, they must work with the AI Office to evaluate compliance. They must also inform the Board and other market surveillance authorities of their findings.

If a market surveillance authority cannot complete its investigation of a high-risk AI system because it cannot access necessary information about the general-purpose AI model, even after making reasonable efforts, it can submit a formal request to the AI Office. The AI Office must then provide any relevant information within 30 days to help determine if the high-risk AI system fails to comply. Market surveillance authorities must keep this information confidential according to Article 78 of this Regulation. The process outlined in Chapter VI of Regulation (EU) 2019/1020 applies in the same way.

Market surveillance authorities have the responsibility and authority to ensure that real-world testing of AI systems follows this Regulation.

<p>with obligations <br /> under this Regulation. To carry out its monitoring and supervision tasks, the AI Office shall have all the powers of a market <br /> surveillance authority provided for in this Section and Regulation (EU) 2019/1020.<br /> 2.<br /> Where the relevant market surveillance authorities have sufficient reason to consider general-purpose AI systems that <br /> can be used directly by deployers for at least one purpose that is classified as high-risk pursuant to this Regulation to be <br /> non-compliant with the requirements laid down in this Regulation, they shall cooperate with the AI Office to carry out <br /> compliance evaluations, and shall inform the Board and other market surveillance authorities accordingly.<br /> 3.<br /> Where a market surveillance authority is unable to conclude its investigation of the high-risk AI system because of its <br /> inability to access certain information related to the general-purpose AI model despite having made all appropriate efforts <br /> to obtain that information, it may submit a reasoned request to the AI Office, by which access to that information shall be <br /> enforced. In that case, the AI Office shall supply to the applicant authority without delay, and in any event within 30 days, <br /> any information that the AI Office considers to be relevant in order to establish whether a high-risk AI system is <br /> non-compliant. Market surveillance authorities shall safeguard the confidentiality of the information that they obtain in <br /> accordance with Article 78 of this Regulation. The procedure provided for in Chapter VI of Regulation (EU) 2019/1020 <br /> shall apply mutatis mutandis.<br /> Article 76<br /> Supervision of testing in real world conditions by market surveillance authorities<br /> 1.<br /> Market surveillance authorities shall have competences and powers to ensure that testing in real world conditions is in <br /> accordance with this Regulation.<br /> EN<br /> OJ L, 12.7.2024<br /> 104/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>2.</p>
Show original text

Market surveillance authorities oversee AI systems being tested in regulatory sandboxes. They check that testing follows the required rules. In some cases, they can allow companies to conduct real-world testing with fewer restrictions than normally required.

If authorities learn about serious problems during testing or have concerns that testing rules are not being followed, they can take action. They can stop the testing, or they can require companies to change how they are conducting the test.

When authorities make these decisions, they must explain their reasons and tell the companies how they can challenge the decision.

If an authority stops or changes a test, it should inform other countries' authorities if the same AI system was being tested there as well.

<p>in <br /> accordance with this Regulation.<br /> EN<br /> OJ L, 12.7.2024<br /> 104/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>2.<br /> Where testing in real world conditions is conducted for AI systems that are supervised within an AI regulatory <br /> sandbox under Article 58, the market surveillance authorities shall verify the compliance with Article 60 as part of their <br /> supervisory role for the AI regulatory sandbox. Those authorities may, as appropriate, allow the testing in real world <br /> conditions to be conducted by the provider or prospective provider, in derogation from the conditions set out in Article <br /> 60(4), points (f) and (g).<br /> 3.<br /> Where a market surveillance authority has been informed by the prospective provider, the provider or any third party <br /> of a serious incident or has other grounds for considering that the conditions set out in Articles 60 and 61 are not met, it <br /> may take either of the following decisions on its territory, as appropriate:<br /> (a) to suspend or terminate the testing in real world conditions;<br /> (b) to require the provider or prospective provider and the deployer or prospective deployer to modify any aspect of the <br /> testing in real world conditions.<br /> 4.<br /> Where a market surveillance authority has taken a decision referred to in paragraph 3 of this Article, or has issued an <br /> objection within the meaning of Article 60(4), point (b), the decision or the objection shall indicate the grounds therefor <br /> and how the provider or prospective provider can challenge the decision or objection.<br /> 5.<br /> Where applicable, where a market surveillance authority has taken a decision referred to in paragraph 3, it shall <br /> communicate the grounds therefor to the market surveillance authorities of other Member States in which the AI system <br /> has been tested in accordance with the testing plan.<br /> Article 77<br /> Powers of authorities protecting fundamental rights<br /> 1.</p>
Show original text

When an AI system is tested, the testing authority must inform market surveillance authorities in other Member States where the system was also tested.

National authorities responsible for protecting fundamental rights and preventing discrimination have the power to request and access all documentation about high-risk AI systems. They can access this information in a format they can understand when needed to perform their duties. These authorities must inform their Member State's market surveillance authority about any such requests.

By November 2, 2024, each Member State must identify which public authorities have these powers and publish a list. Member States must share this list with the European Commission and other Member States, and keep it updated.

If the available documentation is not enough to determine whether a high-risk AI system violates fundamental rights laws, the relevant authority can ask the market surveillance authority to test the system. The market surveillance authority must organize this testing with the requesting authority's involvement within a reasonable timeframe.

All information and documentation obtained by these national authorities must be kept confidential according to confidentiality rules.

<p>3, it shall <br /> communicate the grounds therefor to the market surveillance authorities of other Member States in which the AI system <br /> has been tested in accordance with the testing plan.<br /> Article 77<br /> Powers of authorities protecting fundamental rights<br /> 1.<br /> National public authorities or bodies which supervise or enforce the respect of obligations under Union law <br /> protecting fundamental rights, including the right to non-discrimination, in relation to the use of high-risk AI systems <br /> referred to in Annex III shall have the power to request and access any documentation created or maintained under this <br /> Regulation in accessible language and format when access to that documentation is necessary for effectively fulfilling their <br /> mandates within the limits of their jurisdiction. The relevant public authority or body shall inform the market surveillance <br /> authority of the Member State concerned of any such request.<br /> 2.<br /> By 2 November 2024, each Member State shall identify the public authorities or bodies referred to in paragraph 1 and <br /> make a list of them publicly available. Member States shall notify the list to the Commission and to the other Member <br /> States, and shall keep the list up to date.<br /> 3.<br /> Where the documentation referred to in paragraph 1 is insufficient to ascertain whether an infringement of <br /> obligations under Union law protecting fundamental rights has occurred, the public authority or body referred to in <br /> paragraph 1 may make a reasoned request to the market surveillance authority, to organise testing of the high-risk AI <br /> system through technical means. The market surveillance authority shall organise the testing with the close involvement of <br /> the requesting public authority or body within a reasonable time following the request.<br /> 4.<br /> Any information or documentation obtained by the national public authorities or bodies referred to in paragraph 1 of <br /> this Article pursuant to this Article shall be treated in accordance with the confidentiality obligations set out in Article 78.<br /> Article 78<br /> Confidentiality<br /> 1.</p>
Show original text

Information and documents obtained by national authorities under this Article must follow the confidentiality rules in Article 78.

Article 78 - Confidentiality

  1. The Commission, market surveillance authorities, notified bodies, and anyone else involved in applying this Regulation must keep information and data confidential according to EU or national law. They must specifically protect:

(a) Intellectual property rights, confidential business information, and trade secrets (including source code), except in cases covered by Article 5 of Directive (EU) 2016/943;

(b) The proper enforcement of this Regulation, especially during inspections, investigations, or audits;

(c) Public and national security;

(d) Criminal or administrative investigations;

(e) Information classified under EU or national law.

  1. Authorities applying this Regulation may only request data that is absolutely necessary to assess AI system risks and perform their duties under this Regulation and Regulation (EU) 2019/1020. They must use strong cybersecurity measures to protect information security and confidentiality, and must delete collected data as soon as it is no longer needed, following applicable EU or national law.

  2. [Text continues]

<p>or documentation obtained by the national public authorities or bodies referred to in paragraph 1 of <br /> this Article pursuant to this Article shall be treated in accordance with the confidentiality obligations set out in Article 78.<br /> Article 78<br /> Confidentiality<br /> 1.<br /> The Commission, market surveillance authorities and notified bodies and any other natural or legal person involved <br /> in the application of this Regulation shall, in accordance with Union or national law, respect the confidentiality of <br /> information and data obtained in carrying out their tasks and activities in such a manner as to protect, in particular:<br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 105/144</p> <p>(a) the intellectual property rights and confidential business information or trade secrets of a natural or legal person, <br /> including source code, except in the cases referred to in Article 5 of Directive (EU) 2016/943 of the European <br /> Parliament and of the Council (57);<br /> (b) the effective implementation of this Regulation, in particular for the purposes of inspections, investigations or audits;<br /> (c) public and national security interests;<br /> (d) the conduct of criminal or administrative proceedings;<br /> (e) information classified pursuant to Union or national law.<br /> 2.<br /> The authorities involved in the application of this Regulation pursuant to paragraph 1 shall request only data that is <br /> strictly necessary for the assessment of the risk posed by AI systems and for the exercise of their powers in accordance with <br /> this Regulation and with Regulation (EU) 2019/1020. They shall put in place adequate and effective cybersecurity measures <br /> to protect the security and confidentiality of the information and data obtained, and shall delete the data collected as soon <br /> as it is no longer needed for the purpose for which it was obtained, in accordance with applicable Union or national law.<br /> 3.</p>
Show original text

Organizations must protect the security and confidentiality of collected information and data. They must delete data as soon as it is no longer needed, following EU or national laws.

Confidential information shared between national authorities or between national authorities and the Commission cannot be shared publicly without first asking the original authority and the AI system operator. This applies when high-risk AI systems (listed in Annex III, points 1, 6, or 7) are used by law enforcement, border control, immigration, or asylum authorities, and when sharing would harm public or national security. However, sensitive operational details about these authorities' activities should not be shared.

When law enforcement, immigration, or asylum authorities create high-risk AI systems, their technical documentation (Annex IV) must stay within their offices. These authorities must allow market surveillance authorities to access or copy the documentation upon request. Only staff with proper security clearance can view this documentation.

These rules do not prevent the Commission, Member States, and their authorities from sharing information or issuing warnings, including across borders. They also do not override obligations to provide information under Member States' criminal laws.

<p>measures <br /> to protect the security and confidentiality of the information and data obtained, and shall delete the data collected as soon <br /> as it is no longer needed for the purpose for which it was obtained, in accordance with applicable Union or national law.<br /> 3.<br /> Without prejudice to paragraphs 1 and 2, information exchanged on a confidential basis between the national <br /> competent authorities or between national competent authorities and the Commission shall not be disclosed without prior <br /> consultation of the originating national competent authority and the deployer when high-risk AI systems referred to in <br /> point 1, 6 or 7 of Annex III are used by law enforcement, border control, immigration or asylum authorities and when such <br /> disclosure would jeopardise public and national security interests. This exchange of information shall not cover sensitive <br /> operational data in relation to the activities of law enforcement, border control, immigration or asylum authorities.<br /> When the law enforcement, immigration or asylum authorities are providers of high-risk AI systems referred to in point 1, <br /> 6 or 7 of Annex III, the technical documentation referred to in Annex IV shall remain within the premises of those <br /> authorities. Those authorities shall ensure that the market surveillance authorities referred to in Article 74(8) and (9), as <br /> applicable, can, upon request, immediately access the documentation or obtain a copy thereof. Only staff of the market <br /> surveillance authority holding the appropriate level of security clearance shall be allowed to access that documentation or <br /> any copy thereof.<br /> 4.<br /> Paragraphs 1, 2 and 3 shall not affect the rights or obligations of the Commission, Member States and their relevant <br /> authorities, as well as those of notified bodies, with regard to the exchange of information and the dissemination of <br /> warnings, including in the context of cross-border cooperation, nor shall they affect the obligations of the parties concerned <br /> to provide information under criminal law of the Member States.<br /> 5.</p>
Show original text

The Commission and Member States can share confidential information with regulatory authorities in other countries if they have signed agreements that ensure adequate confidentiality protection, following international and trade agreement rules.

When a Member State's market surveillance authority suspects that an AI system poses a risk to people's health, safety, or fundamental rights, it must evaluate whether the AI system complies with all requirements in this Regulation. Special attention must be paid to AI systems that could harm vulnerable groups. If risks to fundamental rights are found, the market surveillance authority must inform and work closely with relevant national authorities. Operators must cooperate fully with the market surveillance authority and other national authorities involved in this process.

<p>notified bodies, with regard to the exchange of information and the dissemination of <br /> warnings, including in the context of cross-border cooperation, nor shall they affect the obligations of the parties concerned <br /> to provide information under criminal law of the Member States.<br /> 5.<br /> The Commission and Member States may exchange, where necessary and in accordance with relevant provisions of <br /> international and trade agreements, confidential information with regulatory authorities of third countries with which they <br /> have concluded bilateral or multilateral confidentiality arrangements guaranteeing an adequate level of confidentiality.<br /> Article 79<br /> Procedure at national level for dealing with AI systems presenting a risk<br /> 1.<br /> AI systems presenting a risk shall be understood as a ‘product presenting a risk’ as defined in Article 3, point 19 of <br /> Regulation (EU) 2019/1020, in so far as they present risks to the health or safety, or to fundamental rights, of persons.<br /> 2.<br /> Where the market surveillance authority of a Member State has sufficient reason to consider an AI system to present <br /> a risk as referred to in paragraph 1 of this Article, it shall carry out an evaluation of the AI system concerned in respect of <br /> its compliance with all the requirements and obligations laid down in this Regulation. Particular attention shall be given to <br /> AI systems presenting a risk to vulnerable groups. Where risks to fundamental rights are identified, the market surveillance <br /> authority shall also inform and fully cooperate with the relevant national public authorities or bodies referred to in Article <br /> 77(1). The relevant operators shall cooperate as necessary with the market surveillance authority and with the other <br /> national public authorities or bodies referred to in Article 77(1).<br /> EN<br /> OJ L, 12.7.</p>
Show original text

Operators must work with market surveillance authorities and other national public authorities as needed under Article 77(1). When a market surveillance authority (possibly working with a national public authority) evaluates an AI system and finds it does not meet the requirements of this Regulation, it must immediately order the operator to fix the problems, remove the AI system from the market, or recall it. The operator has a maximum of 15 working days to comply, or whatever shorter timeframe applies under relevant EU laws. The market surveillance authority must notify the relevant notified body about these actions. The rules in Article 18 of Regulation (EU) 2019/1020 apply to these enforcement measures. If the market surveillance authority believes the non-compliance affects multiple countries, it must promptly inform the European Commission and other Member States about its findings and the actions it required the operator to take.

<p>to in Article <br /> 77(1). The relevant operators shall cooperate as necessary with the market surveillance authority and with the other <br /> national public authorities or bodies referred to in Article 77(1).<br /> EN<br /> OJ L, 12.7.2024<br /> 106/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> (57)<br /> Directive (EU) 2016/943 of the European Parliament and of the Council of 8 June 2016 on the protection of undisclosed know-how <br /> and business information (trade secrets) against their unlawful acquisition, use and disclosure (OJ L 157, 15.6.2016, p. 1).</p> <p>Where, in the course of that evaluation, the market surveillance authority or, where applicable the market surveillance <br /> authority in cooperation with the national public authority referred to in Article 77(1), finds that the AI system does not <br /> comply with the requirements and obligations laid down in this Regulation, it shall without undue delay require the relevant <br /> operator to take all appropriate corrective actions to bring the AI system into compliance, to withdraw the AI system from <br /> the market, or to recall it within a period the market surveillance authority may prescribe, and in any event within the <br /> shorter of 15 working days, or as provided for in the relevant Union harmonisation legislation.<br /> The market surveillance authority shall inform the relevant notified body accordingly. Article 18 of Regulation (EU) <br /> 2019/1020 shall apply to the measures referred to in the second subparagraph of this paragraph.<br /> 3.<br /> Where the market surveillance authority considers that the non-compliance is not restricted to its national territory, it <br /> shall inform the Commission and the other Member States without undue delay of the results of the evaluation and of the <br /> actions which it has required the operator to take.<br /> 4.</p>
Show original text

If a problem with an AI system is found beyond one country's borders, that country must quickly tell the European Commission and other Member States about the evaluation results and what actions it required the operator to take. The operator must fix all AI systems it has sold in the EU market. If the operator does not make adequate corrections within the required timeframe, the market surveillance authority can take steps to stop selling the AI system, remove it from the market, or recall it. The authority must immediately notify the Commission and other Member States of these actions. This notification must include all relevant details: how to identify the problematic AI system, where it came from, the supply chain involved, what the problem is, what risk it poses, what actions were taken and for how long, and the operator's response. The authorities must also specify whether the problem is due to: (a) breaking the rules on prohibited AI practices in Article 5; (b) a high-risk AI system failing to meet requirements in Chapter III, Section 2; (c) problems with the approved standards or specifications in Articles 40 and 41 that confirm compliance; or (d) breaking Article 50.

<p>considers that the non-compliance is not restricted to its national territory, it <br /> shall inform the Commission and the other Member States without undue delay of the results of the evaluation and of the <br /> actions which it has required the operator to take.<br /> 4.<br /> The operator shall ensure that all appropriate corrective action is taken in respect of all the AI systems concerned that <br /> it has made available on the Union market.<br /> 5.<br /> Where the operator of an AI system does not take adequate corrective action within the period referred to in <br /> paragraph 2, the market surveillance authority shall take all appropriate provisional measures to prohibit or restrict the AI <br /> system’s being made available on its national market or put into service, to withdraw the product or the standalone AI <br /> system from that market or to recall it. That authority shall without undue delay notify the Commission and the other <br /> Member States of those measures.<br /> 6.<br /> The notification referred to in paragraph 5 shall include all available details, in particular the information necessary <br /> for the identification of the non-compliant AI system, the origin of the AI system and the supply chain, the nature of the <br /> non-compliance alleged and the risk involved, the nature and duration of the national measures taken and the arguments <br /> put forward by the relevant operator. In particular, the market surveillance authorities shall indicate whether the <br /> non-compliance is due to one or more of the following:<br /> (a) non-compliance with the prohibition of the AI practices referred to in Article 5;<br /> (b) a failure of a high-risk AI system to meet requirements set out in Chapter III, Section 2;<br /> (c) shortcomings in the harmonised standards or common specifications referred to in Articles 40 and 41 conferring <br /> a presumption of conformity;<br /> (d) non-compliance with Article 50.<br /> 7.</p>
Show original text

Market surveillance authorities must quickly inform the Commission and other Member States about any measures they take and additional information they find regarding non-compliant AI systems. If they disagree with a reported national measure, they must state their objections.

If no objections are raised within three months of notification, the temporary measure is considered justified. However, this does not affect the operator's legal rights under EU Regulation 2019/1020. This three-month period is shortened to 30 days if the AI system violates the prohibited practices listed in Article 5.

Market surveillance authorities must promptly take appropriate restrictive actions on the problematic product or AI system, such as removing it from the market.

The next section (Article 80) addresses the procedure for AI systems that providers classify as non-high-risk according to Annex III.

<p>set out in Chapter III, Section 2;<br /> (c) shortcomings in the harmonised standards or common specifications referred to in Articles 40 and 41 conferring <br /> a presumption of conformity;<br /> (d) non-compliance with Article 50.<br /> 7.<br /> The market surveillance authorities other than the market surveillance authority of the Member State initiating the <br /> procedure shall, without undue delay, inform the Commission and the other Member States of any measures adopted and of <br /> any additional information at their disposal relating to the non-compliance of the AI system concerned, and, in the event of <br /> disagreement with the notified national measure, of their objections.<br /> 8.<br /> Where, within three months of receipt of the notification referred to in paragraph 5 of this Article, no objection has <br /> been raised by either a market surveillance authority of a Member State or by the Commission in respect of a provisional <br /> measure taken by a market surveillance authority of another Member State, that measure shall be deemed justified. This <br /> shall be without prejudice to the procedural rights of the concerned operator in accordance with Article 18 of Regulation <br /> (EU) 2019/1020. The three-month period referred to in this paragraph shall be reduced to 30 days in the event of <br /> non-compliance with the prohibition of the AI practices referred to in Article 5 of this Regulation.<br /> 9.<br /> The market surveillance authorities shall ensure that appropriate restrictive measures are taken in respect of the <br /> product or the AI system concerned, such as withdrawal of the product or the AI system from their market, without undue <br /> delay.<br /> Article 80<br /> Procedure for dealing with AI systems classified by the provider as non-high-risk in application of Annex III<br /> 1.</p>
Show original text

Article 80 describes how market surveillance authorities handle AI systems that providers claim are low-risk but may actually be high-risk.

  1. If a market surveillance authority believes an AI system labeled as non-high-risk is actually high-risk, they must evaluate it using the criteria in Article 6(3) and Commission guidelines.

  2. If the evaluation confirms the AI system is high-risk, the authority must immediately require the provider to make it compliant with all regulations and take corrective action within a set timeframe.

  3. If the AI system is used across multiple countries, the authority must inform the Commission and other Member States about the evaluation results and required actions without delay.

  4. The provider must take all necessary steps to make the AI system compliant. If the provider fails to do so within the required timeframe, they will face fines according to Article 99.

<p>the AI system concerned, such as withdrawal of the product or the AI system from their market, without undue <br /> delay.<br /> Article 80<br /> Procedure for dealing with AI systems classified by the provider as non-high-risk in application of Annex III<br /> 1.<br /> Where a market surveillance authority has sufficient reason to consider that an AI system classified by the provider as <br /> non-high-risk pursuant to Article 6(3) is indeed high-risk, the market surveillance authority shall carry out an evaluation of <br /> the AI system concerned in respect of its classification as a high-risk AI system based on the conditions set out in Article <br /> 6(3) and the Commission guidelines.<br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 107/144</p> <p>2.<br /> Where, in the course of that evaluation, the market surveillance authority finds that the AI system concerned is <br /> high-risk, it shall without undue delay require the relevant provider to take all necessary actions to bring the AI system into <br /> compliance with the requirements and obligations laid down in this Regulation, as well as take appropriate corrective action <br /> within a period the market surveillance authority may prescribe.<br /> 3.<br /> Where the market surveillance authority considers that the use of the AI system concerned is not restricted to its <br /> national territory, it shall inform the Commission and the other Member States without undue delay of the results of the <br /> evaluation and of the actions which it has required the provider to take.<br /> 4.<br /> The provider shall ensure that all necessary action is taken to bring the AI system into compliance with the <br /> requirements and obligations laid down in this Regulation. Where the provider of an AI system concerned does not bring <br /> the AI system into compliance with those requirements and obligations within the period referred to in paragraph 2 of this <br /> Article, the provider shall be subject to fines in accordance with Article 99.<br /> 5.</p>
Show original text

If an AI system provider fails to fix compliance issues within the required timeframe, they will face fines under Article 99. The provider must take corrective action for all AI systems they have sold in the EU market. If they don't take adequate corrective action within the deadline, the rules in Article 79(5) to (9) apply. If a provider deliberately misclassifies an AI system as low-risk to avoid high-risk requirements, they will be fined according to Article 99. Market surveillance authorities can monitor compliance and perform inspections, using information from the EU database mentioned in Article 71. If a Member State's market surveillance authority objects to another Member State's enforcement action within three months (or 30 days for banned AI practices), or if the EU Commission believes the action violates EU law, the Commission will consult with the relevant authority and operators to review the national measure.

<p>provider of an AI system concerned does not bring <br /> the AI system into compliance with those requirements and obligations within the period referred to in paragraph 2 of this <br /> Article, the provider shall be subject to fines in accordance with Article 99.<br /> 5.<br /> The provider shall ensure that all appropriate corrective action is taken in respect of all the AI systems concerned that <br /> it has made available on the Union market.<br /> 6.<br /> Where the provider of the AI system concerned does not take adequate corrective action within the period referred to <br /> in paragraph 2 of this Article, Article 79(5) to (9) shall apply.<br /> 7.<br /> Where, in the course of the evaluation pursuant to paragraph 1 of this Article, the market surveillance authority <br /> establishes that the AI system was misclassified by the provider as non-high-risk in order to circumvent the application of <br /> requirements in Chapter III, Section 2, the provider shall be subject to fines in accordance with Article 99.<br /> 8.<br /> In exercising their power to monitor the application of this Article, and in accordance with Article 11 of Regulation <br /> (EU) 2019/1020, market surveillance authorities may perform appropriate checks, taking into account in particular <br /> information stored in the EU database referred to in Article 71 of this Regulation.<br /> Article 81<br /> Union safeguard procedure<br /> 1.<br /> Where, within three months of receipt of the notification referred to in Article 79(5), or within 30 days in the case of <br /> non-compliance with the prohibition of the AI practices referred to in Article 5, objections are raised by the market <br /> surveillance authority of a Member State to a measure taken by another market surveillance authority, or where the <br /> Commission considers the measure to be contrary to Union law, the Commission shall without undue delay enter into <br /> consultation with the market surveillance authority of the relevant Member State and the operator or operators, and shall <br /> evaluate the national measure.</p>
Show original text

If the Commission believes a Member State's measure violates EU law, it must quickly consult with that Member State's market surveillance authority and the AI system operator(s), then evaluate the measure. Within six months (or 60 days for violations of the AI practices banned in Article 5), the Commission decides whether the measure is justified and notifies the relevant Member State's market surveillance authority. It also informs all other market surveillance authorities of its decision.

If the Commission approves the Member State's measure, all Member States must take action to restrict the AI system, such as removing it from their markets without delay, and report this to the Commission. If the Commission rejects the measure, the Member State must withdraw it and inform the Commission.

If the measure is approved and the AI system's non-compliance is caused by problems in the harmonised standards or common specifications mentioned in Articles 40 and 41, the Commission must follow the procedure outlined in Article 11 of Regulation (EU) No 1025/2012.

Article 82 addresses compliant AI systems that still present a risk.

<p>or where the <br /> Commission considers the measure to be contrary to Union law, the Commission shall without undue delay enter into <br /> consultation with the market surveillance authority of the relevant Member State and the operator or operators, and shall <br /> evaluate the national measure. On the basis of the results of that evaluation, the Commission shall, within six months, or <br /> within 60 days in the case of non-compliance with the prohibition of the AI practices referred to in Article 5, starting from <br /> the notification referred to in Article 79(5), decide whether the national measure is justified and shall notify its decision to <br /> the market surveillance authority of the Member State concerned. The Commission shall also inform all other market <br /> surveillance authorities of its decision.<br /> 2.<br /> Where the Commission considers the measure taken by the relevant Member State to be justified, all Member States <br /> shall ensure that they take appropriate restrictive measures in respect of the AI system concerned, such as requiring the <br /> withdrawal of the AI system from their market without undue delay, and shall inform the Commission accordingly. Where <br /> the Commission considers the national measure to be unjustified, the Member State concerned shall withdraw the measure <br /> and shall inform the Commission accordingly.<br /> 3.<br /> Where the national measure is considered justified and the non-compliance of the AI system is attributed to <br /> shortcomings in the harmonised standards or common specifications referred to in Articles 40 and 41 of this Regulation, <br /> the Commission shall apply the procedure provided for in Article 11 of Regulation (EU) No 1025/2012.<br /> Article 82<br /> Compliant AI systems which present a risk<br /> 1.</p>
Show original text

Article 82 addresses high-risk AI systems that comply with regulations but still pose risks. If a market surveillance authority in a Member State finds that a compliant high-risk AI system presents a risk to people's health, safety, fundamental rights, or public interest, it must order the operator to fix the problem without unnecessary delay. The operator must apply these corrections to all affected AI systems it has distributed in the EU within the timeframe set by the authority. Member States must immediately notify the Commission and other Member States about such findings, providing details about the AI system, its origin, supply chain, the nature of the risk, and the corrective measures taken. The Commission will then consult with the affected Member States and operators to evaluate whether the national measures taken are appropriate.

<p>40 and 41 of this Regulation, <br /> the Commission shall apply the procedure provided for in Article 11 of Regulation (EU) No 1025/2012.<br /> Article 82<br /> Compliant AI systems which present a risk<br /> 1.<br /> Where, having performed an evaluation under Article 79, after consulting the relevant national public authority <br /> referred to in Article 77(1), the market surveillance authority of a Member State finds that although a high-risk AI system <br /> complies with this Regulation, it nevertheless presents a risk to the health or safety of persons, to fundamental rights, or to <br /> other aspects of public interest protection, it shall require the relevant operator to take all appropriate measures to ensure <br /> that the AI system concerned, when placed on the market or put into service, no longer presents that risk without undue <br /> delay, within a period it may prescribe.<br /> EN<br /> OJ L, 12.7.2024<br /> 108/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>2.<br /> The provider or other relevant operator shall ensure that corrective action is taken in respect of all the AI systems <br /> concerned that it has made available on the Union market within the timeline prescribed by the market surveillance <br /> authority of the Member State referred to in paragraph 1.<br /> 3.<br /> The Member States shall immediately inform the Commission and the other Member States of a finding under <br /> paragraph 1. That information shall include all available details, in particular the data necessary for the identification of the <br /> AI system concerned, the origin and the supply chain of the AI system, the nature of the risk involved and the nature and <br /> duration of the national measures taken.<br /> 4.<br /> The Commission shall without undue delay enter into consultation with the Member States concerned and the <br /> relevant operators, and shall evaluate the national measures taken.</p>
Show original text

The Commission must quickly consult with the affected Member States and relevant operators to evaluate the national measures taken. Based on this evaluation, the Commission decides if the measure is justified and may propose other appropriate measures. The Commission immediately informs the Member States and operators of its decision and notifies other Member States.

Market surveillance authorities must require providers to fix formal non-compliance issues, such as: incorrect or missing CE marking, missing or incorrectly prepared EU declaration of conformity, failure to register in the EU database, missing authorized representative (where required), or unavailable technical documentation. Providers must correct these issues within a specified timeframe.

If non-compliance continues, the market surveillance authority must take appropriate action to restrict or prohibit the high-risk AI system from being sold, or ensure it is recalled or withdrawn from the market immediately.

The Commission will designate one or more Union AI testing support structures to perform AI-related testing tasks as specified in EU Regulation 2019/1020.

<p>system, the nature of the risk involved and the nature and <br /> duration of the national measures taken.<br /> 4.<br /> The Commission shall without undue delay enter into consultation with the Member States concerned and the <br /> relevant operators, and shall evaluate the national measures taken. On the basis of the results of that evaluation, the <br /> Commission shall decide whether the measure is justified and, where necessary, propose other appropriate measures.<br /> 5.<br /> The Commission shall immediately communicate its decision to the Member States concerned and to the relevant <br /> operators. It shall also inform the other Member States.<br /> Article 83<br /> Formal non-compliance<br /> 1.<br /> Where the market surveillance authority of a Member State makes one of the following findings, it shall require the <br /> relevant provider to put an end to the non-compliance concerned, within a period it may prescribe:<br /> (a) the CE marking has been affixed in violation of Article 48;<br /> (b) the CE marking has not been affixed;<br /> (c) the EU declaration of conformity referred to in Article 47 has not been drawn up;<br /> (d) the EU declaration of conformity referred to in Article 47 has not been drawn up correctly;<br /> (e) the registration in the EU database referred to in Article 71 has not been carried out;<br /> (f) where applicable, no authorised representative has been appointed;<br /> (g) technical documentation is not available.<br /> 2.<br /> Where the non-compliance referred to in paragraph 1 persists, the market surveillance authority of the Member State <br /> concerned shall take appropriate and proportionate measures to restrict or prohibit the high-risk AI system being made <br /> available on the market or to ensure that it is recalled or withdrawn from the market without delay.<br /> Article 84<br /> Union AI testing support structures<br /> 1.<br /> The Commission shall designate one or more Union AI testing support structures to perform the tasks listed under <br /> Article 21(6) of Regulation (EU) 2019/1020 in the area of AI.<br /> 2.</p>
Show original text

TESTING SUPPORT STRUCTURES

  1. The Commission will choose one or more EU AI testing support structures to carry out the tasks described in Article 21(6) of Regulation (EU) 2019/1020 for artificial intelligence.

  2. In addition to the tasks in paragraph 1, these EU AI testing support structures will provide independent technical or scientific advice when requested by the Board, the Commission, or market surveillance authorities.

SECTION 4: REMEDIES

ARTICLE 85 - Filing Complaints with Market Surveillance Authorities

Any person or organization that believes this Regulation has been broken can file a complaint with the appropriate market surveillance authority. These complaints will be used to help conduct market surveillance activities and will be handled according to the procedures set up by market surveillance authorities under Regulation (EU) 2019/1020.

ARTICLE 86 - Right to Understand AI Decisions

  1. If a deployer makes a decision using a high-risk AI system listed in Annex III (except those in point 2), and that decision has legal consequences or significantly affects a person's health, safety, or fundamental rights, that person has the right to receive a clear explanation from the deployer. This explanation must describe how the AI system was used in the decision-making process and what the main reasons for the decision were.

  2. [Text continues...]

<p>testing support structures<br /> 1.<br /> The Commission shall designate one or more Union AI testing support structures to perform the tasks listed under <br /> Article 21(6) of Regulation (EU) 2019/1020 in the area of AI.<br /> 2.<br /> Without prejudice to the tasks referred to in paragraph 1, Union AI testing support structures shall also provide <br /> independent technical or scientific advice at the request of the Board, the Commission, or of market surveillance authorities.<br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 109/144</p> <p>SECTION 4<br /> Remedies<br /> Article 85<br /> Right to lodge a complaint with a market surveillance authority<br /> Without prejudice to other administrative or judicial remedies, any natural or legal person having grounds to consider that <br /> there has been an infringement of the provisions of this Regulation may submit complaints to the relevant market <br /> surveillance authority.<br /> In accordance with Regulation (EU) 2019/1020, such complaints shall be taken into account for the purpose of conducting <br /> market surveillance activities, and shall be handled in line with the dedicated procedures established therefor by the market <br /> surveillance authorities.<br /> Article 86<br /> Right to explanation of individual decision-making<br /> 1.<br /> Any affected person subject to a decision which is taken by the deployer on the basis of the output from a high-risk AI <br /> system listed in Annex III, with the exception of systems listed under point 2 thereof, and which produces legal effects or <br /> similarly significantly affects that person in a way that they consider to have an adverse impact on their health, safety or <br /> fundamental rights shall have the right to obtain from the deployer clear and meaningful explanations of the role of the AI <br /> system in the decision-making procedure and the main elements of the decision taken.<br /> 2.</p>
Show original text

People who are negatively affected by an AI system's decision regarding their health, safety, or basic rights can ask the organization using the AI system to explain how the AI was used in making that decision and what the main reasons for the decision were. However, this right does not apply if EU or national laws provide exceptions or restrictions to this requirement. This rule only applies when the right to explanation is not already guaranteed by other EU laws. When someone reports violations of AI regulations, they are protected under EU Directive 2019/1937. The European Commission has sole authority to oversee and enforce rules for general-purpose AI model providers, with support from the AI Office. Market surveillance authorities can ask the Commission to take action if needed to help them carry out their duties under these regulations.

<p>adverse impact on their health, safety or <br /> fundamental rights shall have the right to obtain from the deployer clear and meaningful explanations of the role of the AI <br /> system in the decision-making procedure and the main elements of the decision taken.<br /> 2.<br /> Paragraph 1 shall not apply to the use of AI systems for which exceptions from, or restrictions to, the obligation <br /> under that paragraph follow from Union or national law in compliance with Union law.<br /> 3.<br /> This Article shall apply only to the extent that the right referred to in paragraph 1 is not otherwise provided for under <br /> Union law.<br /> Article 87<br /> Reporting of infringements and protection of reporting persons<br /> Directive (EU) 2019/1937 shall apply to the reporting of infringements of this Regulation and the protection of persons <br /> reporting such infringements.<br /> SECTION 5<br /> Supervision, investigation, enforcement and monitoring in respect of providers of general-purpose AI models<br /> Article 88<br /> Enforcement of the obligations of providers of general-purpose AI models<br /> 1.<br /> The Commission shall have exclusive powers to supervise and enforce Chapter V, taking into account the procedural <br /> guarantees under Article 94. The Commission shall entrust the implementation of these tasks to the AI Office, without <br /> prejudice to the powers of organisation of the Commission and the division of competences between Member States and <br /> the Union based on the Treaties.<br /> 2.<br /> Without prejudice to Article 75(3), market surveillance authorities may request the Commission to exercise the <br /> powers laid down in this Section, where that is necessary and proportionate to assist with the fulfilment of their tasks under <br /> this Regulation.<br /> EN<br /> OJ L, 12.7.2024<br /> 110/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>Article 89<br /> Monitoring actions<br /> 1.</p>
Show original text

Article 89 - Monitoring Actions: The AI Office can take necessary steps to monitor whether providers of general-purpose AI models follow this Regulation and comply with approved codes of practice. Downstream providers (companies using these models) can file complaints if they believe a provider violated the Regulation. Complaints must include: the provider's contact information, a description of what happened and which rules were broken, and an explanation of why the downstream provider believes a violation occurred. Additional relevant information can also be included.

Article 90 - Risk Alerts from the Scientific Panel: The scientific panel can alert the AI Office if it suspects that a general-purpose AI model creates a serious, identifiable risk across the European Union, or if it meets certain conditions listed in Article 51. When such an alert is issued, the Commission (through the AI Office and after notifying the Board) can use its powers to investigate the matter. The AI Office must inform the Board about any actions taken under Articles 91 to 94.

<p>Regulation.<br /> EN<br /> OJ L, 12.7.2024<br /> 110/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>Article 89<br /> Monitoring actions<br /> 1.<br /> For the purpose of carrying out the tasks assigned to it under this Section, the AI Office may take the necessary <br /> actions to monitor the effective implementation and compliance with this Regulation by providers of general-purpose AI <br /> models, including their adherence to approved codes of practice.<br /> 2.<br /> Downstream providers shall have the right to lodge a complaint alleging an infringement of this Regulation. <br /> A complaint shall be duly reasoned and indicate at least:<br /> (a) the point of contact of the provider of the general-purpose AI model concerned;<br /> (b) a description of the relevant facts, the provisions of this Regulation concerned, and the reason why the downstream <br /> provider considers that the provider of the general-purpose AI model concerned infringed this Regulation;<br /> (c) any other information that the downstream provider that sent the request considers relevant, including, where <br /> appropriate, information gathered on its own initiative.<br /> Article 90<br /> Alerts of systemic risks by the scientific panel<br /> 1.<br /> The scientific panel may provide a qualified alert to the AI Office where it has reason to suspect that:<br /> (a) a general-purpose AI model poses concrete identifiable risk at Union level; or<br /> (b) a general-purpose AI model meets the conditions referred to in Article 51.<br /> 2.<br /> Upon such qualified alert, the Commission, through the AI Office and after having informed the Board, may exercise <br /> the powers laid down in this Section for the purpose of assessing the matter. The AI Office shall inform the Board of any <br /> measure according to Articles 91 to 94.<br /> 3.</p>
Show original text

The AI Office, after notifying the Board, can use its powers to investigate matters related to general-purpose AI models with systemic risks. The AI Office must inform the Board about any actions taken under Articles 91 to 94.

A qualified alert must be well-reasoned and include: (a) the provider's contact information for the AI model with systemic risk; (b) a description of the relevant facts and reasons for the alert from the scientific panel; (c) any other relevant information the scientific panel considers important, including information gathered independently.

Article 91 - Power to Request Documentation and Information

The Commission can ask the provider of a general-purpose AI model to provide documentation created according to Articles 53 and 55, or any additional information needed to check if the provider follows this Regulation.

Before requesting information, the AI Office may have a structured discussion with the provider.

If the scientific panel makes a justified request, the Commission can ask a provider for information when it is necessary and appropriate for the scientific panel's tasks under Article 68(2).

Any request for information must state the legal reason and purpose, specify what information is needed, set a deadline for providing it, and mention the penalties in Article 101 for providing false, incomplete, or misleading information.

<p>the AI Office and after having informed the Board, may exercise <br /> the powers laid down in this Section for the purpose of assessing the matter. The AI Office shall inform the Board of any <br /> measure according to Articles 91 to 94.<br /> 3.<br /> A qualified alert shall be duly reasoned and indicate at least:<br /> (a) the point of contact of the provider of the general-purpose AI model with systemic risk concerned;<br /> (b) a description of the relevant facts and the reasons for the alert by the scientific panel;<br /> (c) any other information that the scientific panel considers to be relevant, including, where appropriate, information <br /> gathered on its own initiative.<br /> Article 91<br /> Power to request documentation and information<br /> 1.<br /> The Commission may request the provider of the general-purpose AI model concerned to provide the documentation <br /> drawn up by the provider in accordance with Articles 53 and 55, or any additional information that is necessary for the <br /> purpose of assessing compliance of the provider with this Regulation.<br /> 2.<br /> Before sending the request for information, the AI Office may initiate a structured dialogue with the provider of the <br /> general-purpose AI model.<br /> 3.<br /> Upon a duly substantiated request from the scientific panel, the Commission may issue a request for information to <br /> a provider of a general-purpose AI model, where the access to information is necessary and proportionate for the fulfilment <br /> of the tasks of the scientific panel under Article 68(2).<br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 111/144</p> <p>4.<br /> The request for information shall state the legal basis and the purpose of the request, specify what information is <br /> required, set a period within which the information is to be provided, and indicate the fines provided for in Article 101 for <br /> supplying incorrect, incomplete or misleading information.<br /> 5.</p>
Show original text

When requesting information, authorities must explain the legal reason for the request, specify what information is needed, set a deadline for providing it, and mention the penalties under Article 101 for giving false, incomplete, or misleading information.

The AI model provider, or their authorized representative, must supply the requested information. For companies or organizations, authorized legal representatives or lawyers acting on behalf of the provider can submit the information. However, the provider remains fully responsible if the information is incomplete, incorrect, or misleading.

Article 92 - Power to Conduct Evaluations

The AI Office, after consulting the Board, may evaluate general-purpose AI models to: (a) check if the provider follows the rules in this Regulation when gathered information is not enough, or (b) investigate risks to the European Union from AI models with systemic risks, especially after receiving a qualified alert from the scientific panel.

The Commission may hire independent experts to conduct these evaluations, including experts from the scientific panel. These experts must meet the criteria set out in Article 68(2).

To evaluate the AI model, the Commission may request access to it through APIs, technical tools, or source code.

When requesting access, the Commission must state the legal reason, purpose, reasons for the request, set a deadline for providing access, and mention the penalties under Article 101 for refusing to provide access.

<p>basis and the purpose of the request, specify what information is <br /> required, set a period within which the information is to be provided, and indicate the fines provided for in Article 101 for <br /> supplying incorrect, incomplete or misleading information.<br /> 5.<br /> The provider of the general-purpose AI model concerned, or its representative shall supply the information requested. <br /> In the case of legal persons, companies or firms, or where the provider has no legal personality, the persons authorised to <br /> represent them by law or by their statutes, shall supply the information requested on behalf of the provider of the <br /> general-purpose AI model concerned. Lawyers duly authorised to act may supply information on behalf of their clients. The <br /> clients shall nevertheless remain fully responsible if the information supplied is incomplete, incorrect or misleading.<br /> Article 92<br /> Power to conduct evaluations<br /> 1.<br /> The AI Office, after consulting the Board, may conduct evaluations of the general-purpose AI model concerned:<br /> (a) to assess compliance of the provider with obligations under this Regulation, where the information gathered pursuant <br /> to Article 91 is insufficient; or<br /> (b) to investigate systemic risks at Union level of general-purpose AI models with systemic risk, in particular following <br /> a qualified alert from the scientific panel in accordance with Article 90(1), point (a).<br /> 2.<br /> The Commission may decide to appoint independent experts to carry out evaluations on its behalf, including from the <br /> scientific panel established pursuant to Article 68. Independent experts appointed for this task shall meet the criteria <br /> outlined in Article 68(2).<br /> 3.<br /> For the purposes of paragraph 1, the Commission may request access to the general-purpose AI model concerned <br /> through APIs or further appropriate technical means and tools, including source code.<br /> 4.<br /> The request for access shall state the legal basis, the purpose and reasons of the request and set the period within <br /> which the access is to be provided, and the fines provided for in Article 101 for failure to provide access.<br /> 5.</p>
Show original text

When requesting access to an AI model, the request must include the legal reason, purpose, explanation, and timeline for providing access. It must also mention the penalties in Article 101 for not providing access.

The company that created the AI model (or its authorized representative) must provide the requested information. For companies or organizations, the legally authorized representatives must supply the access on behalf of the AI model provider.

The Commission will create detailed rules for how evaluations will be conducted, including how independent experts will be involved and selected. These rules will follow the standard approval process described in Article 98(2).

Before requesting access to an AI model, the AI Office can have a preliminary discussion with the provider to learn more about their internal testing, safety measures, and other steps they have taken to prevent serious risks.

Under Article 93, the Commission can require providers to take action when necessary. This includes: (a) taking appropriate steps to meet the requirements in Articles 53 and 54; (b) implementing safety measures if an evaluation shows serious concerns about risks across the EU; and (c) limiting the sale, removing, or recalling the model from the market.

<p>4.<br /> The request for access shall state the legal basis, the purpose and reasons of the request and set the period within <br /> which the access is to be provided, and the fines provided for in Article 101 for failure to provide access.<br /> 5.<br /> The providers of the general-purpose AI model concerned or its representative shall supply the information requested. <br /> In the case of legal persons, companies or firms, or where the provider has no legal personality, the persons authorised to <br /> represent them by law or by their statutes, shall provide the access requested on behalf of the provider of the <br /> general-purpose AI model concerned.<br /> 6.<br /> The Commission shall adopt implementing acts setting out the detailed arrangements and the conditions for the <br /> evaluations, including the detailed arrangements for involving independent experts, and the procedure for the selection <br /> thereof. Those implementing acts shall be adopted in accordance with the examination procedure referred to in Article <br /> 98(2).<br /> 7.<br /> Prior to requesting access to the general-purpose AI model concerned, the AI Office may initiate a structured dialogue <br /> with the provider of the general-purpose AI model to gather more information on the internal testing of the model, internal <br /> safeguards for preventing systemic risks, and other internal procedures and measures the provider has taken to mitigate <br /> such risks.<br /> Article 93<br /> Power to request measures<br /> 1.<br /> Where necessary and appropriate, the Commission may request providers to:<br /> (a) take appropriate measures to comply with the obligations set out in Articles 53 and 54;<br /> EN<br /> OJ L, 12.7.2024<br /> 112/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>(b) implement mitigation measures, where the evaluation carried out in accordance with Article 92 has given rise to serious <br /> and substantiated concern of a systemic risk at Union level;<br /> (c) restrict the making available on the market, withdraw or recall the model.<br /> 2.</p>
Show original text

The AI Office can take action if an evaluation shows serious concerns about systemic risk across the EU. Possible actions include requiring safety measures, restricting market availability, withdrawing products, or recalling models. Before taking action, the AI Office may have structured discussions with the AI model provider. If the provider agrees to implement safety measures during these discussions, the Commission can make those commitments legally binding and close the matter. Providers of general-purpose AI models have procedural rights under EU Regulation 2019/1020. The AI Office and Member States will encourage companies to develop voluntary codes of conduct and governance rules. These codes will help non-high-risk AI systems follow some or all of the safety requirements outlined in Chapter III, Section 2, using available technical solutions and industry best practices.

<p>mitigation measures, where the evaluation carried out in accordance with Article 92 has given rise to serious <br /> and substantiated concern of a systemic risk at Union level;<br /> (c) restrict the making available on the market, withdraw or recall the model.<br /> 2.<br /> Before a measure is requested, the AI Office may initiate a structured dialogue with the provider of the <br /> general-purpose AI model.<br /> 3.<br /> If, during the structured dialogue referred to in paragraph 2, the provider of the general-purpose AI model with <br /> systemic risk offers commitments to implement mitigation measures to address a systemic risk at Union level, the <br /> Commission may, by decision, make those commitments binding and declare that there are no further grounds for action.<br /> Article 94<br /> Procedural rights of economic operators of the general-purpose AI model<br /> Article 18 of Regulation (EU) 2019/1020 shall apply mutatis mutandis to the providers of the general-purpose AI model, <br /> without prejudice to more specific procedural rights provided for in this Regulation.<br /> CHAPTER X<br /> CODES OF CONDUCT AND GUIDELINES<br /> Article 95<br /> Codes of conduct for voluntary application of specific requirements<br /> 1.<br /> The AI Office and the Member States shall encourage and facilitate the drawing up of codes of conduct, including <br /> related governance mechanisms, intended to foster the voluntary application to AI systems, other than high-risk AI systems, <br /> of some or all of the requirements set out in Chapter III, Section 2 taking into account the available technical solutions and <br /> industry best practices allowing for the application of such requirements.<br /> 2.</p>
Show original text

The AI Office and Member States will help create voluntary codes of conduct for AI systems. These codes will set standards that AI developers and users can follow, based on clear goals and measurable performance indicators. The codes should address: ethical AI guidelines, environmental impact and energy efficiency, AI education and training, diverse and inclusive AI development teams, and protection of vulnerable people and groups including those with disabilities and gender equality concerns. Any AI provider, deployer, or organization representing them can create these codes, working with stakeholders, civil society, and academic institutions. The codes can cover one or more AI systems with similar purposes. The AI Office and Member States will pay special attention to the needs of small and medium-sized businesses and startups when developing these codes.

<p>to AI systems, other than high-risk AI systems, <br /> of some or all of the requirements set out in Chapter III, Section 2 taking into account the available technical solutions and <br /> industry best practices allowing for the application of such requirements.<br /> 2.<br /> The AI Office and the Member States shall facilitate the drawing up of codes of conduct concerning the voluntary <br /> application, including by deployers, of specific requirements to all AI systems, on the basis of clear objectives and key <br /> performance indicators to measure the achievement of those objectives, including elements such as, but not limited to:<br /> (a) applicable elements provided for in Union ethical guidelines for trustworthy AI;<br /> (b) assessing and minimising the impact of AI systems on environmental sustainability, including as regards energy-efficient <br /> programming and techniques for the efficient design, training and use of AI;<br /> (c) promoting AI literacy, in particular that of persons dealing with the development, operation and use of AI;<br /> (d) facilitating an inclusive and diverse design of AI systems, including through the establishment of inclusive and diverse <br /> development teams and the promotion of stakeholders’ participation in that process;<br /> (e) assessing and preventing the negative impact of AI systems on vulnerable persons or groups of vulnerable persons, <br /> including as regards accessibility for persons with a disability, as well as on gender equality.<br /> 3.<br /> Codes of conduct may be drawn up by individual providers or deployers of AI systems or by organisations <br /> representing them or by both, including with the involvement of any interested stakeholders and their representative <br /> organisations, including civil society organisations and academia. Codes of conduct may cover one or more AI systems <br /> taking into account the similarity of the intended purpose of the relevant systems.<br /> 4.<br /> The AI Office and the Member States shall take into account the specific interests and needs of SMEs, including <br /> start-ups, when encouraging and facilitating the drawing up of codes of conduct.<br /> OJ L, 12.7.</p>
Show original text

The AI Office and Member States must consider the specific needs of small and medium-sized businesses (SMEs) and start-ups when creating codes of conduct for AI systems.

The European Commission will create guidelines to help implement this AI regulation. These guidelines will cover: how to follow the main requirements in Articles 8-15 and Article 25; what practices are banned under Article 5; how to handle significant changes to AI systems; transparency rules from Article 50; how this regulation relates to other EU laws listed in Annex I; and how to define an AI system according to Article 3.

When writing these guidelines, the Commission will focus on the needs of SMEs, start-ups, local governments, and industries most affected by this regulation. The guidelines will also consider current best practices in AI technology and relevant industry standards mentioned in Articles 40 and 41.

The Commission can update these guidelines at any time if Member States, the AI Office, or the Commission itself thinks changes are necessary.

<p>systems.<br /> 4.<br /> The AI Office and the Member States shall take into account the specific interests and needs of SMEs, including <br /> start-ups, when encouraging and facilitating the drawing up of codes of conduct.<br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 113/144</p> <p>Article 96<br /> Guidelines from the Commission on the implementation of this Regulation<br /> 1.<br /> The Commission shall develop guidelines on the practical implementation of this Regulation, and in particular on:<br /> (a) the application of the requirements and obligations referred to in Articles 8 to 15 and in Article 25;<br /> (b) the prohibited practices referred to in Article 5;<br /> (c) the practical implementation of the provisions related to substantial modification;<br /> (d) the practical implementation of transparency obligations laid down in Article 50;<br /> (e) detailed information on the relationship of this Regulation with the Union harmonisation legislation listed in Annex I, <br /> as well as with other relevant Union law, including as regards consistency in their enforcement;<br /> (f) the application of the definition of an AI system as set out in Article 3, point (1).<br /> When issuing such guidelines, the Commission shall pay particular attention to the needs of SMEs including start-ups, of <br /> local public authorities and of the sectors most likely to be affected by this Regulation.<br /> The guidelines referred to in the first subparagraph of this paragraph shall take due account of the generally acknowledged <br /> state of the art on AI, as well as of relevant harmonised standards and common specifications that are referred to in <br /> Articles 40 and 41, or of those harmonised standards or technical specifications that are set out pursuant to Union <br /> harmonisation law.<br /> 2.<br /> At the request of the Member States or the AI Office, or on its own initiative, the Commission shall update guidelines <br /> previously adopted when deemed necessary.</p>
Show original text

The Commission can create updated guidelines whenever the Member States, the AI Office, or the Commission itself thinks it's necessary.

The Commission has the power to make delegated acts (detailed rules) for specific articles listed in this section. This power lasts for five years starting August 1, 2024. Before the five-year period ends, the Commission must write a report about how it used this power. After five years, the power automatically continues for another five-year period unless the European Parliament or the Council objects within three months before the period ends. Either the European Parliament or the Council can cancel this power at any time, and that cancellation takes effect immediately.

<p>standards or technical specifications that are set out pursuant to Union <br /> harmonisation law.<br /> 2.<br /> At the request of the Member States or the AI Office, or on its own initiative, the Commission shall update guidelines <br /> previously adopted when deemed necessary.<br /> CHAPTER XI<br /> DELEGATION OF POWER AND COMMITTEE PROCEDURE<br /> Article 97<br /> Exercise of the delegation<br /> 1.<br /> The power to adopt delegated acts is conferred on the Commission subject to the conditions laid down in this Article.<br /> 2.<br /> The power to adopt delegated acts referred to in Article 6(6) and (7), Article 7(1) and (3), Article 11(3), Article 43(5) <br /> and (6), Article 47(5), Article 51(3), Article 52(4) and Article 53(5) and (6) shall be conferred on the Commission for <br /> a period of five years from 1 August 2024. The Commission shall draw up a report in respect of the delegation of power <br /> not later than nine months before the end of the five-year period. The delegation of power shall be tacitly extended for <br /> periods of an identical duration, unless the European Parliament or the Council opposes such extension not later than three <br /> months before the end of each period.<br /> 3.<br /> The delegation of power referred to in Article 6(6) and (7), Article 7(1) and (3), Article 11(3), Article 43(5) and (6), <br /> Article 47(5), Article 51(3), Article 52(4) and Article 53(5) and (6) may be revoked at any time by the European Parliament <br /> or by the Council. A decision of revocation shall put an end to the delegation of power specified in that decision.</p>
Show original text

The European Parliament or Council can cancel delegation powers at any time under Articles 52(4) and 53(5)-(6). The cancellation takes effect the day after it is published in the Official Journal of the European Union, or on a later date if specified. Any delegated acts already in force remain valid. Before creating a delegated act, the Commission must consult experts from each Member State following the Interinstitutional Agreement of 13 April 2016 on Better Law-Making. The Commission must immediately notify both the European Parliament and Council when it adopts a delegated act. Delegated acts created under Articles 6(6)-(7), 7(1) or (3), 11(3), 43(5)-(6), 47(5), 51(3), 52(4), or 53(5)-(6) only take effect if neither the European Parliament nor Council objects within three months of notification, or if both institutions inform the Commission they will not object before the deadline. Either institution can extend this period by three additional months. The Commission is assisted by a committee in this process.

<p>52(4) and Article 53(5) and (6) may be revoked at any time by the European Parliament <br /> or by the Council. A decision of revocation shall put an end to the delegation of power specified in that decision. It shall <br /> take effect the day following that of its publication in the Official Journal of the European Union or at a later date specified <br /> therein. It shall not affect the validity of any delegated acts already in force.<br /> 4.<br /> Before adopting a delegated act, the Commission shall consult experts designated by each Member State in accordance <br /> with the principles laid down in the Interinstitutional Agreement of 13 April 2016 on Better Law-Making.<br /> EN<br /> OJ L, 12.7.2024<br /> 114/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>5.<br /> As soon as it adopts a delegated act, the Commission shall notify it simultaneously to the European Parliament and to <br /> the Council.<br /> 6.<br /> Any delegated act adopted pursuant to Article 6(6) or (7), Article 7(1) or (3), Article 11(3), Article 43(5) or (6), <br /> Article 47(5), Article 51(3), Article 52(4) or Article 53(5) or (6) shall enter into force only if no objection has been <br /> expressed by either the European Parliament or the Council within a period of three months of notification of that act to <br /> the European Parliament and the Council or if, before the expiry of that period, the European Parliament and the Council <br /> have both informed the Commission that they will not object. That period shall be extended by three months at the <br /> initiative of the European Parliament or of the Council.<br /> Article 98<br /> Committee procedure<br /> 1.<br /> The Commission shall be assisted by a committee.</p>
Show original text

The European Parliament or Council can extend a review period by three months if they choose to do so.

A committee will assist the Commission in implementing this regulation. This committee follows the rules set out in EU Regulation 182/2011. When this regulation refers to the committee, Article 5 of Regulation 182/2011 applies.

Member States must create and enforce penalty rules for companies that break this regulation. Penalties can include warnings, fines, or other non-monetary measures. These penalties must be fair, proportionate, and effective. They should consider the needs of small and medium-sized businesses and startups. Member States must tell the Commission about their penalty rules by the date this regulation takes effect, and inform the Commission of any changes afterward.

Companies that violate the prohibited AI practices listed in Article 5 face administrative fines of up to EUR 35 million or up to 7% of their total worldwide annual turnover from the previous year—whichever amount is higher.

<p>informed the Commission that they will not object. That period shall be extended by three months at the <br /> initiative of the European Parliament or of the Council.<br /> Article 98<br /> Committee procedure<br /> 1.<br /> The Commission shall be assisted by a committee. That committee shall be a committee within the meaning of <br /> Regulation (EU) No 182/2011.<br /> 2.<br /> Where reference is made to this paragraph, Article 5 of Regulation (EU) No 182/2011 shall apply.<br /> CHAPTER XII<br /> PENALTIES<br /> Article 99<br /> Penalties<br /> 1.<br /> In accordance with the terms and conditions laid down in this Regulation, Member States shall lay down the rules on <br /> penalties and other enforcement measures, which may also include warnings and non-monetary measures, applicable to <br /> infringements of this Regulation by operators, and shall take all measures necessary to ensure that they are properly and <br /> effectively implemented, thereby taking into account the guidelines issued by the Commission pursuant to Article 96. The <br /> penalties provided for shall be effective, proportionate and dissuasive. They shall take into account the interests of SMEs, <br /> including start-ups, and their economic viability.<br /> 2.<br /> The Member States shall, without delay and at the latest by the date of entry into application, notify the Commission <br /> of the rules on penalties and of other enforcement measures referred to in paragraph 1, and shall notify it, without delay, of <br /> any subsequent amendment to them.<br /> 3.<br /> Non-compliance with the prohibition of the AI practices referred to in Article 5 shall be subject to administrative <br /> fines of up to EUR 35 000 000 or, if the offender is an undertaking, up to 7 % of its total worldwide annual turnover for the <br /> preceding financial year, whichever is higher.<br /> 4.</p>
Show original text

Companies that break the rules can face penalties. The most serious violations result in fines up to EUR 35 million or 7% of the company's total worldwide annual turnover from the previous year, whichever is higher. Less serious violations related to operators and notified bodies (such as failures by providers under Article 16, authorized representatives under Article 22, importers under Article 23, distributors under Article 24, deployers under Article 26, notified bodies under Articles 31, 33, or 34, and transparency obligations under Article 50) result in fines up to EUR 15 million or 3% of annual worldwide turnover, whichever is higher. Providing false, incomplete, or misleading information to notified bodies or national authorities results in fines up to EUR 7.5 million or 1% of annual worldwide turnover, whichever is higher. Small and medium-sized enterprises (SMEs) and start-ups receive reduced fines—they pay the lower of the stated percentages or amounts.

<p>subject to administrative <br /> fines of up to EUR 35 000 000 or, if the offender is an undertaking, up to 7 % of its total worldwide annual turnover for the <br /> preceding financial year, whichever is higher.<br /> 4.<br /> Non-compliance with any of the following provisions related to operators or notified bodies, other than those laid <br /> down in Articles 5, shall be subject to administrative fines of up to EUR 15 000 000 or, if the offender is an undertaking, up <br /> to 3 % of its total worldwide annual turnover for the preceding financial year, whichever is higher:<br /> (a) obligations of providers pursuant to Article 16;<br /> (b) obligations of authorised representatives pursuant to Article 22;<br /> (c) obligations of importers pursuant to Article 23;<br /> (d) obligations of distributors pursuant to Article 24;<br /> (e) obligations of deployers pursuant to Article 26;<br /> (f) requirements and obligations of notified bodies pursuant to Article 31, Article 33(1), (3) and (4) or Article 34;<br /> (g) transparency obligations for providers and deployers pursuant to Article 50.<br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 115/144</p> <p>5.<br /> The supply of incorrect, incomplete or misleading information to notified bodies or national competent authorities in <br /> reply to a request shall be subject to administrative fines of up to EUR 7 500 000 or, if the offender is an undertaking, up to <br /> 1 % of its total worldwide annual turnover for the preceding financial year, whichever is higher.<br /> 6.<br /> In the case of SMEs, including start-ups, each fine referred to in this Article shall be up to the percentages or amount <br /> referred to in paragraphs 3, 4 and 5, whichever thereof is lower.<br /> 7.</p>
Show original text

For small and medium-sized enterprises (SMEs) and start-ups, fines cannot exceed the lower of the percentages or amounts specified in the previous sections.

When deciding whether to impose a fine and determining its amount, authorities must consider all relevant facts of the situation, including:

(a) The nature, seriousness, and length of the violation, including its effects, the purpose of the AI system involved, the number of people affected, and the extent of their losses;

(b) Whether other market surveillance authorities have already fined the same company for the same violation;

(c) Whether other authorities have already fined the same company for breaking other EU or national laws that stem from the same action or failure that caused this violation;

(d) The company's size, yearly revenue, and market share;

(e) Any other factors that make the violation worse or better, such as money gained or losses avoided through the violation;

(f) How much the company cooperated with authorities to fix the violation and reduce its harmful effects;

(g) How responsible the company is, based on the safety measures it put in place;

(h) How the violation was discovered, particularly whether the company reported it itself;

(i) Whether the company intentionally or carelessly caused the violation;

(j) Any steps the company took to help people harmed by the violation.

<p>6.<br /> In the case of SMEs, including start-ups, each fine referred to in this Article shall be up to the percentages or amount <br /> referred to in paragraphs 3, 4 and 5, whichever thereof is lower.<br /> 7.<br /> When deciding whether to impose an administrative fine and when deciding on the amount of the administrative fine <br /> in each individual case, all relevant circumstances of the specific situation shall be taken into account and, as appropriate, <br /> regard shall be given to the following:<br /> (a) the nature, gravity and duration of the infringement and of its consequences, taking into account the purpose of the AI <br /> system, as well as, where appropriate, the number of affected persons and the level of damage suffered by them;<br /> (b) whether administrative fines have already been applied by other market surveillance authorities to the same operator for <br /> the same infringement;<br /> (c) whether administrative fines have already been applied by other authorities to the same operator for infringements of <br /> other Union or national law, when such infringements result from the same activity or omission constituting a relevant <br /> infringement of this Regulation;<br /> (d) the size, the annual turnover and market share of the operator committing the infringement;<br /> (e) any other aggravating or mitigating factor applicable to the circumstances of the case, such as financial benefits gained, <br /> or losses avoided, directly or indirectly, from the infringement;<br /> (f) the degree of cooperation with the national competent authorities, in order to remedy the infringement and mitigate the <br /> possible adverse effects of the infringement;<br /> (g) the degree of responsibility of the operator taking into account the technical and organisational measures implemented <br /> by it;<br /> (h) the manner in which the infringement became known to the national competent authorities, in particular whether, and <br /> if so to what extent, the operator notified the infringement;<br /> (i) the intentional or negligent character of the infringement;<br /> (j) any action taken by the operator to mitigate the harm suffered by the affected persons.<br /> 8.</p>
Show original text

The text outlines rules for administrative fines related to data protection violations:

Key points about violations:
- Whether the operator reported the infringement
- Whether the violation was intentional or negligent
- What actions the operator took to reduce harm to affected people

Fines for public authorities:
- Each Member State decides how much to fine public authorities and bodies within their country

How fines are applied:
- Depending on each Member State's legal system, fines can be imposed by national courts or other authorized bodies
- The outcome should be equivalent regardless of which body imposes the fine

Protections:
- All fining decisions must follow proper legal procedures, including the right to appeal and fair treatment
- Member States must report annually to the European Commission about all fines issued and any related court cases

European level:
- The European Data Protection Supervisor can impose fines on EU institutions, bodies, offices, and agencies
- When deciding on a fine, all relevant circumstances must be considered

<p>, in particular whether, and <br /> if so to what extent, the operator notified the infringement;<br /> (i) the intentional or negligent character of the infringement;<br /> (j) any action taken by the operator to mitigate the harm suffered by the affected persons.<br /> 8.<br /> Each Member State shall lay down rules on to what extent administrative fines may be imposed on public authorities <br /> and bodies established in that Member State.<br /> 9.<br /> Depending on the legal system of the Member States, the rules on administrative fines may be applied in such <br /> a manner that the fines are imposed by competent national courts or by other bodies, as applicable in those Member States. <br /> The application of such rules in those Member States shall have an equivalent effect.<br /> 10.<br /> The exercise of powers under this Article shall be subject to appropriate procedural safeguards in accordance with <br /> Union and national law, including effective judicial remedies and due process.<br /> 11.<br /> Member States shall, on an annual basis, report to the Commission about the administrative fines they have issued <br /> during that year, in accordance with this Article, and about any related litigation or judicial proceedings.<br /> Article 100<br /> Administrative fines on Union institutions, bodies, offices and agencies<br /> 1.<br /> The European Data Protection Supervisor may impose administrative fines on Union institutions, bodies, offices and <br /> agencies falling within the scope of this Regulation. When deciding whether to impose an administrative fine and when <br /> deciding on the amount of the administrative fine in each individual case, all relevant circumstances of the specific situation <br /> shall be taken into account and due regard shall be given to the following:<br /> EN<br /> OJ L, 12.7.</p>
Show original text

When setting an administrative fine for a specific violation, all relevant circumstances must be considered. The following factors are especially important:

(a) The nature, seriousness, and length of the violation and its effects. This includes the purpose of the AI system involved, the number of people affected, and how much damage they suffered.

(b) How responsible the EU institution, body, office, or agency is. This considers the technical and organizational safety measures they put in place.

(c) Any steps the EU institution, body, office, or agency took to reduce harm to affected people.

(d) How well the EU institution, body, office, or agency worked with the European Data Protection Supervisor to fix the violation and prevent further harm. This includes following any previous orders from the Supervisor about the same issue.

(e) Any similar violations by the EU institution, body, office, or agency in the past.

(f) How the European Data Protection Supervisor found out about the violation. Specifically, whether the EU institution, body, office, or agency reported it themselves.

(g) The yearly budget of the EU institution, body, office, or agency.

Violations of the AI practice rules in Article 5 can result in fines up to EUR 1,500,000.

<p>when <br /> deciding on the amount of the administrative fine in each individual case, all relevant circumstances of the specific situation <br /> shall be taken into account and due regard shall be given to the following:<br /> EN<br /> OJ L, 12.7.2024<br /> 116/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>(a) the nature, gravity and duration of the infringement and of its consequences, taking into account the purpose of the AI <br /> system concerned, as well as, where appropriate, the number of affected persons and the level of damage suffered by <br /> them;<br /> (b) the degree of responsibility of the Union institution, body, office or agency, taking into account technical and <br /> organisational measures implemented by them;<br /> (c) any action taken by the Union institution, body, office or agency to mitigate the damage suffered by affected persons;<br /> (d) the degree of cooperation with the European Data Protection Supervisor in order to remedy the infringement and <br /> mitigate the possible adverse effects of the infringement, including compliance with any of the measures previously <br /> ordered by the European Data Protection Supervisor against the Union institution, body, office or agency concerned <br /> with regard to the same subject matter;<br /> (e) any similar previous infringements by the Union institution, body, office or agency;<br /> (f) the manner in which the infringement became known to the European Data Protection Supervisor, in particular <br /> whether, and if so to what extent, the Union institution, body, office or agency notified the infringement;<br /> (g) the annual budget of the Union institution, body, office or agency.<br /> 2.<br /> Non-compliance with the prohibition of the AI practices referred to in Article 5 shall be subject to administrative <br /> fines of up to EUR 1 500 000.<br /> 3.</p>
Show original text

Penalties for breaking AI rules:

  1. Breaking the banned AI practices listed in Article 5 will result in fines up to EUR 1,500,000.

  2. If an AI system fails to meet other requirements in this regulation (not Article 5), the fine can be up to EUR 750,000.

  3. Before issuing a penalty, the European Data Protection Supervisor must allow the organization being investigated to respond to the charges. Decisions can only be based on facts that both sides have discussed. Any complainants must be included in the process.

  4. The organization has full rights to defend itself. They can see all evidence against them, except for information protected by privacy or business confidentiality laws.

  5. Money from fines goes to the EU's general budget. Fines will not prevent the organization from operating effectively.

  6. Every year, the European Data Protection Supervisor must report to the Commission about all fines issued and any legal cases started under this rule.

Article 101 covers fines for companies that provide general-purpose AI models.

<p>of the Union institution, body, office or agency.<br /> 2.<br /> Non-compliance with the prohibition of the AI practices referred to in Article 5 shall be subject to administrative <br /> fines of up to EUR 1 500 000.<br /> 3.<br /> The non-compliance of the AI system with any requirements or obligations under this Regulation, other than those <br /> laid down in Article 5, shall be subject to administrative fines of up to EUR 750 000.<br /> 4.<br /> Before taking decisions pursuant to this Article, the European Data Protection Supervisor shall give the Union <br /> institution, body, office or agency which is the subject of the proceedings conducted by the European Data Protection <br /> Supervisor the opportunity of being heard on the matter regarding the possible infringement. The European Data <br /> Protection Supervisor shall base his or her decisions only on elements and circumstances on which the parties concerned <br /> have been able to comment. Complainants, if any, shall be associated closely with the proceedings.<br /> 5.<br /> The rights of defence of the parties concerned shall be fully respected in the proceedings. They shall be entitled to <br /> have access to the European Data Protection Supervisor’s file, subject to the legitimate interest of individuals or <br /> undertakings in the protection of their personal data or business secrets.<br /> 6.<br /> Funds collected by imposition of fines in this Article shall contribute to the general budget of the Union. The fines <br /> shall not affect the effective operation of the Union institution, body, office or agency fined.<br /> 7.<br /> The European Data Protection Supervisor shall, on an annual basis, notify the Commission of the administrative fines <br /> it has imposed pursuant to this Article and of any litigation or judicial proceedings it has initiated.<br /> Article 101<br /> Fines for providers of general-purpose AI models<br /> 1.</p>
Show original text

The Supervisor must inform the Commission once a year about any fines it has issued under this Article and any legal cases it has started.

Article 101: Fines for General-Purpose AI Model Providers

  1. The Commission can fine providers of general-purpose AI models up to 3% of their yearly worldwide revenue (from the previous year) or EUR 15,000,000, whichever amount is larger. The Commission can impose these fines when it finds that a provider intentionally or carelessly:
    (a) broke the rules in this Regulation;
    (b) did not respond to a request for documents or information (Article 91), or gave false, incomplete, or misleading information;
    (c) did not follow an order from the Commission (Article 93);
    (d) did not let the Commission access the AI model to test it (Article 92).

When deciding the fine amount, the Commission must consider how serious the violation was, how long it lasted, and whether the punishment is fair and appropriate. The Commission will also consider any promises the provider made (Article 93(3)) or commitments in industry guidelines (Article 56).

  1. Before issuing a fine, the Commission must tell the provider what it found and give the provider a chance to respond.

  2. All fines must be effective, fair, and strong enough to discourage violations.

  3. The Commission must also inform the Board about any fines imposed.

<p>Supervisor shall, on an annual basis, notify the Commission of the administrative fines <br /> it has imposed pursuant to this Article and of any litigation or judicial proceedings it has initiated.<br /> Article 101<br /> Fines for providers of general-purpose AI models<br /> 1.<br /> The Commission may impose on providers of general-purpose AI models fines not exceeding 3 % of their annual total <br /> worldwide turnover in the preceding financial year or EUR 15 000 000, whichever is higher., when the Commission finds <br /> that the provider intentionally or negligently:<br /> (a) infringed the relevant provisions of this Regulation;<br /> (b) failed to comply with a request for a document or for information pursuant to Article 91, or supplied incorrect, <br /> incomplete or misleading information;<br /> (c) failed to comply with a measure requested under Article 93;<br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 117/144</p> <p>(d) failed to make available to the Commission access to the general-purpose AI model or general-purpose AI model with <br /> systemic risk with a view to conducting an evaluation pursuant to Article 92.<br /> In fixing the amount of the fine or periodic penalty payment, regard shall be had to the nature, gravity and duration of the <br /> infringement, taking due account of the principles of proportionality and appropriateness. The Commission shall also into <br /> account commitments made in accordance with Article 93(3) or made in relevant codes of practice in accordance with <br /> Article 56.<br /> 2.<br /> Before adopting the decision pursuant to paragraph 1, the Commission shall communicate its preliminary findings to <br /> the provider of the general-purpose AI model and give it an opportunity to be heard.<br /> 3.<br /> Fines imposed in accordance with this Article shall be effective, proportionate and dissuasive.<br /> 4.<br /> Information on fines imposed under this Article shall also be communicated to the Board as appropriate.<br /> 5.</p>
Show original text

Fines must be effective, fair, and strong enough to discourage violations. Information about fines will be shared with the Board when necessary. The Court of Justice of the European Union can review any fines set by the Commission and has the power to cancel, reduce, or increase them. The Commission will create detailed rules and procedures for cases that may result in fines. These rules will follow the examination procedure described in Article 98(2). When creating technical rules for security equipment that uses Artificial Intelligence systems under Regulation (EU) 2024/1689, the requirements from Chapter III, Section 2 of that regulation must be followed. Regulation (EU) 2024/1689, passed on June 13, 2024, sets unified rules for artificial intelligence and updates several existing regulations and directives.

<p>give it an opportunity to be heard.<br /> 3.<br /> Fines imposed in accordance with this Article shall be effective, proportionate and dissuasive.<br /> 4.<br /> Information on fines imposed under this Article shall also be communicated to the Board as appropriate.<br /> 5.<br /> The Court of Justice of the European Union shall have unlimited jurisdiction to review decisions of the Commission <br /> fixing a fine under this Article. It may cancel, reduce or increase the fine imposed.<br /> 6.<br /> The Commission shall adopt implementing acts containing detailed arrangements and procedural safeguards for <br /> proceedings in view of the possible adoption of decisions pursuant to paragraph 1 of this Article. Those implementing acts <br /> shall be adopted in accordance with the examination procedure referred to in Article 98(2).<br /> CHAPTER XIII<br /> FINAL PROVISIONS<br /> Article 102<br /> Amendment to Regulation (EC) No 300/2008<br /> In Article 4(3) of Regulation (EC) No 300/2008, the following subparagraph is added:<br /> ‘When adopting detailed measures related to technical specifications and procedures for approval and use of security <br /> equipment concerning Artificial Intelligence systems within the meaning of Regulation (EU) 2024/1689 of the European <br /> Parliament and of the Council (<em>), the requirements set out in Chapter III, Section 2, of that Regulation shall be taken into <br /> account. <br /> (</em>)<br /> Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised <br /> rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, <br /> (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) <br /> 2020/1828 (Art</p>
Show original text

Article 103 updates Regulation (EU) No 167/2013. When creating new rules under Article 17(5) about artificial intelligence systems that serve as safety components under the EU's Artificial Intelligence Act (Regulation (EU) 2024/1689 from June 13, 2024), the requirements from Chapter III, Section 2 of that Act must be followed. This Artificial Intelligence Act sets harmonized rules for AI across the EU and modifies several existing regulations including those on aviation safety, agricultural machinery, motorcycles, vehicle emissions, and maritime safety.

<p>(EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) <br /> 2020/1828 (Artificial Intelligence Act) (OJ L, 2024/1689, 12.7.2024, ELI: http://data.europa.eu/eli/reg/ <br /> 2024/1689/oj).’.<br /> Article 103<br /> Amendment to Regulation (EU) No 167/2013<br /> In Article 17(5) of Regulation (EU) No 167/2013, the following subparagraph is added:<br /> ‘When adopting delegated acts pursuant to the first subparagraph concerning artificial intelligence systems which are safety <br /> components within the meaning of Regulation (EU) 2024/1689 of the European Parliament and of the Council (<em>), the <br /> requirements set out in Chapter III, Section 2, of that Regulation shall be taken into account. <br /> (</em>)<br /> Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised <br /> rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, <br /> (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) <br /> 2020/1828 (Artificial Intelligence Act) (OJ L, 2024/1689, 12.7.2024, ELI: http://data.europa.eu/eli/reg/ <br /> 2024/1689/oj).’.<br /> EN<br /> OJ L, 12.7.</p>
Show original text

Article 104 updates Regulation (EU) No 168/2013. It adds a new requirement to Article 22(5) stating that when the European Commission creates delegated acts about Artificial Intelligence systems that serve as safety components under the EU's Artificial Intelligence Act (Regulation 2024/1689, adopted June 13, 2024), those acts must follow the requirements outlined in Chapter III, Section 2 of the Artificial Intelligence Act. The Artificial Intelligence Act also amends several other EU regulations and directives related to aviation, vehicles, and maritime safety.

<p>2024/1689, 12.7.2024, ELI: http://data.europa.eu/eli/reg/ <br /> 2024/1689/oj).’.<br /> EN<br /> OJ L, 12.7.2024<br /> 118/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>Article 104<br /> Amendment to Regulation (EU) No 168/2013<br /> In Article 22(5) of Regulation (EU) No 168/2013, the following subparagraph is added:<br /> ‘When adopting delegated acts pursuant to the first subparagraph concerning Artificial Intelligence systems which are safety <br /> components within the meaning of Regulation (EU) 2024/1689 of the European Parliament and of the Council (<em>), the <br /> requirements set out in Chapter III, Section 2, of that Regulation shall be taken into account. <br /> (</em>)<br /> Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised <br /> rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, <br /> (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) <br /> 2020/1828 (Artificial Intelligence Act) (OJ L, 2024/1689, 12.7.2024, ELI: http://data.europa.eu/eli/reg/ <br /> 2024/1689/oj).’.</p>
Show original text

This document makes changes to two EU directives to include requirements for artificial intelligence systems. First, it updates Directive 2014/90/EU by adding a new rule stating that when the European Commission develops technical specifications and testing standards, it must consider the AI safety requirements from the new EU Artificial Intelligence Act (Regulation 2024/1689), specifically those in Chapter III, Section 2 of that Act. Second, it begins to update Directive (EU) 2016/797 by adding a new paragraph 12 to Article 5, though the specific content of this addition is not shown in the provided text.

<p>(Artificial Intelligence Act) (OJ L, 2024/1689, 12.7.2024, ELI: http://data.europa.eu/eli/reg/ <br /> 2024/1689/oj).’.<br /> Article 105<br /> Amendment to Directive 2014/90/EU<br /> In Article 8 of Directive 2014/90/EU, the following paragraph is added:<br /> ‘5.<br /> For Artificial Intelligence systems which are safety components within the meaning of Regulation (EU) 2024/1689 of <br /> the European Parliament and of the Council (<em>), when carrying out its activities pursuant to paragraph 1 and when adopting <br /> technical specifications and testing standards in accordance with paragraphs 2 and 3, the Commission shall take into <br /> account the requirements set out in Chapter III, Section 2, of that Regulation. <br /> (</em>)<br /> Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised <br /> rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, <br /> (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) <br /> 2020/1828 (Artificial Intelligence Act) (OJ L, 2024/1689, 12.7.2024, ELI: http://data.europa.eu/eli/reg/ <br /> 2024/1689/oj).’.<br /> Article 106<br /> Amendment to Directive (EU) 2016/797<br /> In Article 5 of Directive (EU) 2016/797, the following paragraph is added:<br /> ‘12.</p>
Show original text

Article 106 updates Directive (EU) 2016/797 by adding a new paragraph 12 to Article 5. This paragraph requires that when creating delegated acts and implementing acts related to Artificial Intelligence systems that function as safety components under Regulation (EU) 2024/1689, the requirements from Chapter III, Section 2 of that Regulation must be followed. Regulation (EU) 2024/1689, adopted on June 13, 2024, establishes harmonized rules for artificial intelligence and modifies several existing EU regulations and directives. Article 107 makes a similar amendment to Regulation (EU) 2018/858 by adding a new paragraph 4 to Article 5.

<p>4/1689/oj).’.<br /> Article 106<br /> Amendment to Directive (EU) 2016/797<br /> In Article 5 of Directive (EU) 2016/797, the following paragraph is added:<br /> ‘12.<br /> When adopting delegated acts pursuant to paragraph 1 and implementing acts pursuant to paragraph 11 <br /> concerning Artificial Intelligence systems which are safety components within the meaning of Regulation (EU) 2024/1689 <br /> of the European Parliament and of the Council (<em>), the requirements set out in Chapter III, Section 2, of that Regulation shall <br /> be taken into account. <br /> (</em>)<br /> Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised <br /> rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, <br /> (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) <br /> 2020/1828 (Artificial Intelligence Act) (OJ L, 2024/1689, 12.7.2024, ELI: http://data.europa.eu/eli/reg/ <br /> 2024/1689/oj).’.<br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 119/144</p> <p>Article 107<br /> Amendment to Regulation (EU) 2018/858<br /> In Article 5 of Regulation (EU) 2018/858 the following paragraph is added:<br /> ‘4.</p>
Show original text

Article 107 updates Regulation (EU) 2018/858 by adding a new paragraph 4 to Article 5. This new paragraph states that when the European Commission creates delegated acts about Artificial Intelligence systems that function as safety components under the EU's Artificial Intelligence Act (Regulation (EU) 2024/1689 from June 13, 2024), it must follow the requirements outlined in Chapter III, Section 2 of that Act.

Article 108 makes changes to Regulation (EU) 2018/1139. Specifically, it adds a new paragraph 3 to Article 17 of that regulation. (The full text of this addition is not provided in the excerpt.)

<p>/1689/oj<br /> 119/144</p> <p>Article 107<br /> Amendment to Regulation (EU) 2018/858<br /> In Article 5 of Regulation (EU) 2018/858 the following paragraph is added:<br /> ‘4.<br /> When adopting delegated acts pursuant to paragraph 3 concerning Artificial Intelligence systems which are safety <br /> components within the meaning of Regulation (EU) 2024/1689 of the European Parliament and of the Council (<em>), the <br /> requirements set out in Chapter III, Section 2, of that Regulation shall be taken into account. <br /> (</em>)<br /> Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised <br /> rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, <br /> (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) <br /> 2020/1828 (Artificial Intelligence Act) (OJ L, 2024/1689, 12.7.2024, ELI: http://data.europa.eu/eli/reg/ <br /> 2024/1689/oj).’.<br /> Article 108<br /> Amendments to Regulation (EU) 2018/1139<br /> Regulation (EU) 2018/1139 is amended as follows:<br /> (1) in Article 17, the following paragraph is added:<br /> ‘3.</p>
Show original text

Regulation (EU) 2018/1139 is being updated in three ways: First, Article 17 is amended to add a new paragraph stating that when creating implementing acts about Artificial Intelligence systems that function as safety components under Regulation (EU) 2024/1689 (the AI Act passed on June 13, 2024), the requirements from Chapter III, Section 2 of that AI Act must be considered. Second, Article 19 is amended to add a new paragraph requiring that when adopting delegated acts concerning Artificial Intelligence systems that are safety components under Regulation (EU) 2024/1689, the same requirements from Chapter III, Section 2 of the AI Act must be taken into account. Third, Article 43 is amended by adding a new paragraph (the text of which is not provided in this excerpt).

<p>108<br /> Amendments to Regulation (EU) 2018/1139<br /> Regulation (EU) 2018/1139 is amended as follows:<br /> (1) in Article 17, the following paragraph is added:<br /> ‘3.<br /> Without prejudice to paragraph 2, when adopting implementing acts pursuant to paragraph 1 concerning <br /> Artificial Intelligence systems which are safety components within the meaning of Regulation (EU) 2024/1689 of the <br /> European Parliament and of the Council (<em>), the requirements set out in Chapter III, Section 2, of that Regulation shall be <br /> taken into account. <br /> (</em>)<br /> Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down <br /> harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) <br /> No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 <br /> and (EU) 2020/1828 (Artificial Intelligence Act) (OJ L, 2024/1689, 12.7.2024, ELI: http://data.europa. <br /> eu/eli/reg/2024/1689/oj).’;<br /> (2) in Article 19, the following paragraph is added:<br /> ‘4.<br /> When adopting delegated acts pursuant to paragraphs 1 and 2 concerning Artificial Intelligence systems which <br /> are safety components within the meaning of Regulation (EU) 2024/1689, the requirements set out in Chapter III, <br /> Section 2, of that Regulation shall be taken into account.’;<br /> (3) in Article 43, the following paragraph is added:<br /> ‘4.</p>
Show original text

This document outlines amendments to several articles that require compliance with specific safety requirements when dealing with Artificial Intelligence systems that function as safety components under EU Regulation 2024/1689. Specifically: Article 43 must consider Chapter III, Section 2 requirements when adopting implementing acts for AI safety components. Article 47 must consider these same requirements when adopting delegated acts for AI safety components. Article 57 must apply these requirements when adopting implementing acts for AI safety components. Article 58 must consider Chapter III, Section 2 requirements when adopting delegated acts for AI safety components. All amendments ensure that whenever EU regulations address AI systems used as safety components, they must follow the safety standards outlined in Chapter III, Section 2 of Regulation (EU) 2024/1689.

<p>(EU) 2024/1689, the requirements set out in Chapter III, <br /> Section 2, of that Regulation shall be taken into account.’;<br /> (3) in Article 43, the following paragraph is added:<br /> ‘4.<br /> When adopting implementing acts pursuant to paragraph 1 concerning Artificial Intelligence systems which are <br /> safety components within the meaning of Regulation (EU) 2024/1689, the requirements set out in Chapter III, <br /> Section 2, of that Regulation shall be taken into account.’;<br /> (4) in Article 47, the following paragraph is added:<br /> ‘3.<br /> When adopting delegated acts pursuant to paragraphs 1 and 2 concerning Artificial Intelligence systems which <br /> are safety components within the meaning of Regulation (EU) 2024/1689, the requirements set out in Chapter III, <br /> Section 2, of that Regulation shall be taken into account.’;<br /> (5) in Article 57, the following subparagraph is added:<br /> ‘When adopting those implementing acts concerning Artificial Intelligence systems which are safety components within <br /> the meaning of Regulation (EU) 2024/1689, the requirements set out in Chapter III, Section 2, of that Regulation shall <br /> be taken into account.’;<br /> EN<br /> OJ L, 12.7.2024<br /> 120/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>(6) in Article 58, the following paragraph is added:<br /> ‘3.<br /> When adopting delegated acts pursuant to paragraphs 1 and 2 concerning Artificial Intelligence systems which <br /> are safety components within the meaning of Regulation (EU) 2024/1689, the requirements set out in Chapter III, <br /> Section 2, of that Regulation shall be taken into account.’.</p>
Show original text

Article 109 updates Regulation (EU) 2019/2144 by adding a new paragraph 3 to Article 11. This new paragraph states that when creating implementing rules under paragraph 2 for artificial intelligence systems that function as safety components (as defined in Regulation (EU) 2024/1689), the requirements from Chapter III, Section 2 of that regulation must be followed. Regulation (EU) 2024/1689, adopted by the European Parliament and Council on June 13, 2024, is the Artificial Intelligence Act. It establishes harmonized rules for artificial intelligence and modifies several existing EU regulations and directives related to aviation, vehicles, maritime safety, and rail transport.

<p>and 2 concerning Artificial Intelligence systems which <br /> are safety components within the meaning of Regulation (EU) 2024/1689, the requirements set out in Chapter III, <br /> Section 2, of that Regulation shall be taken into account.’.<br /> Article 109<br /> Amendment to Regulation (EU) 2019/2144<br /> In Article 11 of Regulation (EU) 2019/2144, the following paragraph is added:<br /> ‘3.<br /> When adopting the implementing acts pursuant to paragraph 2, concerning artificial intelligence systems which are <br /> safety components within the meaning of Regulation (EU) 2024/1689 of the European Parliament and of the Council (<em>), <br /> the requirements set out in Chapter III, Section 2, of that Regulation shall be taken into account. <br /> (</em>)<br /> Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised <br /> rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, <br /> (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) <br /> 2020/1828 (Artificial Intelligence Act) (OJ L, 2024/1689, 12.7.2024, ELI: http://data.europa.eu/eli/reg/ <br /> 2024/1689/oj).’.</p>
Show original text

The European Union has created new rules for artificial intelligence (AI) called the Artificial Intelligence Act, which was officially published on July 12, 2024. This act updates several existing EU regulations and directives to include AI oversight. One of these updates modifies Directive 2020/1828 by adding a reference to the new AI Act. For AI systems that were already being used or sold before August 2, 2027, companies have until December 31, 2030 to make sure they follow the new AI rules. This applies to AI systems that are part of large computer systems created by specific EU laws listed in the regulation.

<p>(Artificial Intelligence Act) (OJ L, 2024/1689, 12.7.2024, ELI: http://data.europa.eu/eli/reg/ <br /> 2024/1689/oj).’.<br /> Article 110<br /> Amendment to Directive (EU) 2020/1828<br /> In Annex I to Directive (EU) 2020/1828 of the European Parliament and of the Council (58), the following point is added:<br /> ‘(68) Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised <br /> rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, <br /> (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) <br /> 2020/1828 (Artificial Intelligence Act) (OJ L, 2024/1689, 12.7.2024, ELI: http://data.europa.eu/eli/reg/ <br /> 2024/1689/oj).’.<br /> Article 111<br /> AI systems already placed on the market or put into service and general-purpose AI models already placed on the <br /> marked<br /> 1.<br /> Without prejudice to the application of Article 5 as referred to in Article 113(3), point (a), AI systems which are <br /> components of the large-scale IT systems established by the legal acts listed in Annex X that have been placed on the market <br /> or put into service before 2 August 2027 shall be brought into compliance with this Regulation by 31 December 2030.</p>
Show original text

Large-scale IT systems listed in Annex X that were already in use before August 2, 2027 must follow this Regulation by December 31, 2030. When these systems are evaluated or updated, the Regulation's requirements must be considered. High-risk AI systems (other than those mentioned above) that were in use before August 2, 2026 must follow this Regulation only if they undergo major design changes after that date. However, companies providing or using high-risk AI systems for government agencies must comply with all Regulation requirements by August 2, 2030. Companies that released general-purpose AI models before August 2, 2025 must meet the Regulation's requirements by August 2, 2027. This Regulation references Directive (EU) 2020/1828 from November 25, 2020, which allows representative legal actions to protect consumer interests and replaces Directive 2009/22/EC.

<p>of the large-scale IT systems established by the legal acts listed in Annex X that have been placed on the market <br /> or put into service before 2 August 2027 shall be brought into compliance with this Regulation by 31 December 2030.<br /> The requirements laid down in this Regulation shall be taken into account in the evaluation of each large-scale IT system <br /> established by the legal acts listed in Annex X to be undertaken as provided for in those legal acts and where those legal acts <br /> are replaced or amended.<br /> 2.<br /> Without prejudice to the application of Article 5 as referred to in Article 113(3), point (a), this Regulation shall apply <br /> to operators of high-risk AI systems, other than the systems referred to in paragraph 1 of this Article, that have been placed <br /> on the market or put into service before 2 August 2026, only if, as from that date, those systems are subject to significant <br /> changes in their designs. In any case, the providers and deployers of high-risk AI systems intended to be used by public <br /> authorities shall take the necessary steps to comply with the requirements and obligations of this Regulation by 2 August <br /> 2030.<br /> 3.<br /> Providers of general-purpose AI models that have been placed on the market before 2 August 2025 shall take the <br /> necessary steps in order to comply with the obligations laid down in this Regulation by 2 August 2027.<br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 121/144<br /> (58)<br /> Directive (EU) 2020/1828 of the European Parliament and of the Council of 25 November 2020 on representative actions for the <br /> protection of the collective interests of consumers and repealing Directive 2009/22/EC (OJ L 409, 4.12.2020, p.</p>
Show original text

This regulation, approved by the Council on 25 November 2020, protects consumers' collective interests through representative legal actions and replaces an earlier directive from 2009.

Article 112 outlines review and evaluation requirements:

  1. Starting from when this regulation takes effect and continuing until 2027, the European Commission must review once per year whether the list of banned AI practices (in Article 5) and the list in Annex III need changes. The Commission must report its findings to the European Parliament and Council.

  2. By 2 August 2028 and every four years after that, the Commission must evaluate and report on: (a) whether new categories should be added to Annex III or existing ones expanded; (b) whether AI systems needing extra transparency rules (Article 50) should be updated; and (c) whether the supervision and governance system can work better.

  3. By 2 August 2029 and every four years after that, the Commission must submit a full review report to the European Parliament and Council. This report must assess whether the current enforcement structure is working and whether a new EU agency is needed. If problems are found, the Commission may propose changes to this regulation. All reports must be made public.

<p>Council of 25 November 2020 on representative actions for the <br /> protection of the collective interests of consumers and repealing Directive 2009/22/EC (OJ L 409, 4.12.2020, p. 1).</p> <p>Article 112<br /> Evaluation and review<br /> 1.<br /> The Commission shall assess the need for amendment of the list set out in Annex III and of the list of prohibited AI <br /> practices laid down in Article 5, once a year following the entry into force of this Regulation, and until the end of the period <br /> of the delegation of power laid down in Article 97. The Commission shall submit the findings of that assessment to the <br /> European Parliament and the Council.<br /> 2.<br /> By 2 August 2028 and every four years thereafter, the Commission shall evaluate and report to the European <br /> Parliament and to the Council on the following:<br /> (a) the need for amendments extending existing area headings or adding new area headings in Annex III;<br /> (b) amendments to the list of AI systems requiring additional transparency measures in Article 50;<br /> (c) amendments enhancing the effectiveness of the supervision and governance system.<br /> 3.<br /> By 2 August 2029 and every four years thereafter, the Commission shall submit a report on the evaluation and review <br /> of this Regulation to the European Parliament and to the Council. The report shall include an assessment with regard to the <br /> structure of enforcement and the possible need for a Union agency to resolve any identified shortcomings. On the basis of <br /> the findings, that report shall, where appropriate, be accompanied by a proposal for amendment of this Regulation. The <br /> reports shall be made public.<br /> 4.</p>
Show original text

The Commission will publish reports on how well this Regulation is working. If problems are found, the reports may include proposals to change this Regulation or create a new EU agency to fix the issues.

These reports must specifically examine:
(a) Whether national authorities have enough money, staff, and equipment to do their jobs under this Regulation;
(b) What penalties and fines Member States have given for breaking this Regulation;
(c) What standard rules and specifications have been created to support this Regulation;
(d) How many new companies have entered the market since this Regulation started, including how many are small or medium-sized businesses.

By August 2, 2028, the Commission will review how well the AI Office is working. It will check if the AI Office has enough power and resources to do its job, and whether it needs more staff, funding, or authority to properly enforce this Regulation. The Commission will send this evaluation report to the European Parliament and Council.

By August 2, 2028, and then every four years after that, the Commission will report on progress in developing standards for energy-efficient AI models. It will assess whether new rules or actions are needed. This report will be sent to the European Parliament and Council and made public.

<p>and the possible need for a Union agency to resolve any identified shortcomings. On the basis of <br /> the findings, that report shall, where appropriate, be accompanied by a proposal for amendment of this Regulation. The <br /> reports shall be made public.<br /> 4.<br /> The reports referred to in paragraph 2 shall pay specific attention to the following:<br /> (a) the status of the financial, technical and human resources of the national competent authorities in order to effectively <br /> perform the tasks assigned to them under this Regulation;<br /> (b) the state of penalties, in particular administrative fines as referred to in Article 99(1), applied by Member States for <br /> infringements of this Regulation;<br /> (c) adopted harmonised standards and common specifications developed to support this Regulation;<br /> (d) the number of undertakings that enter the market after the entry into application of this Regulation, and how many of <br /> them are SMEs.<br /> 5.<br /> By 2 August 2028, the Commission shall evaluate the functioning of the AI Office, whether the AI Office has been <br /> given sufficient powers and competences to fulfil its tasks, and whether it would be relevant and needed for the proper <br /> implementation and enforcement of this Regulation to upgrade the AI Office and its enforcement competences and to <br /> increase its resources. The Commission shall submit a report on its evaluation to the European Parliament and to the <br /> Council.<br /> 6.<br /> By 2 August 2028 and every four years thereafter, the Commission shall submit a report on the review of the progress <br /> on the development of standardisation deliverables on the energy-efficient development of general-purpose AI models, and <br /> asses the need for further measures or actions, including binding measures or actions. The report shall be submitted to the <br /> European Parliament and to the Council, and it shall be made public.<br /> 7.</p>
Show original text

The European Commission must review how well voluntary guidelines are working for non-high-risk AI systems by August 2, 2028, and then every three years after that. These reviews will check if the guidelines help meet requirements in Chapter III, Section 2, including environmental sustainability standards. The Commission will share its findings with the European Parliament and the Council, and make the report public. To conduct these reviews, the Commission can request information from the AI Board, Member States, and national authorities, who must respond promptly. When evaluating progress, the Commission will consider input from the AI Board, European Parliament, Council, and other relevant organizations. If needed, the Commission will propose changes to this regulation, especially based on new technology developments, impacts on health and safety, effects on fundamental rights, and progress in the digital society.

<p>energy-efficient development of general-purpose AI models, and <br /> asses the need for further measures or actions, including binding measures or actions. The report shall be submitted to the <br /> European Parliament and to the Council, and it shall be made public.<br /> 7.<br /> By 2 August 2028 and every three years thereafter, the Commission shall evaluate the impact and effectiveness of <br /> voluntary codes of conduct to foster the application of the requirements set out in Chapter III, Section 2 for AI systems <br /> other than high-risk AI systems and possibly other additional requirements for AI systems other than high-risk AI systems, <br /> including as regards environmental sustainability.<br /> 8.<br /> For the purposes of paragraphs 1 to 7, the Board, the Member States and national competent authorities shall provide <br /> the Commission with information upon its request and without undue delay.<br /> 9.<br /> In carrying out the evaluations and reviews referred to in paragraphs 1 to 7, the Commission shall take into account <br /> the positions and findings of the Board, of the European Parliament, of the Council, and of other relevant bodies or sources.<br /> EN<br /> OJ L, 12.7.2024<br /> 122/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>10.<br /> The Commission shall, if necessary, submit appropriate proposals to amend this Regulation, in particular taking into <br /> account developments in technology, the effect of AI systems on health and safety, and on fundamental rights, and in light <br /> of the state of progress in the information society.<br /> 11.</p>
Show original text

The AI Office will develop a clear and inclusive method to evaluate how risky different AI systems are. This evaluation will be based on specific criteria and will help decide which AI systems should be added to the official lists of: high-risk AI systems (Annex III), banned AI practices (Article 5), and AI systems needing extra transparency rules (Article 50). When updating this regulation, the Commission must consider how technology changes, how AI affects health, safety, and human rights, and the progress of the digital society. Any changes to this regulation must respect the specific rules and oversight systems already in place for different industries listed in Annex I. By August 2, 2031, the Commission must review how well this regulation is being enforced and report to the European Parliament, Council, and European Economic and Social Committee. Based on this review, the Commission may propose changes to improve enforcement or suggest creating a new EU agency if needed. This regulation becomes official twenty days after it is published in the Official Journal of the European Union and will start being used on August 2, 2026.

<p>, submit appropriate proposals to amend this Regulation, in particular taking into <br /> account developments in technology, the effect of AI systems on health and safety, and on fundamental rights, and in light <br /> of the state of progress in the information society.<br /> 11.<br /> To guide the evaluations and reviews referred to in paragraphs 1 to 7 of this Article, the AI Office shall undertake to <br /> develop an objective and participative methodology for the evaluation of risk levels based on the criteria outlined in the <br /> relevant Articles and the inclusion of new systems in:<br /> (a) the list set out in Annex III, including the extension of existing area headings or the addition of new area headings in <br /> that Annex;<br /> (b) the list of prohibited practices set out in Article 5; and<br /> (c) the list of AI systems requiring additional transparency measures pursuant to Article 50.<br /> 12.<br /> Any amendment to this Regulation pursuant to paragraph 10, or relevant delegated or implementing acts, which <br /> concerns sectoral Union harmonisation legislation listed in Section B of Annex I shall take into account the regulatory <br /> specificities of each sector, and the existing governance, conformity assessment and enforcement mechanisms and <br /> authorities established therein.<br /> 13.<br /> By 2 August 2031, the Commission shall carry out an assessment of the enforcement of this Regulation and shall <br /> report on it to the European Parliament, the Council and the European Economic and Social Committee, taking into <br /> account the first years of application of this Regulation. On the basis of the findings, that report shall, where appropriate, be <br /> accompanied by a proposal for amendment of this Regulation with regard to the structure of enforcement and the need for <br /> a Union agency to resolve any identified shortcomings.<br /> Article 113<br /> Entry into force and application<br /> This Regulation shall enter into force on the twentieth day following that of its publication in the Official Journal of the <br /> European Union.<br /> It shall apply from 2 August 2026.</p>
Show original text

Article 113: When This Regulation Takes Effect

This Regulation will become official 20 days after it is published in the Official Journal of the European Union. It will start being used on August 2, 2026.

However, some parts start earlier:
- Chapters I and II begin on February 2, 2025
- Chapter III Section 4, Chapter V, Chapter VII, Chapter XII, and Article 78 begin on August 2, 2025 (except Article 101)
- Article 6(1) and its related requirements begin on August 2, 2027

This Regulation is binding and applies directly to all EU Member States.

Signed in Brussels on June 13, 2024 by the President of the European Parliament (R. Metsola) and the President of the Council (M. Michel).

Annex I: List of EU Laws That Set Common Standards

Section A includes:
1. Directive 2006/42/EC (May 17, 2006) - Rules for machinery safety
2. Directive 2009/48/EC (June 18, 2009) - Rules for toy safety

<p>any identified shortcomings.<br /> Article 113<br /> Entry into force and application<br /> This Regulation shall enter into force on the twentieth day following that of its publication in the Official Journal of the <br /> European Union.<br /> It shall apply from 2 August 2026.<br /> However:<br /> (a) Chapters I and II shall apply from 2 February 2025;<br /> (b) Chapter III Section 4, Chapter V, Chapter VII and Chapter XII and Article 78 shall apply from 2 August 2025, with the <br /> exception of Article 101;<br /> (c) Article 6(1) and the corresponding obligations in this Regulation shall apply from 2 August 2027.<br /> This Regulation shall be binding in its entirety and directly applicable in all Member States.<br /> Done at Brussels, 13 June 2024.<br /> For the European Parliament<br /> The President<br /> R. METSOLA<br /> For the Council<br /> The President<br /> M. MICHEL<br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 123/144</p> <p>ANNEX I<br /> List of Union harmonisation legislation<br /> Section A. List of Union harmonisation legislation based on the New Legislative Framework<br /> 1.<br /> Directive 2006/42/EC of the European Parliament and of the Council of 17 May 2006 on machinery, and amending <br /> Directive 95/16/EC (OJ L 157, 9.6.2006, p. 24);<br /> 2.<br /> Directive 2009/48/EC of the European Parliament and of the Council of 18 June 2009 on the safety of toys (OJ <br /> L 170, 30.6.2009, p. 1);<br /> 3.</p>
Show original text

This document lists eight European Union directives that set safety and quality standards across member countries:

  1. Directive 2009/48/EC (June 18, 2009) - Establishes safety requirements for toys.

  2. Directive 2013/53/EU (November 20, 2013) - Sets rules for recreational boats and personal watercraft, replacing the earlier Directive 94/25/EC.

  3. Directive 2014/33/EU (February 26, 2014) - Harmonizes lift (elevator) safety standards and safety components across EU member states.

  4. Directive 2014/34/EU (February 26, 2014) - Harmonizes standards for equipment and safety systems used in potentially explosive environments.

  5. Directive 2014/53/EU (April 16, 2014) - Harmonizes rules for radio equipment sold in the EU market, replacing Directive 1999/5/EC.

  6. Directive 2014/68/EU (May 15, 2014) - Harmonizes standards for pressure equipment sold in the EU market.

Each directive includes an official journal reference with publication date and page number.

<p>2009/48/EC of the European Parliament and of the Council of 18 June 2009 on the safety of toys (OJ <br /> L 170, 30.6.2009, p. 1);<br /> 3.<br /> Directive 2013/53/EU of the European Parliament and of the Council of 20 November 2013 on recreational craft <br /> and personal watercraft and repealing Directive 94/25/EC (OJ L 354, 28.12.2013, p. 90);<br /> 4.<br /> Directive 2014/33/EU of the European Parliament and of the Council of 26 February 2014 on the harmonisation of <br /> the laws of the Member States relating to lifts and safety components for lifts (OJ L 96, 29.3.2014, p. 251);<br /> 5.<br /> Directive 2014/34/EU of the European Parliament and of the Council of 26 February 2014 on the harmonisation of <br /> the laws of the Member States relating to equipment and protective systems intended for use in potentially explosive <br /> atmospheres (OJ L 96, 29.3.2014, p. 309);<br /> 6.<br /> Directive 2014/53/EU of the European Parliament and of the Council of 16 April 2014 on the harmonisation of the <br /> laws of the Member States relating to the making available on the market of radio equipment and repealing Directive <br /> 1999/5/EC (OJ L 153, 22.5.2014, p. 62);<br /> 7.<br /> Directive 2014/68/EU of the European Parliament and of the Council of 15 May 2014 on the harmonisation of the <br /> laws of the Member States relating to the making available on the market of pressure equipment (OJ L 189, <br /> 27.6.2014, p. 164);<br /> 8.</p>
Show original text

This document lists European Union regulations that set safety and quality standards for various products:

  1. A May 15, 2014 regulation that makes pressure equipment laws the same across all EU member countries.

  2. A March 9, 2016 regulation about cableway installations (like ski lifts and cable cars), which replaced an older 2000 directive.

  3. A March 9, 2016 regulation about personal protective equipment (like safety gear), which replaced an older 1989 directive.

  4. A March 9, 2016 regulation about appliances that use gas as fuel, which replaced an older 2009 directive.

  5. An April 5, 2017 regulation about medical devices, which updated and replaced several older directives and regulations from 1990, 1993, 2001, and 2002.

Each regulation includes its official publication details in the EU Official Journal.

<p>15 May 2014 on the harmonisation of the <br /> laws of the Member States relating to the making available on the market of pressure equipment (OJ L 189, <br /> 27.6.2014, p. 164);<br /> 8.<br /> Regulation (EU) 2016/424 of the European Parliament and of the Council of 9 March 2016 on cableway <br /> installations and repealing Directive 2000/9/EC (OJ L 81, 31.3.2016, p. 1);<br /> 9.<br /> Regulation (EU) 2016/425 of the European Parliament and of the Council of 9 March 2016 on personal protective <br /> equipment and repealing Council Directive 89/686/EEC (OJ L 81, 31.3.2016, p. 51);<br /> 10.<br /> Regulation (EU) 2016/426 of the European Parliament and of the Council of 9 March 2016 on appliances burning <br /> gaseous fuels and repealing Directive 2009/142/EC (OJ L 81, 31.3.2016, p. 99);<br /> 11.<br /> Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, <br /> amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing <br /> Council Directives 90/385/EEC and 93/42/EEC (OJ L 117, 5.5.2017, p. 1);<br /> 12.</p>
Show original text

This document lists European Union regulations that set common standards across member states. It includes: Regulation 2017/746 about medical devices used for laboratory testing (replacing an older 1998 directive), Regulation 300/2008 about security rules for civil aviation (replacing 2002 regulations), Regulation 168/2013 about approving and monitoring two, three-wheel vehicles and quadricycles for the market, and Regulation 167/2013 about approving and monitoring agricultural and forestry vehicles for the market. These regulations were published in the Official Journal of the European Union on the dates specified.

<p>1223/2009 and repealing <br /> Council Directives 90/385/EEC and 93/42/EEC (OJ L 117, 5.5.2017, p. 1);<br /> 12.<br /> Regulation (EU) 2017/746 of the European Parliament and of the Council of 5 April 2017 on in vitro diagnostic <br /> medical devices and repealing Directive 98/79/EC and Commission Decision 2010/227/EU (OJ L 117, 5.5.2017, <br /> p. 176).<br /> Section B. List of other Union harmonisation legislation<br /> 13.<br /> Regulation (EC) No 300/2008 of the European Parliament and of the Council of 11 March 2008 on common rules <br /> in the field of civil aviation security and repealing Regulation (EC) No 2320/2002 (OJ L 97, 9.4.2008, p. 72);<br /> 14.<br /> Regulation (EU) No 168/2013 of the European Parliament and of the Council of 15 January 2013 on the approval <br /> and market surveillance of two- or three-wheel vehicles and quadricycles (OJ L 60, 2.3.2013, p. 52);<br /> 15.<br /> Regulation (EU) No 167/2013 of the European Parliament and of the Council of 5 February 2013 on the approval <br /> and market surveillance of agricultural and forestry vehicles (OJ L 60, 2.3.2013, p. 1);<br /> EN<br /> OJ L, 12.7.2024<br /> 124/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>16.</p>
Show original text

This document references four key European Union regulations and directives related to transportation safety and equipment standards: (1) Directive 2014/90/EU from July 23, 2014, which sets standards for marine equipment and replaced an earlier 1996 directive; (2) Directive 2016/797/EU from May 11, 2016, which ensures that rail systems work together properly across the European Union; (3) Regulation 2018/858/EU from May 30, 2018, which controls how motor vehicles, trailers, and their parts are approved and monitored in the market, and updated two earlier regulations; and (4) Regulation 2019/2144/EU from November 27, 2019, which sets safety requirements for motor vehicles and their components to protect passengers and pedestrians, and replaced three earlier regulations.

<p>.2013, p. 1);<br /> EN<br /> OJ L, 12.7.2024<br /> 124/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>16.<br /> Directive 2014/90/EU of the European Parliament and of the Council of 23 July 2014 on marine equipment and <br /> repealing Council Directive 96/98/EC (OJ L 257, 28.8.2014, p. 146);<br /> 17.<br /> Directive (EU) 2016/797 of the European Parliament and of the Council of 11 May 2016 on the interoperability of <br /> the rail system within the European Union (OJ L 138, 26.5.2016, p. 44);<br /> 18.<br /> Regulation (EU) 2018/858 of the European Parliament and of the Council of 30 May 2018 on the approval and <br /> market surveillance of motor vehicles and their trailers, and of systems, components and separate technical units <br /> intended for such vehicles, amending Regulations (EC) No 715/2007 and (EC) No 595/2009 and repealing Directive <br /> 2007/46/EC (OJ L 151, 14.6.2018, p. 1);<br /> 19.<br /> Regulation (EU) 2019/2144 of the European Parliament and of the Council of 27 November 2019 on type-approval <br /> requirements for motor vehicles and their trailers, and systems, components and separate technical units intended <br /> for such vehicles, as regards their general safety and the protection of vehicle occupants and vulnerable road users, <br /> amending Regulation (EU) 2018/858 of the European Parliament and of the Council and repealing Regulations (EC) <br /> No 78/2009, (EC) No 79/2009 and (EC) No 661</p>
Show original text

This document updates EU Regulation 2018/858 and removes several older regulations related to vehicle safety standards. These replaced regulations are: EC 78/2009, 79/2009, 661/2009, and Commission Regulations 631/2009, 406/2010, 672/2010, 1003/2010, 1005/2010, 1008/2010, 1009/2010, 19/2011, 109/2011, 458/2011, 65/2012, 130/2012, 347/2012, 351/2012, 1230/2012, and 2015/166. Additionally, EU Regulation 2018/1139 from July 4, 2018 establishes common rules for civil aviation in Europe and creates the European Union Aviation Safety Agency. This regulation updates earlier aviation rules (EC 2111/2005, 1008/2008, EU 996/2010, 376/2014) and removes the older EC 552/2004 and 216/2008 regulations.

<p>amending Regulation (EU) 2018/858 of the European Parliament and of the Council and repealing Regulations (EC) <br /> No 78/2009, (EC) No 79/2009 and (EC) No 661/2009 of the European Parliament and of the Council and <br /> Commission Regulations (EC) No 631/2009, (EU) No 406/2010, (EU) No 672/2010, (EU) No 1003/2010, <br /> (EU) No 1005/2010, (EU) No 1008/2010, (EU) No 1009/2010, (EU) No 19/2011, (EU) No 109/2011, (EU) <br /> No 458/2011, (EU) No 65/2012, (EU) No 130/2012, (EU) No 347/2012, (EU) No 351/2012, (EU) No 1230/2012 <br /> and (EU) 2015/166 (OJ L 325, 16.12.2019, p. 1);<br /> 20.<br /> Regulation (EU) 2018/1139 of the European Parliament and of the Council of 4 July 2018 on common rules in the <br /> field of civil aviation and establishing a European Union Aviation Safety Agency, and amending Regulations (EC) <br /> No 2111/2005, (EC) No 1008/2008, (EU) No 996/2010, (EU) No 376/2014 and Directives 2014/30/EU and <br /> 2014/53/EU of the European Parliament and of the Council, and repealing Regulations (EC) No 552/2004 and (EC) <br /> No 216/2008 of the European Parliament and</p>
Show original text

This regulation applies to the design, production, and sale of unmanned aircraft (drones) and their related components, including engines, propellers, and remote control equipment. It updates and replaces several previous European regulations on aircraft safety and radio equipment standards.

Annex II lists serious criminal offences that are covered under Article 5(1), point (h)(iii). These offences include: terrorism, human trafficking, child sexual abuse and child pornography, illegal drug trafficking, illegal weapons and explosives trafficking, murder, serious assault, illegal organ or tissue trade, illegal nuclear or radioactive material trafficking, kidnapping and hostage-taking, crimes prosecuted by the International Criminal Court, unlawful seizure of aircraft or ships, rape, environmental crimes, organized or armed robbery, sabotage, and participation in criminal organizations involved in any of these offences.

<p>/30/EU and <br /> 2014/53/EU of the European Parliament and of the Council, and repealing Regulations (EC) No 552/2004 and (EC) <br /> No 216/2008 of the European Parliament and of the Council and Council Regulation (EEC) No 3922/91 (OJ L 212, <br /> 22.8.2018, p. 1), in so far as the design, production and placing on the market of aircrafts referred to in Article 2(1), <br /> points (a) and (b) thereof, where it concerns unmanned aircraft and their engines, propellers, parts and equipment to <br /> control them remotely, are concerned.<br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 125/144</p> <p>ANNEX II<br /> List of criminal offences referred to in Article 5(1), first subparagraph, point (h)(iii)<br /> Criminal offences referred to in Article 5(1), first subparagraph, point (h)(iii):<br /> — terrorism,<br /> — trafficking in human beings,<br /> — sexual exploitation of children, and child pornography,<br /> — illicit trafficking in narcotic drugs or psychotropic substances,<br /> — illicit trafficking in weapons, munitions or explosives,<br /> — murder, grievous bodily injury,<br /> — illicit trade in human organs or tissue,<br /> — illicit trafficking in nuclear or radioactive materials,<br /> — kidnapping, illegal restraint or hostage-taking,<br /> — crimes within the jurisdiction of the International Criminal Court,<br /> — unlawful seizure of aircraft or ships,<br /> — rape,<br /> — environmental crime,<br /> — organised or armed robbery,<br /> — sabotage,<br /> — participation in a criminal organisation involved in one or more of the offences listed above.<br /> EN<br /> OJ L, 12.7.</p>
Show original text

This document lists serious crimes and high-risk artificial intelligence (AI) systems that require special regulation under European Union law (published July 12, 2024).

Serious Crimes Include:
- Hijacking aircraft or ships
- Rape
- Environmental crimes
- Organized or armed robbery
- Sabotage
- Participation in criminal organizations involved in any of the above crimes

High-Risk AI Systems (Annex III):

  1. Biometric Systems (facial recognition and similar technology):
    - Remote biometric identification systems that identify people from a distance
    - Exception: Systems that only verify a person's identity when they claim to be someone specific are not included
    - AI systems that categorize people based on sensitive characteristics (like race or gender)
    - AI systems designed to recognize human emotions

  2. Critical Infrastructure:
    - AI systems used for safety in managing digital systems, road traffic, water, gas, heating, or electricity supply

  3. Education and Training:
    - AI systems that decide who can attend schools or training programs
    - AI systems that evaluate student learning and progress
    - AI systems that determine what level of education a person can access

<p>seizure of aircraft or ships,<br /> — rape,<br /> — environmental crime,<br /> — organised or armed robbery,<br /> — sabotage,<br /> — participation in a criminal organisation involved in one or more of the offences listed above.<br /> EN<br /> OJ L, 12.7.2024<br /> 126/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>ANNEX III<br /> High-risk AI systems referred to in Article 6(2)<br /> High-risk AI systems pursuant to Article 6(2) are the AI systems listed in any of the following areas:<br /> 1.<br /> Biometrics, in so far as their use is permitted under relevant Union or national law:<br /> (a) remote biometric identification systems.<br /> This shall not include AI systems intended to be used for biometric verification the sole purpose of which is to <br /> confirm that a specific natural person is the person he or she claims to be;<br /> (b) AI systems intended to be used for biometric categorisation, according to sensitive or protected attributes or <br /> characteristics based on the inference of those attributes or characteristics;<br /> (c) AI systems intended to be used for emotion recognition.<br /> 2.<br /> Critical infrastructure: AI systems intended to be used as safety components in the management and operation of <br /> critical digital infrastructure, road traffic, or in the supply of water, gas, heating or electricity.<br /> 3.<br /> Education and vocational training:<br /> (a) AI systems intended to be used to determine access or admission or to assign natural persons to educational and <br /> vocational training institutions at all levels;<br /> (b) AI systems intended to be used to evaluate learning outcomes, including when those outcomes are used to steer <br /> the learning process of natural persons in educational and vocational training institutions at all levels;<br /> (c) AI systems intended to be used for the purpose of assessing the appropriate level of education that an individual <br /> will receive or will be able to access, in the context of or within educational and vocational training institutions <br /> at all levels;<br /> (d) AI systems intended</p>
Show original text

This text describes high-risk AI systems that require special oversight in five areas:

  1. Education: AI systems that decide what level of education a student can access, or that monitor students during tests to catch cheating.

  2. Employment: AI systems used to hire workers, such as posting job ads, screening applications, and rating candidates. Also includes AI that makes decisions about job promotions, terminations, task assignments, or performance reviews.

  3. Essential Services: AI systems that government agencies use to determine if people qualify for public benefits like healthcare, or to approve, deny, or cancel those benefits. Also includes AI that evaluates whether someone can borrow money or determines their credit score, and AI that assesses insurance risk and sets prices for life and health insurance.

These AI applications are considered high-risk because they significantly impact people's lives and opportunities.

<p>AI systems intended to be used for the purpose of assessing the appropriate level of education that an individual <br /> will receive or will be able to access, in the context of or within educational and vocational training institutions <br /> at all levels;<br /> (d) AI systems intended to be used for monitoring and detecting prohibited behaviour of students during tests in the <br /> context of or within educational and vocational training institutions at all levels.<br /> 4.<br /> Employment, workers’ management and access to self-employment:<br /> (a) AI systems intended to be used for the recruitment or selection of natural persons, in particular to place targeted <br /> job advertisements, to analyse and filter job applications, and to evaluate candidates;<br /> (b) AI systems intended to be used to make decisions affecting terms of work-related relationships, the promotion or <br /> termination of work-related contractual relationships, to allocate tasks based on individual behaviour or personal <br /> traits or characteristics or to monitor and evaluate the performance and behaviour of persons in such <br /> relationships.<br /> 5.<br /> Access to and enjoyment of essential private services and essential public services and benefits:<br /> (a) AI systems intended to be used by public authorities or on behalf of public authorities to evaluate the eligibility <br /> of natural persons for essential public assistance benefits and services, including healthcare services, as well as to <br /> grant, reduce, revoke, or reclaim such benefits and services;<br /> (b) AI systems intended to be used to evaluate the creditworthiness of natural persons or establish their credit score, <br /> with the exception of AI systems used for the purpose of detecting financial fraud;<br /> (c) AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life <br /> and health insurance;<br /> OJ L, 12.7.</p>
Show original text

This section covers high-risk AI systems in several areas:

  1. Financial and Insurance: AI systems that detect fraud in financial transactions, or assess risk and set prices for life and health insurance policies.

  2. Emergency Services: AI systems that analyze emergency calls, decide which emergency services to send (police, firefighters, or medical teams), determine response priority, or help hospitals triage patients.

  3. Law Enforcement: AI systems used by police and other law enforcement agencies (where permitted by law) for:
    - Predicting if someone might become a crime victim
    - Acting as lie detectors or similar tools
    - Evaluating evidence quality during criminal investigations
    - Assessing whether someone might commit or repeat a crime (beyond just analyzing patterns of individuals, but also evaluating personality traits and criminal history)

<p>exception of AI systems used for the purpose of detecting financial fraud;<br /> (c) AI systems intended to be used for risk assessment and pricing in relation to natural persons in the case of life <br /> and health insurance;<br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 127/144</p> <p>(d) AI systems intended to evaluate and classify emergency calls by natural persons or to be used to dispatch, or to <br /> establish priority in the dispatching of, emergency first response services, including by police, firefighters and <br /> medical aid, as well as of emergency healthcare patient triage systems.<br /> 6.<br /> Law enforcement, in so far as their use is permitted under relevant Union or national law:<br /> (a) AI systems intended to be used by or on behalf of law enforcement authorities, or by Union institutions, bodies, <br /> offices or agencies in support of law enforcement authorities or on their behalf to assess the risk of a natural <br /> person becoming the victim of criminal offences;<br /> (b) AI systems intended to be used by or on behalf of law enforcement authorities or by Union institutions, bodies, <br /> offices or agencies in support of law enforcement authorities as polygraphs or similar tools;<br /> (c) AI systems intended to be used by or on behalf of law enforcement authorities, or by Union institutions, bodies, <br /> offices or agencies, in support of law enforcement authorities to evaluate the reliability of evidence in the course <br /> of the investigation or prosecution of criminal offences;<br /> (d) AI systems intended to be used by law enforcement authorities or on their behalf or by Union institutions, <br /> bodies, offices or agencies in support of law enforcement authorities for assessing the risk of a natural person <br /> offending or re-offending not solely on the basis of the profiling of natural persons as referred to in Article 3(4) <br /> of Directive (EU) 2016/680, or to assess personality traits and characteristics or past</p>
Show original text

This text describes high-risk AI systems that require strict regulation in law enforcement and border control. It covers: (1) AI systems used by law enforcement to predict if someone will commit crimes or assess their personality and criminal history; (2) AI systems used by law enforcement to profile people during criminal investigations; (3) AI systems used by government authorities for border and migration control, including tools to detect lies, assess security and health risks, evaluate asylum and visa applications, and identify individuals entering a country. All these uses must comply with EU and national laws.

<p>person <br /> offending or re-offending not solely on the basis of the profiling of natural persons as referred to in Article 3(4) <br /> of Directive (EU) 2016/680, or to assess personality traits and characteristics or past criminal behaviour of <br /> natural persons or groups;<br /> (e) AI systems intended to be used by or on behalf of law enforcement authorities or by Union institutions, bodies, <br /> offices or agencies in support of law enforcement authorities for the profiling of natural persons as referred to in <br /> Article 3(4) of Directive (EU) 2016/680 in the course of the detection, investigation or prosecution of criminal <br /> offences.<br /> 7.<br /> Migration, asylum and border control management, in so far as their use is permitted under relevant Union or <br /> national law:<br /> (a) AI systems intended to be used by or on behalf of competent public authorities or by Union institutions, bodies, <br /> offices or agencies as polygraphs or similar tools;<br /> (b) AI systems intended to be used by or on behalf of competent public authorities or by Union institutions, bodies, <br /> offices or agencies to assess a risk, including a security risk, a risk of irregular migration, or a health risk, posed <br /> by a natural person who intends to enter or who has entered into the territory of a Member State;<br /> (c) AI systems intended to be used by or on behalf of competent public authorities or by Union institutions, bodies, <br /> offices or agencies to assist competent public authorities for the examination of applications for asylum, visa or <br /> residence permits and for associated complaints with regard to the eligibility of the natural persons applying for <br /> a status, including related assessments of the reliability of evidence;<br /> (d) AI systems intended to be used by or on behalf of competent public authorities, or by Union institutions, bodies, <br /> offices or agencies, in the context of migration, asylum or border control management, for the purpose of <br /> detecting, recognising or identifying natural persons,</p>
Show original text

This text describes high-risk AI systems that require special regulation under EU law:

  1. Migration and Border Control: AI systems used by government authorities or EU institutions to identify people during migration, asylum, or border control operations. This does not include systems that simply check travel documents.

  2. Justice and Democratic Processes:
    a) Court AI Systems: AI tools used by judges or courts to help research laws, understand facts, and apply the law to specific cases. Similar AI systems can also be used in alternative dispute resolution (like mediation).
    b) Election Interference: AI systems designed to influence election or referendum outcomes or change how people vote. However, this rule does not apply to AI tools used behind the scenes for administrative tasks like organizing or planning political campaigns.

Additionally, the text references technical documentation requirements that AI system providers must prepare according to Article 11(1) of EU Regulation 2024/1689.

<p>used by or on behalf of competent public authorities, or by Union institutions, bodies, <br /> offices or agencies, in the context of migration, asylum or border control management, for the purpose of <br /> detecting, recognising or identifying natural persons, with the exception of the verification of travel documents.<br /> 8.<br /> Administration of justice and democratic processes:<br /> (a) AI systems intended to be used by a judicial authority or on their behalf to assist a judicial authority in <br /> researching and interpreting facts and the law and in applying the law to a concrete set of facts, or to be used in <br /> a similar way in alternative dispute resolution;<br /> EN<br /> OJ L, 12.7.2024<br /> 128/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>(b) AI systems intended to be used for influencing the outcome of an election or referendum or the voting <br /> behaviour of natural persons in the exercise of their vote in elections or referenda. This does not include AI <br /> systems to the output of which natural persons are not directly exposed, such as tools used to organise, optimise <br /> or structure political campaigns from an administrative or logistical point of view.<br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 129/144</p> <p>ANNEX IV<br /> Technical documentation referred to in Article 11(1)<br /> The technical documentation referred to in Article 11(1) shall contain at least the following information, as applicable to <br /> the relevant AI system:<br /> 1.</p>
Show original text

ANNEX IV - Technical Documentation Requirements for AI Systems

Companies must provide technical documentation for AI systems that includes:

  1. GENERAL DESCRIPTION
    Provide basic information about the AI system:
    - What it is designed to do, who created it, and what version it is
    - How it connects to other software, hardware, or AI systems
    - What software versions are needed and any update requirements
    - How the system is sold or used (as software, downloads, or online services)
    - What hardware it needs to run on
    - Photos or diagrams showing how it looks and works inside (if it is part of a larger product)
    - A simple explanation of how users interact with it
    - Instructions for using the system

  2. DETAILED DESCRIPTION
    Explain how the AI system was built:
    - What methods and steps were used to create it, including any pre-made tools or systems from other companies
    - The system's design and how its algorithms work
    - The main decisions made when designing it and why those choices were made
    - Who the system is intended to be used by
    - The main categories or classifications the system uses

<p>129/144</p> <p>ANNEX IV<br /> Technical documentation referred to in Article 11(1)<br /> The technical documentation referred to in Article 11(1) shall contain at least the following information, as applicable to <br /> the relevant AI system:<br /> 1.<br /> A general description of the AI system including:<br /> (a) its intended purpose, the name of the provider and the version of the system reflecting its relation to previous <br /> versions;<br /> (b) how the AI system interacts with, or can be used to interact with, hardware or software, including with other AI <br /> systems, that are not part of the AI system itself, where applicable;<br /> (c) the versions of relevant software or firmware, and any requirements related to version updates;<br /> (d) the description of all the forms in which the AI system is placed on the market or put into service, such as <br /> software packages embedded into hardware, downloads, or APIs;<br /> (e) the description of the hardware on which the AI system is intended to run;<br /> (f) where the AI system is a component of products, photographs or illustrations showing external features, the <br /> marking and internal layout of those products;<br /> (g) a basic description of the user-interface provided to the deployer;<br /> (h) instructions for use for the deployer, and a basic description of the user-interface provided to the deployer, where <br /> applicable;<br /> 2.<br /> A detailed description of the elements of the AI system and of the process for its development, including:<br /> (a) the methods and steps performed for the development of the AI system, including, where relevant, recourse to <br /> pre-trained systems or tools provided by third parties and how those were used, integrated or modified by the <br /> provider;<br /> (b) the design specifications of the system, namely the general logic of the AI system and of the algorithms; the key <br /> design choices including the rationale and assumptions made, including with regard to persons or groups of <br /> persons in respect of who, the system is intended to be used; the main classification choices</p>
Show original text

AI systems must document: (a) how the AI works, key design decisions, who it's meant to serve, what it classifies, what it optimizes for, expected outputs, and any technical trade-offs made; (b) the system architecture showing how software components work together and the computing resources needed for development, training, testing, and validation; (c) data requirements including training methods, datasets used, where data came from, how it was selected and cleaned, and labeling procedures; (d) human oversight measures needed and technical tools to help users understand AI outputs; (e) if applicable, planned updates to the system and how it will stay compliant with requirements; (f) validation and testing methods used, including test data characteristics and metrics measuring accuracy, robustness, and compliance with requirements.

<p>general logic of the AI system and of the algorithms; the key <br /> design choices including the rationale and assumptions made, including with regard to persons or groups of <br /> persons in respect of who, the system is intended to be used; the main classification choices; what the system is <br /> designed to optimise for, and the relevance of the different parameters; the description of the expected output <br /> and output quality of the system; the decisions about any possible trade-off made regarding the technical <br /> solutions adopted to comply with the requirements set out in Chapter III, Section 2;<br /> (c) the description of the system architecture explaining how software components build on or feed into each other <br /> and integrate into the overall processing; the computational resources used to develop, train, test and validate the <br /> AI system;<br /> (d) where relevant, the data requirements in terms of datasheets describing the training methodologies and <br /> techniques and the training data sets used, including a general description of these data sets, information about <br /> their provenance, scope and main characteristics; how the data was obtained and selected; labelling procedures <br /> (e.g. for supervised learning), data cleaning methodologies (e.g. outliers detection);<br /> (e) assessment of the human oversight measures needed in accordance with Article 14, including an assessment of <br /> the technical measures needed to facilitate the interpretation of the outputs of AI systems by the deployers, in <br /> accordance with Article 13(3), point (d);<br /> (f) where applicable, a detailed description of pre-determined changes to the AI system and its performance, <br /> together with all the relevant information related to the technical solutions adopted to ensure continuous <br /> compliance of the AI system with the relevant requirements set out in Chapter III, Section 2;<br /> (g) the validation and testing procedures used, including information about the validation and testing data used and <br /> their main characteristics; metrics used to measure accuracy, robustness and compliance with other relevant <br /> requirements set out in Chapter III, Section 2</p>
Show original text

The documentation must include: (g) Details about how the AI system was tested and validated, including information about the test data used and its characteristics; measurements of accuracy, robustness, and compliance with requirements; potential discriminatory effects; and signed test reports from responsible persons, including tests for any planned changes; (h) Security measures to protect against cyber attacks; 3. Detailed information about how the AI system is monitored, operates, and controlled, specifically: what it can and cannot do, accuracy levels for different groups of people, overall expected accuracy for its intended use, potential unintended outcomes and risks to health, safety, rights, and discrimination, human oversight measures required, and technical tools to help users understand the system's outputs; specifications for input data where relevant; 4. An explanation of why the chosen performance measurements are appropriate for this specific AI system; 5. A detailed description of the risk management plan; 6. A description of any significant changes made to the system during its lifetime; 7.

<p>2;<br /> (g) the validation and testing procedures used, including information about the validation and testing data used and <br /> their main characteristics; metrics used to measure accuracy, robustness and compliance with other relevant <br /> requirements set out in Chapter III, Section 2, as well as potentially discriminatory impacts; test logs and all test <br /> reports dated and signed by the responsible persons, including with regard to pre-determined changes as referred <br /> to under point (f);<br /> EN<br /> OJ L, 12.7.2024<br /> 130/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>(h) cybersecurity measures put in place;<br /> 3.<br /> Detailed information about the monitoring, functioning and control of the AI system, in particular with regard to: <br /> its capabilities and limitations in performance, including the degrees of accuracy for specific persons or groups of <br /> persons on which the system is intended to be used and the overall expected level of accuracy in relation to its <br /> intended purpose; the foreseeable unintended outcomes and sources of risks to health and safety, fundamental rights <br /> and discrimination in view of the intended purpose of the AI system; the human oversight measures needed in <br /> accordance with Article 14, including the technical measures put in place to facilitate the interpretation of the <br /> outputs of AI systems by the deployers; specifications on input data, as appropriate;<br /> 4.<br /> A description of the appropriateness of the performance metrics for the specific AI system;<br /> 5.<br /> A detailed description of the risk management system in accordance with Article 9;<br /> 6.<br /> A description of relevant changes made by the provider to the system through its lifecycle;<br /> 7.</p>
Show original text

High-risk AI systems must include detailed documentation covering: (1) whether the performance metrics are suitable for the specific AI system; (2) a complete description of the risk management system; (3) any changes made to the system during its lifetime; (4) a list of European standards used, or if none were used, a detailed explanation of how the system meets requirements instead; (5) a copy of the EU declaration of conformity; and (6) a detailed plan for monitoring the AI system's performance after it is released to the market. The EU declaration of conformity must include: (1) the AI system's name, type, and identifying information; (2) the provider's name and address; (3) a statement that the provider takes full responsibility for the declaration; and (4) a statement confirming that the AI system complies with European regulations.

<p>appropriateness of the performance metrics for the specific AI system;<br /> 5.<br /> A detailed description of the risk management system in accordance with Article 9;<br /> 6.<br /> A description of relevant changes made by the provider to the system through its lifecycle;<br /> 7.<br /> A list of the harmonised standards applied in full or in part the references of which have been published in the <br /> Official Journal of the European Union; where no such harmonised standards have been applied, a detailed description <br /> of the solutions adopted to meet the requirements set out in Chapter III, Section 2, including a list of other relevant <br /> standards and technical specifications applied;<br /> 8.<br /> A copy of the EU declaration of conformity referred to in Article 47;<br /> 9.<br /> A detailed description of the system in place to evaluate the AI system performance in the post-market phase in <br /> accordance with Article 72, including the post-market monitoring plan referred to in Article 72(3).<br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 131/144</p> <p>ANNEX V<br /> EU declaration of conformity<br /> The EU declaration of conformity referred to in Article 47, shall contain all of the following information:<br /> 1.<br /> AI system name and type and any additional unambiguous reference allowing the identification and traceability of <br /> the AI system;<br /> 2.<br /> The name and address of the provider or, where applicable, of their authorised representative;<br /> 3.<br /> A statement that the EU declaration of conformity referred to in Article 47 is issued under the sole responsibility of <br /> the provider;<br /> 4.<br /> A statement that the AI system is in conformity with this Regulation and, if applicable, with any other relevant <br /> Union law that provides for the issuing of the EU declaration of conformity referred to in Article 47;<br /> 5.</p>
Show original text

The provider must include the following in their declaration: (1) A statement confirming the AI system meets this Regulation and any other relevant EU laws requiring an EU declaration of conformity; (2) If the AI system processes personal data, a statement that it complies with EU data protection regulations (2016/679, 2018/1725, and Directive 2016/680); (3) References to any relevant standards or specifications used to demonstrate conformity; (4) If applicable, the name and identification number of the notified body that performed the assessment, a description of the assessment procedure, and the certificate number; (5) The location and date the declaration was issued, the name and title of the person who signed it, who they represent, and their signature. The conformity assessment procedure based on internal control requires the provider to: (1) Verify that their quality management system meets Article 17 requirements; (2) Review the technical documentation to confirm the AI system complies with essential requirements in Chapter III, Section 2; (3) Confirm that the AI system's design, development process, and post-market monitoring align with the technical documentation.

<p>the provider;<br /> 4.<br /> A statement that the AI system is in conformity with this Regulation and, if applicable, with any other relevant <br /> Union law that provides for the issuing of the EU declaration of conformity referred to in Article 47;<br /> 5.<br /> Where an AI system involves the processing of personal data, a statement that that AI system complies with <br /> Regulations (EU) 2016/679 and (EU) 2018/1725 and Directive (EU) 2016/680;<br /> 6.<br /> References to any relevant harmonised standards used or any other common specification in relation to which <br /> conformity is declared;<br /> 7.<br /> Where applicable, the name and identification number of the notified body, a description of the conformity <br /> assessment procedure performed, and identification of the certificate issued;<br /> 8.<br /> The place and date of issue of the declaration, the name and function of the person who signed it, as well as an <br /> indication for, or on behalf of whom, that person signed, a signature.<br /> EN<br /> OJ L, 12.7.2024<br /> 132/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>ANNEX VI<br /> Conformity assessment procedure based on internal control<br /> 1.<br /> The conformity assessment procedure based on internal control is the conformity assessment procedure based on <br /> points 2, 3 and 4.<br /> 2.<br /> The provider verifies that the established quality management system is in compliance with the requirements of <br /> Article 17.<br /> 3.<br /> The provider examines the information contained in the technical documentation in order to assess the compliance <br /> of the AI system with the relevant essential requirements set out in Chapter III, Section 2.<br /> 4.<br /> The provider also verifies that the design and development process of the AI system and its post-market monitoring <br /> as referred to in Article 72 is consistent with the technical documentation.<br /> OJ L, 12.7.</p>
Show original text

The provider must confirm that the AI system's design, development process, and after-market monitoring (as described in Article 72) match the technical documentation.

ANNEX VII: Conformity Assessment Through Quality Management System and Technical Documentation Review

  1. Introduction
    This conformity assessment procedure involves evaluating both the quality management system and the technical documentation (see points 2-5).

  2. Overview
    Two things are examined:
    - The approved quality management system for designing, developing, and testing AI systems (under Article 17) is reviewed and monitored as described in point 5.
    - The AI system's technical documentation is reviewed as described in point 4.

  3. Quality Management System

3.1. The provider's application must include:
(a) The provider's name and address (and the authorized representative's name and address, if applicable)
(b) A list of all AI systems covered by the same quality management system
(c) Technical documentation for each AI system
(d) Documentation of the quality management system covering all requirements in Article 17
(e) A description of procedures to keep the quality management system working properly
(f) A written statement confirming this application has not been submitted to any other notified body

3.2. A notified body will assess whether the quality management system meets the requirements in Article 17 and notify the provider or their representative of the decision.

<p>, Section 2.<br /> 4.<br /> The provider also verifies that the design and development process of the AI system and its post-market monitoring <br /> as referred to in Article 72 is consistent with the technical documentation.<br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 133/144</p> <p>ANNEX VII<br /> Conformity based on an assessment of the quality management system and an assessment of the <br /> technical documentation<br /> 1.<br /> Introduction<br /> Conformity based on an assessment of the quality management system and an assessment of the technical <br /> documentation is the conformity assessment procedure based on points 2 to 5.<br /> 2.<br /> Overview<br /> The approved quality management system for the design, development and testing of AI systems pursuant to <br /> Article 17 shall be examined in accordance with point 3 and shall be subject to surveillance as specified in point 5. <br /> The technical documentation of the AI system shall be examined in accordance with point 4.<br /> 3.<br /> Quality management system<br /> 3.1.<br /> The application of the provider shall include:<br /> (a) the name and address of the provider and, if the application is lodged by an authorised representative, also their <br /> name and address;<br /> (b) the list of AI systems covered under the same quality management system;<br /> (c) the technical documentation for each AI system covered under the same quality management system;<br /> (d) the documentation concerning the quality management system which shall cover all the aspects listed under <br /> Article 17;<br /> (e) a description of the procedures in place to ensure that the quality management system remains adequate and <br /> effective;<br /> (f) a written declaration that the same application has not been lodged with any other notified body.<br /> 3.2.<br /> The quality management system shall be assessed by the notified body, which shall determine whether it satisfies the <br /> requirements referred to in Article 17.<br /> The decision shall be notified to the provider or its authorised representative.</p>
Show original text

A notified body must review the quality management system to ensure it meets the requirements in Article 17. The notified body will inform the provider or their representative of the decision, including the assessment results and reasons.

The provider must continue to use and maintain the approved quality management system to keep it effective and suitable.

If the provider wants to change the approved quality management system or the list of AI systems it covers, they must tell the notified body. The notified body will examine these changes and decide if the updated system still meets the requirements or if a new assessment is needed. The notified body will notify the provider of their decision with the examination results and reasons.

For technical documentation review, the provider must submit an application to a notified body of their choice. This application covers the AI system the provider plans to sell or use, which is part of the quality management system mentioned above.

The application must include: (a) the provider's name and address, (b) a written statement that no other notified body has received the same application, and (c) the technical documentation listed in Annex IV.

The notified body will examine the technical documentation.

<p>any other notified body.<br /> 3.2.<br /> The quality management system shall be assessed by the notified body, which shall determine whether it satisfies the <br /> requirements referred to in Article 17.<br /> The decision shall be notified to the provider or its authorised representative.<br /> The notification shall contain the conclusions of the assessment of the quality management system and the reasoned <br /> assessment decision.<br /> 3.3.<br /> The quality management system as approved shall continue to be implemented and maintained by the provider so <br /> that it remains adequate and efficient.<br /> 3.4.<br /> Any intended change to the approved quality management system or the list of AI systems covered by the latter shall <br /> be brought to the attention of the notified body by the provider.<br /> The proposed changes shall be examined by the notified body, which shall decide whether the modified quality <br /> management system continues to satisfy the requirements referred to in point 3.2 or whether a reassessment is <br /> necessary.<br /> The notified body shall notify the provider of its decision. The notification shall contain the conclusions of the <br /> examination of the changes and the reasoned assessment decision.<br /> 4.<br /> Control of the technical documentation.<br /> 4.1.<br /> In addition to the application referred to in point 3, an application with a notified body of their choice shall be <br /> lodged by the provider for the assessment of the technical documentation relating to the AI system which the <br /> provider intends to place on the market or put into service and which is covered by the quality management system <br /> referred to under point 3.<br /> 4.2.<br /> The application shall include:<br /> (a) the name and address of the provider;<br /> (b) a written declaration that the same application has not been lodged with any other notified body;<br /> (c) the technical documentation referred to in Annex IV.<br /> EN<br /> OJ L, 12.7.2024<br /> 134/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>4.3.<br /> The technical documentation shall be examined by the notified body.</p>
Show original text

A notified body must review the technical documentation for an AI system. The notified body needs full access to the training, validation, and testing data sets used to develop the AI system. This access can be provided through secure remote methods like APIs if needed.

During the review, the notified body can ask the AI provider for more evidence or additional tests to verify that the AI system meets the required standards. If the notified body is not satisfied with the provider's tests, it can conduct its own tests.

In cases where other verification methods are insufficient, the notified body can request access to the AI system's training models and trained models, including their parameters. This access must comply with EU laws protecting intellectual property and trade secrets.

Once the review is complete, the notified body must notify the provider of its decision. The notification must include the assessment conclusions and the reasoned decision. If the AI system meets all required standards, the notified body issues a Union technical documentation assessment certificate.

<p>L, 12.7.2024<br /> 134/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>4.3.<br /> The technical documentation shall be examined by the notified body. Where relevant, and limited to what is <br /> necessary to fulfil its tasks, the notified body shall be granted full access to the training, validation, and testing data <br /> sets used, including, where appropriate and subject to security safeguards, through API or other relevant technical <br /> means and tools enabling remote access.<br /> 4.4.<br /> In examining the technical documentation, the notified body may require that the provider supply further evidence <br /> or carry out further tests so as to enable a proper assessment of the conformity of the AI system with the <br /> requirements set out in Chapter III, Section 2. Where the notified body is not satisfied with the tests carried out by <br /> the provider, the notified body shall itself directly carry out adequate tests, as appropriate.<br /> 4.5.<br /> Where necessary to assess the conformity of the high-risk AI system with the requirements set out in Chapter III, <br /> Section 2, after all other reasonable means to verify conformity have been exhausted and have proven to be <br /> insufficient, and upon a reasoned request, the notified body shall also be granted access to the training and trained <br /> models of the AI system, including its relevant parameters. Such access shall be subject to existing Union law on the <br /> protection of intellectual property and trade secrets.<br /> 4.6.<br /> The decision of the notified body shall be notified to the provider or its authorised representative. The notification <br /> shall contain the conclusions of the assessment of the technical documentation and the reasoned assessment <br /> decision.<br /> Where the AI system is in conformity with the requirements set out in Chapter III, Section 2, the notified body shall <br /> issue a Union technical documentation assessment certificate.</p>
Show original text

When an AI system meets all the requirements in Chapter III, Section 2, the notified body must issue a Union technical documentation assessment certificate. This certificate must include the provider's name and address, examination results, any conditions for validity, and information needed to identify the AI system. The certificate and supporting documents must contain all relevant information to verify that the AI system is compliant and to allow monitoring of the system during use.

If the AI system does not meet the requirements in Chapter III, Section 2, the notified body must refuse to issue the certificate and explain the reasons in detail to the applicant.

If the AI system fails because of problems with its training data, the system must be retrained before a new conformity assessment can be requested. In this case, the notified body's refusal letter must specifically explain the data quality issues and why the system did not comply.

If any changes are made to the AI system that could affect its compliance with requirements or its intended purpose, the notified body that issued the original certificate must assess these changes. The provider must notify the notified body before making such changes or inform them if changes have already occurred.

<p>contain the conclusions of the assessment of the technical documentation and the reasoned assessment <br /> decision.<br /> Where the AI system is in conformity with the requirements set out in Chapter III, Section 2, the notified body shall <br /> issue a Union technical documentation assessment certificate. The certificate shall indicate the name and address of <br /> the provider, the conclusions of the examination, the conditions (if any) for its validity and the data necessary for the <br /> identification of the AI system.<br /> The certificate and its annexes shall contain all relevant information to allow the conformity of the AI system to be <br /> evaluated, and to allow for control of the AI system while in use, where applicable.<br /> Where the AI system is not in conformity with the requirements set out in Chapter III, Section 2, the notified body <br /> shall refuse to issue a Union technical documentation assessment certificate and shall inform the applicant <br /> accordingly, giving detailed reasons for its refusal.<br /> Where the AI system does not meet the requirement relating to the data used to train it, re-training of the AI system <br /> will be needed prior to the application for a new conformity assessment. In this case, the reasoned assessment <br /> decision of the notified body refusing to issue the Union technical documentation assessment certificate shall <br /> contain specific considerations on the quality data used to train the AI system, in particular on the reasons for <br /> non-compliance.<br /> 4.7.<br /> Any change to the AI system that could affect the compliance of the AI system with the requirements or its intended <br /> purpose shall be assessed by the notified body which issued the Union technical documentation assessment <br /> certificate. The provider shall inform such notified body of its intention to introduce any of the abovementioned <br /> changes, or if it otherwise becomes aware of the occurrence of such changes.</p>
Show original text

The notified body that issued the Union technical documentation assessment certificate must review any changes the provider wants to make. The provider must tell this notified body about planned changes or if changes have already happened. The notified body will decide if the changes need a full new assessment or if a simple supplement to the certificate is enough. If a supplement is sufficient, the notified body will review the changes, inform the provider of its decision, and issue a supplement to the certificate if approved.

The notified body must monitor the provider's quality management system to ensure the provider follows all approved rules and conditions. The provider must allow the notified body to visit the facilities where AI systems are designed, developed, and tested. The provider must also share all necessary information with the notified body. The notified body will conduct regular audits to confirm the provider maintains and uses the quality management system properly and will give the provider an audit report. During these audits, the notified body may also test the AI systems that received a Union technical documentation assessment certificate.

<p>be assessed by the notified body which issued the Union technical documentation assessment <br /> certificate. The provider shall inform such notified body of its intention to introduce any of the abovementioned <br /> changes, or if it otherwise becomes aware of the occurrence of such changes. The intended changes shall be assessed <br /> by the notified body, which shall decide whether those changes require a new conformity assessment in accordance <br /> with Article 43(4) or whether they could be addressed by means of a supplement to the Union technical <br /> documentation assessment certificate. In the latter case, the notified body shall assess the changes, notify the <br /> provider of its decision and, where the changes are approved, issue to the provider a supplement to the Union <br /> technical documentation assessment certificate.<br /> 5.<br /> Surveillance of the approved quality management system.<br /> 5.1.<br /> The purpose of the surveillance carried out by the notified body referred to in Point 3 is to make sure that the <br /> provider duly complies with the terms and conditions of the approved quality management system.<br /> 5.2.<br /> For assessment purposes, the provider shall allow the notified body to access the premises where the design, <br /> development, testing of the AI systems is taking place. The provider shall further share with the notified body all <br /> necessary information.<br /> 5.3.<br /> The notified body shall carry out periodic audits to make sure that the provider maintains and applies the quality <br /> management system and shall provide the provider with an audit report. In the context of those audits, the notified <br /> body may carry out additional tests of the AI systems for which a Union technical documentation assessment <br /> certificate was issued.<br /> OJ L, 12.7.</p>
Show original text

Auditors must give the provider an audit report. During these audits, the notified body can perform additional tests on AI systems that have received a Union technical documentation assessment certificate. According to Article 49, high-risk AI systems must be registered with the following information, which must be kept current: (1) The provider's name, address, and contact details; (2) If someone else submits information on behalf of the provider, their name, address, and contact details; (3) The authorized representative's name, address, and contact details, if applicable; (4) The AI system's trade name and any other identifying information for tracking; (5) What the AI system is designed to do and what components and functions it supports; (6) A simple explanation of the data and inputs the system uses and how it works; (7) Whether the AI system is currently on the market or in service, or if it has been removed from the market or recalled; (8) The type, number, and expiration date of any certificate from the notified body, plus the notified body's name or identification number; (9) A scanned copy of the certificate mentioned above, if applicable.

<p>shall provide the provider with an audit report. In the context of those audits, the notified <br /> body may carry out additional tests of the AI systems for which a Union technical documentation assessment <br /> certificate was issued.<br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 135/144</p> <p>ANNEX VIII<br /> Information to be submitted upon the registration of high-risk AI systems in accordance with <br /> Article 49<br /> Section A — Information to be submitted by providers of high-risk AI systems in accordance with Article 49(1)<br /> The following information shall be provided and thereafter kept up to date with regard to high-risk AI systems to be <br /> registered in accordance with Article 49(1):<br /> 1.<br /> The name, address and contact details of the provider;<br /> 2.<br /> Where submission of information is carried out by another person on behalf of the provider, the name, address and <br /> contact details of that person;<br /> 3.<br /> The name, address and contact details of the authorised representative, where applicable;<br /> 4.<br /> The AI system trade name and any additional unambiguous reference allowing the identification and traceability of <br /> the AI system;<br /> 5.<br /> A description of the intended purpose of the AI system and of the components and functions supported through <br /> this AI system;<br /> 6.<br /> A basic and concise description of the information used by the system (data, inputs) and its operating logic;<br /> 7.<br /> The status of the AI system (on the market, or in service; no longer placed on the market/in service, recalled);<br /> 8.<br /> The type, number and expiry date of the certificate issued by the notified body and the name or identification <br /> number of that notified body, where applicable;<br /> 9.<br /> A scanned copy of the certificate referred to in point 8, where applicable;<br /> 10.</p>
Show original text

This document outlines what information must be provided about AI systems:

Basic Requirements (Section A):
- Certificate details: type, number, expiry date, and the notified body's name or ID number (if applicable)
- A scanned copy of the certificate (if applicable)
- Which EU countries the AI system is being sold or used in
- A copy of the EU declaration of conformity
- Instructions for use in electronic format (except for high-risk AI systems used in law enforcement, migration, asylum, or border control)
- Optional: a website link for more information

High-Risk AI Systems (Section B):
Providers of high-risk AI systems must submit and regularly update the following information:
- Provider's name, address, and contact details
- Name, address, and contact details of the person submitting information (if different from the provider)
- Authorized representative's name, address, and contact details (if applicable)
- AI system's trade name and identification details for tracking
- What the AI system is designed to do
- The specific reason why the AI system is not considered high-risk
- A brief explanation of why it is not high-risk
- Current status: whether it is being sold, in use, no longer available, or recalled

<p>The type, number and expiry date of the certificate issued by the notified body and the name or identification <br /> number of that notified body, where applicable;<br /> 9.<br /> A scanned copy of the certificate referred to in point 8, where applicable;<br /> 10.<br /> Any Member States in which the AI system has been placed on the market, put into service or made available in the <br /> Union;<br /> 11.<br /> A copy of the EU declaration of conformity referred to in Article 47;<br /> 12.<br /> Electronic instructions for use; this information shall not be provided for high-risk AI systems in the areas of law <br /> enforcement or migration, asylum and border control management referred to in Annex III, points 1, 6 and 7;<br /> 13.<br /> A URL for additional information (optional).<br /> Section B — Information to be submitted by providers of high-risk AI systems in accordance with Article 49(2)<br /> The following information shall be provided and thereafter kept up to date with regard to AI systems to be registered in <br /> accordance with Article 49(2):<br /> 1.<br /> The name, address and contact details of the provider;<br /> 2.<br /> Where submission of information is carried out by another person on behalf of the provider, the name, address and <br /> contact details of that person;<br /> 3.<br /> The name, address and contact details of the authorised representative, where applicable;<br /> 4.<br /> The AI system trade name and any additional unambiguous reference allowing the identification and traceability of <br /> the AI system;<br /> 5.<br /> A description of the intended purpose of the AI system;<br /> 6.<br /> The condition or conditions under Article 6(3)based on which the AI system is considered to be not-high-risk;<br /> 7.<br /> A short summary of the grounds on which the AI system is considered to be not-high-risk in application of the <br /> procedure under Article 6(3);<br /> 8.<br /> The status of the AI system (on the market, or in service; no longer placed on the market/in service, recalled);<br /> 9.</p>
Show original text

This document outlines information requirements for AI systems and their users. First, it notes that certain AI systems are classified as low-risk under Article 6(3). Second, it requires reporting on the current status of an AI system—whether it is actively on the market or in service, or if it has been removed from the market or recalled. Third, it requires listing all EU Member States where the AI system has been made available. For high-risk AI systems that must be registered according to Article 49(3), deployers must submit and maintain updated information including: the deployer's name, address, and contact details; the name, address, and contact details of the person submitting information on behalf of the deployer; a link to the AI system's entry in the EU database provided by the manufacturer; a summary of the fundamental rights impact assessment conducted under Article 27; and a summary of the data protection impact assessment completed under Article 35 of EU Regulation 2016/679 or Article 27 of EU Directive 2016/680, as specified in Article 26(8) of this regulation, where applicable.

<p>is considered to be not-high-risk in application of the <br /> procedure under Article 6(3);<br /> 8.<br /> The status of the AI system (on the market, or in service; no longer placed on the market/in service, recalled);<br /> 9.<br /> Any Member States in which the AI system has been placed on the market, put into service or made available in the <br /> Union.<br /> EN<br /> OJ L, 12.7.2024<br /> 136/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>Section C — Information to be submitted by deployers of high-risk AI systems in accordance with Article 49(3)<br /> The following information shall be provided and thereafter kept up to date with regard to high-risk AI systems to be <br /> registered in accordance with Article 49(3):<br /> 1.<br /> The name, address and contact details of the deployer;<br /> 2.<br /> The name, address and contact details of the person submitting information on behalf of the deployer;<br /> 3.<br /> The URL of the entry of the AI system in the EU database by its provider;<br /> 4.<br /> A summary of the findings of the fundamental rights impact assessment conducted in accordance with Article 27;<br /> 5.<br /> A summary of the data protection impact assessment carried out in accordance with Article 35 of Regulation (EU) <br /> 2016/679 or Article 27 of Directive (EU) 2016/680 as specified in Article 26(8) of this Regulation, where <br /> applicable.<br /> OJ L, 12.7.</p>
Show original text

This document outlines two annexes from EU Regulation 2024/1689:

ANNEX IX - Registration Requirements for High-Risk AI System Testing:
When registering high-risk AI systems for real-world testing (as described in Article 60), the following information must be provided and kept current:
1. A unique identification number for the testing across the EU
2. Names and contact information for the AI system provider and any organizations deploying it during testing
3. A brief description of the AI system, what it is designed to do, and details needed to identify it
4. A summary of the main features of the real-world testing plan
5. Details about any suspension or stopping of the testing

ANNEX X - EU Laws on Large-Scale IT Systems for Security and Justice:
This annex references EU laws governing major information technology systems used for border security and law enforcement, including the Schengen Information System. The Schengen Information System is regulated by EU Regulation 2018/1860 (adopted November 28, 2018), which governs how this system is used to manage the return of third-country nationals who are illegally present in the EU.

<p>Regulation (EU) <br /> 2016/679 or Article 27 of Directive (EU) 2016/680 as specified in Article 26(8) of this Regulation, where <br /> applicable.<br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 137/144</p> <p>ANNEX IX<br /> Information to be submitted upon the registration of high-risk AI systems listed in Annex III in <br /> relation to testing in real world conditions in accordance with Article 60<br /> The following information shall be provided and thereafter kept up to date with regard to testing in real world conditions to <br /> be registered in accordance with Article 60:<br /> 1.<br /> A Union-wide unique single identification number of the testing in real world conditions;<br /> 2.<br /> The name and contact details of the provider or prospective provider and of the deployers involved in the testing in <br /> real world conditions;<br /> 3.<br /> A brief description of the AI system, its intended purpose, and other information necessary for the identification of <br /> the system;<br /> 4.<br /> A summary of the main characteristics of the plan for testing in real world conditions;<br /> 5.<br /> Information on the suspension or termination of the testing in real world conditions.<br /> EN<br /> OJ L, 12.7.2024<br /> 138/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>ANNEX X<br /> Union legislative acts on large-scale IT systems in the area of Freedom, Security and Justice<br /> 1.<br /> Schengen Information System<br /> (a) Regulation (EU) 2018/1860 of the European Parliament and of the Council of 28 November 2018 on the use of <br /> the Schengen Information System for the return of illegally staying third-country nationals (OJ L 312, <br /> 7.12.2018, p. 1).</p>
Show original text

This document outlines three key EU regulations from November 28, 2018, concerning the Schengen Information System (SIS): (1) A regulation on using SIS to track and return illegal immigrants; (2) Regulation 2018/1861 establishing how SIS operates for border control checks, updating the original Schengen Agreement rules; and (3) Regulation 2018/1862 establishing how SIS operates for police and criminal justice cooperation. Additionally, it references a 2021 regulation (2021/1133) from July 7, 2021, that modified several EU regulations to allow the Visa Information System to access other EU information systems for visa processing purposes.

<p>of the Council of 28 November 2018 on the use of <br /> the Schengen Information System for the return of illegally staying third-country nationals (OJ L 312, <br /> 7.12.2018, p. 1).<br /> (b) Regulation (EU) 2018/1861 of the European Parliament and of the Council of 28 November 2018 on the <br /> establishment, operation and use of the Schengen Information System (SIS) in the field of border checks, and <br /> amending the Convention implementing the Schengen Agreement, and amending and repealing Regulation (EC) <br /> No 1987/2006 (OJ L 312, 7.12.2018, p. 14).<br /> (c) Regulation (EU) 2018/1862 of the European Parliament and of the Council of 28 November 2018 on the <br /> establishment, operation and use of the Schengen Information System (SIS) in the field of police cooperation and <br /> judicial cooperation in criminal matters, amending and repealing Council Decision 2007/533/JHA, and <br /> repealing Regulation (EC) No 1986/2006 of the European Parliament and of the Council and Commission <br /> Decision 2010/261/EU (OJ L 312, 7.12.2018, p. 56).<br /> 2.<br /> Visa Information System<br /> (a) Regulation (EU) 2021/1133 of the European Parliament and of the Council of 7 July 2021 amending <br /> Regulations (EU) No 603/2013, (EU) 2016/794, (EU) 2018/1862, (EU) 2019/816 and (EU) 2019/818 as regards <br /> the establishment of the conditions for accessing other EU information systems for the purposes of the Visa <br /> Information System (OJ L </p>
Show original text

This text describes European Union regulations related to two main systems:

  1. Visa Information System (VIS): Regulation (EU) 2021/1134, adopted by the European Parliament and Council on July 7, 2021, updates multiple EU regulations governing visa procedures. It allows access to other EU information systems for visa purposes and reforms how the Visa Information System operates. This regulation was published in the Official Journal on July 13, 2021.

  2. Eurodac: Regulation (EU) 2024/1358, adopted on May 14, 2024, establishes a system called Eurodac that compares biometric data (fingerprints and facial recognition) to identify people illegally staying in the EU and stateless persons. It supports the implementation of other EU migration regulations and allows EU member states' police forces and Europol to use Eurodac data for law enforcement purposes. This regulation updates earlier EU rules on border management and biometric data.

<p>8/1862, (EU) 2019/816 and (EU) 2019/818 as regards <br /> the establishment of the conditions for accessing other EU information systems for the purposes of the Visa <br /> Information System (OJ L 248, 13.7.2021, p. 1).<br /> (b) Regulation (EU) 2021/1134 of the European Parliament and of the Council of 7 July 2021 amending <br /> Regulations (EC) No 767/2008, (EC) No 810/2009, (EU) 2016/399, (EU) 2017/2226, (EU) 2018/1240, (EU) <br /> 2018/1860, (EU) 2018/1861, (EU) 2019/817 and (EU) 2019/1896 of the European Parliament and of the <br /> Council and repealing Council Decisions 2004/512/EC and 2008/633/JHA, for the purpose of reforming the <br /> Visa Information System (OJ L 248, 13.7.2021, p. 11).<br /> 3.<br /> Eurodac<br /> Regulation (EU) 2024/1358 of the European Parliament and of the Council of 14 May 2024 on the establishment of <br /> ‘Eurodac’ for the comparison of biometric data in order to effectively apply Regulations (EU) 2024/1315 and (EU) <br /> 2024/1350 of the European Parliament and of the Council and Council Directive 2001/55/EC and to identify <br /> illegally staying third-country nationals and stateless persons and on requests for the comparison with Eurodac data <br /> by Member States’ law enforcement authorities and Europol for law enforcement purposes, amending Regulations <br /> (EU) 2018/1240 and (EU) 2019/818</p>
Show original text

This document describes four European Union border and security systems:

  1. Eurodac Data Comparison: A regulation (EU 2024/1358) that allows law enforcement authorities and Europol in EU member states to request and compare data with Eurodac (the EU fingerprint database) for law enforcement purposes. This regulation updates previous rules from 2018 and 2019, and replaces an earlier regulation from 2013.

  2. Entry/Exit System (EES): A regulation (EU 2017/2226) that requires EU member states to record when third-country nationals (non-EU citizens) enter and exit the EU's external borders. It also tracks entry refusals. Law enforcement agencies can access this system for security purposes.

  3. European Travel Information and Authorisation System (ETIAS): A regulation (EU 2018/1240) that establishes a system for travel authorization and information for visitors to the EU. This regulation updates several previous border and security regulations.

<p>less persons and on requests for the comparison with Eurodac data <br /> by Member States’ law enforcement authorities and Europol for law enforcement purposes, amending Regulations <br /> (EU) 2018/1240 and (EU) 2019/818 of the European Parliament and of the Council and repealing Regulation (EU) <br /> No 603/2013 of the European Parliament and of the Council (OJ L, 2024/1358, 22.5.2024, ELI: http://data.europa. <br /> eu/eli/reg/2024/1358/oj).<br /> 4.<br /> Entry/Exit System<br /> Regulation (EU) 2017/2226 of the European Parliament and of the Council of 30 November 2017 establishing an <br /> Entry/Exit System (EES) to register entry and exit data and refusal of entry data of third-country nationals crossing <br /> the external borders of the Member States and determining the conditions for access to the EES for law enforcement <br /> purposes, and amending the Convention implementing the Schengen Agreement and Regulations (EC) <br /> No 767/2008 and (EU) No 1077/2011 (OJ L 327, 9.12.2017, p. 20).<br /> 5.<br /> European Travel Information and Authorisation System<br /> (a) Regulation (EU) 2018/1240 of the European Parliament and of the Council of 12 September 2018 establishing <br /> a European Travel Information and Authorisation System (ETIAS) and amending Regulations (EU) <br /> No 1077/2011, (EU) No 515/2014, (EU) 2016/399, (EU) 2016/1624 and (EU) 2017/2226 (OJ L 236, <br /> 19.9.2018, p. 1).</p>
Show original text

This document references several European Union regulations related to border security and information systems:

  1. ETIAS (European Travel Information and Authorisation System): Regulation (EU) 2018/1241 from September 12, 2018, which modifies an earlier regulation to create a travel authorization system for visitors to the EU.

  2. ECRIS-TCN (European Criminal Records Information System for Third-Country Nationals and Stateless Persons): Regulation (EU) 2019/816 from April 17, 2019, which establishes a centralized database where EU member states can share criminal conviction records of non-EU citizens and stateless persons.

  3. Interoperability Framework: Regulation (EU) 2019/817 from May 20, 2019, which creates a system allowing different EU information systems related to borders and visas to work together and share data effectively.

These regulations work together to improve security and information sharing across EU member states regarding travel, immigration, and criminal records.

<p>4, (EU) 2016/399, (EU) 2016/1624 and (EU) 2017/2226 (OJ L 236, <br /> 19.9.2018, p. 1).<br /> (b) Regulation (EU) 2018/1241 of the European Parliament and of the Council of 12 September 2018 amending <br /> Regulation (EU) 2016/794 for the purpose of establishing a European Travel Information and Authorisation <br /> System (ETIAS) (OJ L 236, 19.9.2018, p. 72).<br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 139/144</p> <p>6.<br /> European Criminal Records Information System on third-country nationals and stateless persons<br /> Regulation (EU) 2019/816 of the European Parliament and of the Council of 17 April 2019 establishing <br /> a centralised system for the identification of Member States holding conviction information on third-country <br /> nationals and stateless persons (ECRIS-TCN) to supplement the European Criminal Records Information System and <br /> amending Regulation (EU) 2018/1726 (OJ L 135, 22.5.2019, p. 1).<br /> 7.<br /> Interoperability<br /> (a) Regulation (EU) 2019/817 of the European Parliament and of the Council of 20 May 2019 on establishing <br /> a framework for interoperability between EU information systems in the field of borders and visa and amending <br /> Regulations (EC) No 767/2008, (EU) 2016/399, (EU) 2017/2226, (EU) 2018/1240, (EU) 2018/1726 and (EU) <br /> 2018/1861 of the</p>
Show original text

This document references several European Union regulations about information systems used by police, courts, and immigration authorities. Specifically, it mentions Regulation (EU) 2019/818 from May 20, 2019, which creates a system allowing different EU databases to work together in law enforcement, justice, asylum, and migration matters. The regulation updates three earlier rules: (EU) 2018/1726, (EU) 2018/1862, and (EU) 2019/816. This text is from the Official Journal of the European Union, published July 12, 2024. The document also introduces Annex XI, which outlines technical documentation requirements that providers of general artificial intelligence models must submit. According to Article 53(1), all AI model providers must include specific information in their technical documentation, with the amount of detail depending on the model's size and potential risks.

<p>(EU) 2016/399, (EU) 2017/2226, (EU) 2018/1240, (EU) 2018/1726 and (EU) <br /> 2018/1861 of the European Parliament and of the Council and Council Decisions 2004/512/EC and <br /> 2008/633/JHA (OJ L 135, 22.5.2019, p. 27).<br /> (b) Regulation (EU) 2019/818 of the European Parliament and of the Council of 20 May 2019 on establishing <br /> a framework for interoperability between EU information systems in the field of police and judicial cooperation, <br /> asylum and migration and amending Regulations (EU) 2018/1726, (EU) 2018/1862 and (EU) 2019/816 (OJ <br /> L 135, 22.5.2019, p. 85).<br /> EN<br /> OJ L, 12.7.2024<br /> 140/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>ANNEX XI<br /> Technical documentation referred to in Article 53(1), point (a) — technical documentation for <br /> providers of general-purpose AI models<br /> Section 1<br /> Information to be provided by all providers of general-purpose AI models<br /> The technical documentation referred to in Article 53(1), point (a) shall contain at least the following information as <br /> appropriate to the size and risk profile of the model:<br /> 1.</p>
Show original text

All providers of general-purpose AI models must provide technical documentation that includes the following information, adjusted based on the model's size and risk level:

  1. Basic Model Information:
    (a) What tasks the model can perform and what types of AI systems it can be used in
    (b) Rules for acceptable use
    (c) When it was released and how it is distributed
    (d) How the model is built and how many parameters it has
    (e) What types of inputs and outputs it uses (such as text or images)
    (f) The license type

  2. Detailed Development Information:
    (a) Technical requirements needed to integrate the model into AI systems (instructions, infrastructure, tools)
    (b) How the model was designed and trained, including the methods used, key decisions made, what the model was optimized for, and why certain choices were made
    (c) Details about the training data, including: the type and source of data, how it was cleaned and filtered, the number of data points, what the data covers, how it was selected, and methods used to identify problems and bias in the data
    (d) Computing power used to train the model, how long training took, and other relevant training details
    (e) How much energy the model uses or is expected to use

<p>to be provided by all providers of general-purpose AI models<br /> The technical documentation referred to in Article 53(1), point (a) shall contain at least the following information as <br /> appropriate to the size and risk profile of the model:<br /> 1.<br /> A general description of the general-purpose AI model including:<br /> (a) the tasks that the model is intended to perform and the type and nature of AI systems in which it can be <br /> integrated;<br /> (b) the acceptable use policies applicable;<br /> (c) the date of release and methods of distribution;<br /> (d) the architecture and number of parameters;<br /> (e) the modality (e.g. text, image) and format of inputs and outputs;<br /> (f) the licence.<br /> 2.<br /> A detailed description of the elements of the model referred to in point 1, and relevant information of the process <br /> for the development, including the following elements:<br /> (a) the technical means (e.g. instructions of use, infrastructure, tools) required for the general-purpose AI model to <br /> be integrated in AI systems;<br /> (b) the design specifications of the model and training process, including training methodologies and techniques, <br /> the key design choices including the rationale and assumptions made; what the model is designed to optimise for <br /> and the relevance of the different parameters, as applicable;<br /> (c) information on the data used for training, testing and validation, where applicable, including the type and <br /> provenance of data and curation methodologies (e.g. cleaning, filtering, etc.), the number of data points, their <br /> scope and main characteristics; how the data was obtained and selected as well as all other measures to detect the <br /> unsuitability of data sources and methods to detect identifiable biases, where applicable;<br /> (d) the computational resources used to train the model (e.g. number of floating point operations), training time, <br /> and other relevant details related to the training;<br /> (e) known or estimated energy consumption of the model.</p>
Show original text

Providers of general-purpose AI models with systemic risk must provide the following information:

  1. Training Details: Report the computational resources used to train the model (such as the number of floating point operations), how long training took, and other relevant training information. Also include the model's energy consumption. If energy consumption is unknown, it can be estimated based on the computational resources used.

  2. Evaluation Information: Provide a detailed description of how the model was tested and evaluated, including the results. This should use available public evaluation tools and methods. The evaluation description must include the criteria used, the metrics measured, and how limitations were identified.

  3. Safety Testing: Where applicable, describe the measures taken to test the model for potential problems through internal and external testing (such as red teaming), as well as any adjustments made to improve the model, including alignment and fine-tuning.

  4. System Architecture: Where applicable, provide a detailed explanation of how the model's software components work together and integrate into the overall system.

Providers must also give downstream providers who use their model in AI systems technical documentation containing at least this information.

<p>identifiable biases, where applicable;<br /> (d) the computational resources used to train the model (e.g. number of floating point operations), training time, <br /> and other relevant details related to the training;<br /> (e) known or estimated energy consumption of the model.<br /> With regard to point (e), where the energy consumption of the model is unknown, the energy consumption may be <br /> based on information about computational resources used.<br /> Section 2<br /> Additional information to be provided by providers of general-purpose AI models with systemic risk<br /> 1.<br /> A detailed description of the evaluation strategies, including evaluation results, on the basis of available public <br /> evaluation protocols and tools or otherwise of other evaluation methodologies. Evaluation strategies shall include <br /> evaluation criteria, metrics and the methodology on the identification of limitations.<br /> 2.<br /> Where applicable, a detailed description of the measures put in place for the purpose of conducting internal and/or <br /> external adversarial testing (e.g. red teaming), model adaptations, including alignment and fine-tuning.<br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 141/144</p> <p>3.<br /> Where applicable, a detailed description of the system architecture explaining how software components build or <br /> feed into each other and integrate into the overall processing.<br /> EN<br /> OJ L, 12.7.2024<br /> 142/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p> <p>ANNEX XII<br /> Transparency information referred to in Article 53(1), point (b) — technical documentation for <br /> providers of general-purpose AI models to downstream providers that integrate the model into their <br /> AI system<br /> The information referred to in Article 53(1), point (b) shall contain at least the following:<br /> 1.</p>
Show original text

Providers of general-purpose AI models must give technical documentation to companies that use their models in AI systems. This documentation must include:

  1. Basic Model Information:
    - What tasks the model can perform and what types of AI systems it can be used in
    - Rules for acceptable use
    - When the model was released and how it is distributed
    - How the model works with other hardware or software, if applicable
    - Required software versions for using the model
    - The model's structure and number of parameters
    - What types of inputs and outputs it accepts (such as text or images) and their format
    - The model's license

  2. Model Development Details:
    - Technical tools and instructions needed to integrate the model into AI systems
    - The types of inputs and outputs the model accepts, their format, and maximum size (such as how much text it can process at once)
    - Information about the data used to train, test, and improve the model, including where the data came from and how it was prepared

<p>b) — technical documentation for <br /> providers of general-purpose AI models to downstream providers that integrate the model into their <br /> AI system<br /> The information referred to in Article 53(1), point (b) shall contain at least the following:<br /> 1.<br /> A general description of the general-purpose AI model including:<br /> (a) the tasks that the model is intended to perform and the type and nature of AI systems into which it can be <br /> integrated;<br /> (b) the acceptable use policies applicable;<br /> (c) the date of release and methods of distribution;<br /> (d) how the model interacts, or can be used to interact, with hardware or software that is not part of the model <br /> itself, where applicable;<br /> (e) the versions of relevant software related to the use of the general-purpose AI model, where applicable;<br /> (f) the architecture and number of parameters;<br /> (g) the modality (e.g. text, image) and format of inputs and outputs;<br /> (h) the licence for the model.<br /> 2.<br /> A description of the elements of the model and of the process for its development, including:<br /> (a) the technical means (e.g. instructions for use, infrastructure, tools) required for the general-purpose AI model to <br /> be integrated into AI systems;<br /> (b) the modality (e.g. text, image, etc.) and format of the inputs and outputs and their maximum size (e.g. context <br /> window length, etc.);<br /> (c) information on the data used for training, testing and validation, where applicable, including the type and <br /> provenance of data and curation methodologies.<br /> OJ L, 12.7.</p>
Show original text

This document outlines the criteria the European Commission will use to identify general-purpose AI models that pose systemic risks, as defined in Article 51. The Commission will evaluate these models based on seven key factors: (a) the number of parameters in the model; (b) the quality and size of the training data, measured in tokens; (c) the computational resources used for training, measured in floating point operations or indicated through training cost, time, or energy consumption; (d) the types of inputs and outputs the model can handle, such as text-to-text, text-to-image, or multi-modal formats, and whether it meets high-impact capability thresholds for each type; (e) benchmark test results and capability evaluations, including how many tasks it can perform without additional training, its ability to learn new tasks, its level of independence, and its scalability; (f) whether it has significant impact on the EU market, which is assumed if it has been made available to at least 10,000 registered business users in the EU; and (g) the total number of registered end-users.

<p>.g. context <br /> window length, etc.);<br /> (c) information on the data used for training, testing and validation, where applicable, including the type and <br /> provenance of data and curation methodologies.<br /> OJ L, 12.7.2024<br /> EN<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj<br /> 143/144</p> <p>ANNEX XIII<br /> Criteria for the designation of general-purpose AI models with systemic risk referred to in Article 51<br /> For the purpose of determining that a general-purpose AI model has capabilities or an impact equivalent to those set out in <br /> Article 51(1), point (a), the Commission shall take into account the following criteria:<br /> (a)<br /> the number of parameters of the model;<br /> (b)<br /> the quality or size of the data set, for example measured through tokens;<br /> (c)<br /> the amount of computation used for training the model, measured in floating point operations or indicated by <br /> a combination of other variables such as estimated cost of training, estimated time required for the training, or <br /> estimated energy consumption for the training;<br /> (d)<br /> the input and output modalities of the model, such as text to text (large language models), text to image, <br /> multi-modality, and the state of the art thresholds for determining high-impact capabilities for each modality, and <br /> the specific type of inputs and outputs (e.g. biological sequences);<br /> (e)<br /> the benchmarks and evaluations of capabilities of the model, including considering the number of tasks without <br /> additional training, adaptability to learn new, distinct tasks, its level of autonomy and scalability, the tools it has <br /> access to;<br /> (f)<br /> whether it has a high impact on the internal market due to its reach, which shall be presumed when it has been <br /> made available to at least 10 000 registered business users established in the Union;<br /> (g)<br /> the number of registered end-users.<br /> EN<br /> OJ L, 12.7.</p>
Show original text

A platform is considered to have significant reach when it has been made available to at least 10,000 registered business users in the European Union. Additionally, the number of registered end-users should be reported.

<p>to its reach, which shall be presumed when it has been <br /> made available to at least 10 000 registered business users established in the Union;<br /> (g)<br /> the number of registered end-users.<br /> EN<br /> OJ L, 12.7.2024<br /> 144/144<br /> ELI: http://data.europa.eu/eli/reg/2024/1689/oj</p>

Entities

10-day notification requirement technical_requirement

The requirement for a notified body to inform providers within 10 days when its designation has been suspended, restricted, or withdrawn.
  • notified body: Notified bodies must inform providers within 10 days when their designation has been suspended, restricted, or withdrawn.

10^25 floating point operations technical_requirement

The computational threshold for presuming a general-purpose AI model has high impact capabilities based on cumulative training computation.
  • Article 51: Article 51 establishes the computational threshold for presuming high impact capabilities.

1999/5/EC directive

Earlier directive on radio equipment that was repealed by Directive 2014/53/EU.
  • 2014/53/EU: Directive 2014/53/EU repeals Directive 1999/5/EC.

2 August 2026 legal_obligation

The general date of application for the Regulation.
  • This Regulation: The Regulation establishes 2 August 2026 as its general date of application.

2 August 2026 technical_requirement

Deadline by which at least one national AI regulatory sandbox must be operational in each Member State.
  • Member States: Member States must ensure operational sandboxes by the deadline of 2 August 2026.

2 August 2030 legal_obligation

Deadline for operators of high-risk AI systems intended for public authorities to comply with Regulation requirements.

2009/48/EC directive

Directive of the European Parliament and Council on the safety of toys, enacted on 18 June 2009.

2013/53/EU directive

Directive on recreational craft and personal watercraft, enacted on 20 November 2013, repealing Directive 94/25/EC.

2014/33/EU directive

Directive on the harmonisation of laws relating to lifts and safety components for lifts, enacted on 26 February 2014.

2014/34/EU directive

Directive on the harmonisation of laws relating to equipment and protective systems for potentially explosive atmospheres, enacted on 26 February 2014.

2014/53/EU directive

Directive on the harmonisation of laws relating to radio equipment making available on the market, enacted on 16 April 2014, repealing Directive 1999/5/EC.

2014/68/EU directive

Directive on the harmonisation of laws relating to pressure equipment making available on the market, enacted on 15 May 2014.

2017/746 regulation

A regulation establishing a governance framework for coordinating and supporting the application of AI-related rules at national and Union levels.
  • Board: The regulation establishes a Board composed of Member State representatives.
  • Scientific panel: The regulation establishes a scientific panel to integrate the scientific community.
  • Advisory forum: The regulation establishes an advisory forum for stakeholder input.
  • AI Office: The AI Office contributes to the implementation of the regulation.
  • EuroHPC Joint Undertaking: The regulation references synergies with the EuroHPC Joint Undertaking for building Union expertise.
  • Digital Europe Programme: The regulation references AI testing and experimentation facilities under the Digital Europe Programme.

2019 Ethics guidelines for trustworthy AI documentation

Non-binding ethical principles developed by the AI HLEG to ensure AI is trustworthy and ethically sound.
  • AI HLEG: The AI HLEG developed the 2019 Ethics guidelines for trustworthy AI.
  • Regulation: The Regulation references the 2019 Ethics guidelines for trustworthy AI as important context for the risk-based approach.

2024/1689 regulation

A regulation published in the Official Journal of the European Union on 12.7.2024, identified by ELI http://data.europa.eu/eli/reg/2024/1689/oj

94/25/EC directive

Earlier directive on recreational craft that was repealed by Directive 2013/53/EU.
  • 2013/53/EU: Directive 2013/53/EU repeals Directive 94/25/EC.

acceptable use policies legal_obligation

Policies that define acceptable uses of general-purpose AI models and must be documented.
  • technical documentation: Technical documentation must contain information about acceptable use policies applicable to the model.

access to personal data legal_obligation

Obligation for market surveillance authorities to obtain access to all personal data being processed.

accessibility requirements legal_obligation

Requirements ensuring full and equal access for persons with disabilities to high-risk AI systems in accordance with EU Directives 2016/2102 and 2019/882.

accessibility requirements technical_requirement

Requirements that the EU database must comply with to ensure accessibility for users.
  • EU database: The EU database must comply with applicable accessibility requirements.

accessible formats technical_requirement

Information and notifications must be provided in formats accessible to persons with disabilities.

accountability evaluation_criterion

An ethical principle for trustworthy AI systems.

accountability framework technical_requirement

A framework establishing responsibilities of management and staff regarding all aspects of high-risk AI system compliance and governance.
  • high-risk AI system: High-risk AI systems must establish an accountability framework setting out responsibilities of management and staff.

Accreditation certificate documentation

Document issued by a national accreditation body certifying that a conformity assessment body meets the requirements of Article 31.
  • National accreditation body: National accreditation bodies issue accreditation certificates attesting compliance with Article 31 requirements.

accuracy evaluation_criterion

Performance metric that high-risk AI systems should meet in accordance with their intended purpose and state of the art.
  • High-risk AI systems: High-risk AI systems are required to meet an appropriate level of accuracy.
  • AI regulatory sandbox: AI systems in the sandbox are assessed on accuracy as a relevant dimension.

Accuracy, robustness, and cybersecurity technical_requirement

Technical requirements that high-risk AI systems must achieve, including appropriate levels of performance consistency throughout their lifecycle.
  • High-risk AI systems: High-risk AI systems must be designed to achieve appropriate levels of accuracy, robustness, and cybersecurity.

adequate safeguards with respect to the protection of fundamental rights and freedoms technical_requirement

Required protections that third countries or international organizations must provide to qualify for exemption from the regulation.

Administrative fine legal_obligation

Monetary penalty imposed for non-compliance with AI regulations, with amounts determined by percentages or fixed amounts up to EUR 1,500,000, and reduced for SMEs and start-ups.
  • SMEs: Administrative fines apply to SMEs and start-ups at reduced percentages or amounts.
  • AI system: Administrative fines are imposed for infringements related to AI systems, with the system's purpose considered in penalty determination.
  • Market surveillance authorities: Market surveillance authorities are responsible for applying administrative fines for infringements.
  • National competent authorities: National competent authorities determine and impose administrative fines based on relevant circumstances and evaluation criteria.

administrative fines legal_obligation

Penalties imposed by Member States and supervisors on AI model providers and Union institutions for infringements of AI regulations, subject to procedural safeguards.
  • European Data Protection Supervisor: The European Data Protection Supervisor may impose administrative fines on Union institutions.
  • procedural safeguards: The exercise of powers to impose administrative fines is subject to appropriate procedural safeguards.
  • Article 101: Article 101 establishes the legal obligation for the Commission to impose administrative fines.
  • Regulation: The Regulation establishes administrative fines as penalties for infringements.

Administrative penalties and fines legal_obligation

Enforcement measures including effective, proportionate and dissuasive penalties for infringement of the Regulation.
  • Member States: Member States must lay down effective, proportionate and dissuasive penalties for infringement.
  • European Data Protection Supervisor: The European Data Protection Supervisor has the power to impose fines on Union institutions, agencies and bodies.

adversarial attacks technical_requirement

An AI-specific vulnerability involving attempts to manipulate AI system inputs to cause incorrect outputs or behavior.
  • high-risk AI system: Adversarial attacks represent an AI-specific vulnerability that threatens high-risk AI systems.

adversarial examples technical_requirement

Inputs designed to cause AI models to make mistakes, requiring control and response measures.
  • high-risk AI systems: Technical solutions must include measures to prevent and control adversarial examples and model evasion.

adversarial testing technical_requirement

A testing methodology including red teaming, model adaptations, alignment, and fine-tuning that providers must conduct and document either internally or through independent external testing.
  • provider: Providers must conduct and document adversarial testing of models prior to market placement.
  • providers of general-purpose AI models: Providers must describe measures for conducting adversarial testing and model adaptations.

advisory forum institution

An advisory body established to provide technical expertise and stakeholder input to the Commission and Board on standardisation and AI regulation matters, with members serving two-year terms and meeting at least twice yearly.

affected persons market_actor

Persons located in the Union affected by AI systems.

AI ai_system

A fast-evolving family of technologies that contributes to economic, environmental and societal benefits across industries, but may also generate risks and harm to public interests and fundamental rights.
  • Article 2 TEU: AI and its regulatory framework should be developed in accordance with Union values enshrined in Article 2 TEU.
  • Article 6 TEU: AI development should comply with fundamental rights and freedoms pursuant to Article 6 TEU and the Charter.
  • Treaty on European Union: AI regulatory framework is based on Union values and fundamental rights enshrined in the Treaty on European Union.

AI developers market_actor

Professionals and entities that develop AI systems and are expected to engage in interdisciplinary cooperation projects pursuing socially and environmentally beneficial outcomes.
  • AI system: AI developers are responsible for creating AI systems and should cooperate interdisciplinarily in their development.

AI HLEG institution

An independent high-level expert group appointed by the Commission to develop ethical guidelines and principles for trustworthy AI.

AI labeling requirement legal_obligation

The obligation to label content generated by AI systems under the regulation to mitigate systemic risks.

AI literacy legal_obligation

A requirement to equip providers, deployers, and affected persons with necessary knowledge and understanding of AI systems' technical elements, application measures, and impacts to ensure informed decision-making and appropriate compliance.
  • Regulation 2024/1689: AI literacy requirements are established within Regulation 2024/1689.
  • AI system development phase: AI literacy includes understanding the correct application of technical elements during AI system development.
  • Regulation: The Regulation requires AI literacy measures to ensure appropriate compliance and correct enforcement.
  • Article 4: Article 4 establishes the legal obligation for AI literacy.
  • downstream provider: AI literacy requirements apply to providers and deployers of AI systems.

AI literacy evaluation_criterion

Skills, knowledge, and understanding that enable providers, deployers, and affected persons to make informed decisions about AI deployment and understand opportunities and risks.

AI literacy technical_requirement

A requirement to promote understanding of AI among persons involved in the development, operation, and use of AI systems.
  • codes of conduct: Codes of conduct require promoting AI literacy among persons involved in AI development and use.

AI literacy training technical_requirement

Required competence level for persons assigned to implement instructions for use and human oversight of high-risk AI systems.
  • deployers: Deployers must ensure that persons implementing instructions for use have adequate AI literacy, training, and authority.

AI model ai_model

General-purpose AI models that generate content and can be implemented with techniques to facilitate fulfillment of obligations.
  • marking obligation: AI models can be implemented with techniques to facilitate fulfillment of marking obligations.

AI models ai_model

General-purpose AI models subject to regulatory requirements that incorporate ethical principles and may demonstrate compliance through codes of practice or harmonised standards.
  • Transparency and explainability: AI models should incorporate ethical principles including transparency and explainability in their design.
  • This Regulation: This Regulation regulates and imposes requirements and obligations on AI models.
  • codes of practice: Providers of general-purpose AI models can demonstrate compliance using codes of practice as alternative adequate means.
  • harmonised standards: Providers of general-purpose AI models can demonstrate compliance using harmonised standards.
  • Regulation /2024/1689/oj: The regulation applies to AI models with specific exemptions for scientific research, pre-market development, and certain licensing contexts.

AI Office institution

A Union-level institution established by Commission Decision responsible for monitoring compliance of AI model and system providers, overseeing classification of systemic risk models, encouraging codes of practice, and providing coordination support for market surveillance and joint investigations.
  • technical documentation: Technical documentation must be made available upon request to the AI Office.
  • training data summary: The AI Office should provide a template for the training data summary that providers must use.
  • copyright compliance policy: The AI Office monitors whether providers have fulfilled copyright compliance obligations.
  • benchmarks and indicators for model capability: The AI Office engages with stakeholders to establish thresholds, tools and benchmarks for assessing high-impact capabilities.
  • general-purpose AI model with systemic risk: General-purpose AI models with systemic risks are subject to oversight and notification requirements by the AI Office.
  • notification requirement: The notification requirement mandates that providers inform the AI Office within two weeks of meeting systemic risk criteria.
  • general-purpose AI model with systemic risk: The AI Office monitors and receives alerts about general-purpose AI models that should be classified as having systemic risk.
  • scientific panel: The scientific panel provides qualified alerts to the AI Office regarding systemic risks in AI models and supports its monitoring activities.
  • codes of practice: The AI Office encourages, facilitates, and regularly monitors the development and implementation of codes of practice at Union level.
  • national competent authorities: The AI Office collaborates with relevant national competent authorities on codes of practice.
  • Scientific Panel: The AI Office may consult with the Scientific Panel for drawing up codes of practice.
  • Codes of practice: The AI Office encourages, facilitates, and regularly monitors the development and implementation of codes of practice at Union level.
  • European harmonised standard: AI Office assesses harmonised standards as suitable to cover relevant obligations.
  • Commission Decision (24.1.2024): The Commission Decision established the AI Office.
  • 2017/746: The AI Office contributes to the implementation of the regulation.
  • general-purpose AI models: The AI Office conducts monitoring activities regarding general-purpose AI models.
  • market surveillance authorities: The AI Office provides coordination support for joint investigations conducted by market surveillance authorities.
  • Regulation (EU) 2019/1020: The AI Office operates as a market surveillance authority under the powers provided by Regulation (EU) 2019/1020.
  • market surveillance authority: Market surveillance authorities should cooperate with the AI Office to carry out evaluations of compliance for general-purpose AI systems.
  • general-purpose AI model: The AI Office monitors compliance of general-purpose AI models, initiates structured dialogues regarding those with systemic risks, and can enforce access to related information during investigations.
  • general-purpose AI models: The AI Office monitors and ensures compliance with rules applicable to general-purpose AI models.
  • market surveillance authorities: Market surveillance authorities can request the AI Office to investigate possible infringements.
  • downstream providers: Downstream providers can lodge complaints with the AI Office about infringements of rules on general-purpose AI model providers.
  • documentation and information: The AI Office can request documentation and information from providers of general-purpose AI models.
  • general-purpose AI models: The AI Office conducts evaluations of general-purpose AI models and can involve independent experts to carry out these evaluations.
  • classification rules and procedures: The AI Office ensures classification rules and procedures remain current with technological developments.
  • Commission Decision of 24 January 2024: The AI Office was established by the Commission Decision of 24 January 2024.
  • Regulation 2024/1689: Regulation 2024/1689 establishes the AI Office as the institution responsible for monitoring compliance.
  • voluntary model terms: The AI Office develops and recommends voluntary model terms for contracts.
  • fundamental rights impact assessment: AI Office shall develop templates and questionnaires to facilitate compliance with fundamental rights impact assessment obligations.
  • Obligations for providers of general-purpose AI models: Technical documentation must be provided to the AI Office upon request.
  • authorised representative: Authorised representatives must provide documentation and information to the AI Office upon request and can be addressed on compliance issues.
  • incident reporting: Incident reporting obligation requires notification to the AI Office.
  • Codes of practice: The AI Office encourages, facilitates, and regularly monitors the development and implementation of codes of practice at Union level.
  • general-purpose AI models: The AI Office invites all providers of general-purpose AI models to participate in and adhere to codes of practice.
  • codes of practice: The AI Office encourages, facilitates, and regularly monitors the development and implementation of codes of practice at Union level.
  • general-purpose AI models: The AI Office invites providers of general-purpose AI models to adhere to codes of practice.
  • National competent authorities: National competent authorities must inform the AI Office of decisions to suspend testing or sandbox participation.
  • AI regulatory sandbox: The AI Office maintains a publicly available list of planned and existing sandboxes and provides support and guidance.
  • AI regulatory sandboxes: The AI Office maintains and publishes a list of planned and existing AI regulatory sandboxes.
  • National competent authorities: National competent authorities must submit annual reports to the AI Office regarding sandbox implementation.
  • standardised templates: AI Office is required to provide standardized templates for areas covered by the Regulation.
  • Commission: The Commission establishes and develops the AI Office for Union expertise in AI.
  • Regulation 2024/1689: Regulation 2024/1689 establishes the AI Office as the institution responsible for monitoring compliance.
  • European Artificial Intelligence Board: The AI Office attends the Board's meetings and participates in its operations without voting rights.
  • Board: The AI Office must inform the Board of monitoring measures and alerts, and consult it before conducting evaluations.
  • AI regulatory sandboxes: The AI Office supports national competent authorities in establishing and developing AI regulatory sandboxes.
  • scientific panel: The scientific panel provides qualified alerts to the AI Office regarding systemic risks in AI models and supports its monitoring activities.
  • scientific panel: The scientific panel provides qualified alerts to the AI Office regarding systemic risks in AI models and supports its monitoring activities.
  • Union safeguard procedure: The AI Office carries out duties in the context of the Union safeguard procedure pursuant to Article 81.
  • General-purpose AI model: The AI Office monitors compliance of general-purpose AI models, initiates structured dialogues regarding those with systemic risks, and can enforce access to related information during investigations.
  • Regulation (EU) 2019/1020: The AI Office operates as a market surveillance authority under the powers provided by Regulation (EU) 2019/1020.
  • market surveillance authority: Market surveillance authorities cooperate with and submit requests to the AI Office for compliance evaluations.
  • general-purpose AI model: The AI Office monitors compliance of general-purpose AI models, initiates structured dialogues regarding those with systemic risks, and can enforce access to related information during investigations.
  • The Commission: The Commission entrusts the AI Office with implementation of supervision and enforcement tasks.
  • Commission: The Commission exercises powers through the AI Office for assessing systemic risks.
  • Board: The AI Office must inform the Board of monitoring measures and alerts, and consult it before conducting evaluations.
  • Board: The AI Office must inform the Board of monitoring measures and alerts, and consult it before conducting evaluations.
  • provider of the general-purpose AI model: The AI Office may initiate a structured dialogue with the provider before sending a request for information.
  • Article 92: Article 92 establishes the power of the AI Office to conduct evaluations of general-purpose AI models.
  • general-purpose AI model: The AI Office monitors compliance of general-purpose AI models, initiates structured dialogues regarding those with systemic risks, and can enforce access to related information during investigations.
  • Member States: The AI Office and Member States jointly encourage and facilitate the drawing up of codes of conduct.
  • codes of conduct: The AI Office facilitates the drawing up of codes of conduct for AI systems.
  • SMEs and start-ups: The AI Office takes into account the specific interests and needs of SMEs and start-ups when facilitating codes of conduct.
  • Commission: The AI Office can request the Commission to update guidelines.
  • Commission: The AI Office can request the Commission to update guidelines.
  • Regulation: The Regulation establishes the framework for the AI Office's implementation and enforcement.
  • Commission: The Commission evaluates the functioning, powers, competences, and resources of the AI Office.
  • risk level evaluation methodology: The AI Office shall develop an objective and participative methodology for evaluation of risk levels.

AI Regulation regulation

A regulatory framework governing AI systems implementation in the European Union that establishes requirements for health, safety, and fundamental rights protection while supporting innovation through cooperation and monitoring by competent authorities.
  • Charter of Fundamental Rights of the European Union: The AI Regulation is applied in accordance with the values enshrined in the Charter.
  • AI systems: The regulation establishes uniform obligations for operators developing, importing, or using AI systems.
  • Member States: The regulation prevents Member States from imposing restrictions on AI development, marketing, and use unless explicitly authorized.
  • fundamental rights: The regulation requires protection of fundamental rights including democracy, rule of law, and environmental protection.
  • free movement of AI-based goods and services: The regulation ensures the free movement and cross-border circulation of AI-based goods and services.
  • advisory forum: The advisory forum contributes to tasks under the AI Regulation.
  • Article 96: Article 96 is contained within the AI Regulation.
  • Commission: The Commission is required to develop guidelines on the practical implementation of the AI Regulation.

AI Regulation 2024/1689 regulation

The main regulation establishing harmonised rules for AI systems in the Union, including requirements for placing on market, prohibitions, and transparency rules.

AI regulatory sandbox technical_requirement

A controlled framework for experimentation and testing of innovative AI systems under strict regulatory oversight before market placement.
  • Member States: Member States are required to establish at least one AI regulatory sandbox at national level through their competent authorities.
  • AI systems: The regulatory sandbox facilitates development and testing of innovative AI systems before market placement.
  • sandbox plan: The sandbox plan describes the objectives, conditions, timeframe, methodology and requirements for activities within the AI regulatory sandbox.
  • real-world testing plan: The real-world testing plan describes the methodology and scope for testing within the AI regulatory sandbox.

AI regulatory sandbox institution

A controlled environment established by national competent authorities in physical, digital, or hybrid form to enable providers to develop, train, validate and test innovative AI systems under supervision before market placement.
  • Member States: Member States are required to establish at least one AI regulatory sandbox at national level through their competent authorities.
  • national competent authorities: National competent authorities are responsible for establishing and managing AI regulatory sandboxes.
  • innovative AI systems: AI regulatory sandboxes provide controlled environments for development, training, testing, and validation of innovative AI systems.
  • SMEs: AI regulatory sandboxes should be widely accessible with particular attention to accessibility for SMEs and start-ups.
  • legal certainty: A primary objective of AI regulatory sandboxes is to enhance legal certainty for innovators.
  • regulatory learning: AI regulatory sandboxes aim to facilitate regulatory learning for authorities and undertakings through evidence-based experimentation.
  • innovators: Innovators and prospective providers participate in AI regulatory sandboxes to address legal uncertainty in AI development.
  • AI systems: The AI regulatory sandbox supervises the development, training, testing, and validation of AI systems in real-world conditions.
  • national competent authorities: National competent authorities establish and manage AI regulatory sandboxes.
  • Regulation: AI regulatory sandboxes are established under and governed by the Regulation.
  • real world conditions testing: AI regulatory sandboxes may permit testing of AI systems in real world conditions upon agreement.
  • Regulation (EU) 2016/679: The AI regulatory sandbox operates subject to GDPR requirements and conditions.
  • competent authorities: Competent authorities establish and operate AI regulatory sandboxes.
  • European Commission: The Commission may provide technical support, advice and tools for establishment and operation of sandboxes.
  • European Data Protection Supervisor: The European Data Protection Supervisor may establish an AI regulatory sandbox for Union institutions.
  • This Regulation: The sandbox operates under the requirements and obligations established by the regulation.
  • Competent authorities: Competent authorities provide supervision, guidance, and support within the sandbox and publish project summaries on their websites.
  • National competent authorities: National competent authorities exercise supervisory powers and make decisions regarding suspension or termination of sandbox participation.
  • AI Office: The AI Office maintains a publicly available list of planned and existing sandboxes and provides support and guidance.
  • National competent authorities: Sandboxes are designed to facilitate cross-border cooperation between national competent authorities.
  • accuracy: AI systems in the sandbox are assessed on accuracy as a relevant dimension.
  • robustness: AI systems in the sandbox are assessed on robustness as a relevant dimension.
  • cybersecurity: AI systems in the sandbox are assessed on cybersecurity as a relevant dimension.
  • fundamental rights: The sandbox requires protection of fundamental rights during AI system testing.
  • Union law on the protection of personal data: The sandbox operates under and must comply with Union data protection law.
  • personal data protection measures: The sandbox requires implementation of appropriate technical and organisational measures to protect personal data.
  • law enforcement authorities: Law enforcement processing of personal data in sandboxes is subject to specific Union or national law and cumulative conditions.
  • processing logs: Logs of personal data processing are maintained for the duration of sandbox participation.
  • Article 58: Article 58 establishes the AI regulatory sandbox framework.

AI regulatory sandbox legislative_procedure

A specific regime for testing high-risk AI systems in real world conditions with participation of providers or prospective providers.
  • high-risk AI systems: High-risk AI systems may be tested within an AI regulatory sandbox regime.

AI regulatory sandboxes technical_requirement

Controlled testing environments established by national competent authorities with AI Office support to test AI systems under real-world conditions in compliance with regulatory requirements.
  • Regulation: The Regulation includes provisions on AI regulatory sandboxes and testing in real world conditions.
  • AI Office: The AI Office supports national competent authorities in establishing and developing AI regulatory sandboxes.

AI regulatory sandboxes institution

Regulatory frameworks established by Member States to support AI innovation, facilitate compliance testing, and provide priority access to SMEs and start-ups for developing and testing AI systems under specific legal conditions.
  • Member States: Member States shall establish and maintain at least one AI regulatory sandbox at national level to support innovation.
  • SMEs: SMEs and start-ups are eligible to access AI regulatory sandboxes with priority access.
  • this Regulation: AI regulatory sandboxes operate under and must comply with the regulation.
  • National competent authorities: National competent authorities establish, operate, supervise, and control the operation of AI regulatory sandboxes.
  • health and safety risks: Significant risks to health and safety identified during AI system testing must result in adequate mitigation.
  • Article 58: Article 58 establishes the legal framework governing the detailed arrangements and functioning of AI regulatory sandboxes.
  • implementing acts: Implementing acts establish common principles and detailed arrangements for the operation of AI regulatory sandboxes.
  • AI system: AI regulatory sandboxes are designed to test and supervise AI systems.
  • SMEs: AI regulatory sandboxes provide free access to SMEs, including start-ups.
  • conformity assessment obligations: AI regulatory sandboxes facilitate providers in complying with conformity assessment obligations under the Regulation.
  • codes of conduct: AI regulatory sandboxes facilitate the voluntary application of codes of conduct referenced in Article 95.
  • notified bodies: AI regulatory sandboxes facilitate the involvement of notified bodies in the AI ecosystem.
  • SMEs: AI regulatory sandboxes facilitate participation of SMEs and start-ups with simplified procedures and clear communication.
  • Member State: Member States may establish AI regulatory sandboxes with participation recognized uniformly across the Union.
  • European Data Protection Supervisor: The European Data Protection Supervisor may establish AI regulatory sandboxes with effects recognized across the Union.
  • high-risk AI systems: High-risk AI systems can be tested in AI regulatory sandboxes under specific conditions.
  • Article 62: Article 62 establishes priority access measures for SMEs and start-ups to AI regulatory sandboxes.

AI regulatory sandboxes regulation

Regulatory frameworks that allow testing of innovative AI products, services, and business models under supervised conditions.
  • AI Office: The AI Office maintains and publishes a list of planned and existing AI regulatory sandboxes.
  • Commission: The Commission develops dedicated interfaces and coordinates with national competent authorities on AI regulatory sandboxes.
  • Article 58: Article 58 establishes detailed arrangements for and functioning of AI regulatory sandboxes.
  • Article 62(1), point (c): Article 62(1), point (c) provides for non-binding guidance on conformity of innovative AI products and services within sandboxes.

AI regulatory sandboxes and testing in real world conditions technical_requirement

Provisions that apply to AI systems falling within the Regulation's scope when placed on the market or put into service as a result of research and development activity.
  • Regulation: The Regulation includes provisions on AI regulatory sandboxes and testing in real world conditions.

AI system ai_system

A machine-based system capable of operating with varying levels of autonomy to infer outputs such as predictions, content, recommendations, or decisions that can influence physical and virtual environments, subject to risk-based regulatory requirements when placed on the market or put into service in the Union.
  • machine learning: AI systems can be built using machine learning approaches as one of the key techniques enabling inference.
  • logic- and knowledge-based approaches: AI systems can be built using logic- and knowledge-based approaches as techniques enabling inference.
  • deployer: Deployers use AI systems under their authority and may be affected by the system's outputs.
  • Regulation: The Regulation governs AI systems placed on the market or put into service, with exclusions for military and national security purposes.
  • this Regulation: The regulation applies to AI systems whose output is intended for use in the Union, with exemptions for free and open-source systems unless they are high-risk.
  • Regulation: The Regulation governs AI systems placed on the market or put into service, with exclusions for military and national security purposes.
  • military, defence or national security purposes: AI systems used for military, defence, or national security purposes are excluded from the scope of this Regulation.
  • Regulation: AI systems placed on the market or put into service for civilian or law enforcement purposes must comply with the Regulation's requirements and obligations.
  • Regulation: AI systems placed on the market or put into service for civilian or law enforcement purposes must comply with the Regulation's requirements and obligations.
  • Union law: Research and development activities involving AI systems must be conducted in accordance with applicable Union law.
  • high risk AI system: AI systems with significant adverse impact on fundamental rights are classified as high risk.
  • fundamental rights: AI systems can have adverse impacts on fundamental rights protected by the Charter.
  • Union harmonisation legislation: Union harmonisation legislation addresses safety risks generated by products including AI systems as digital components.
  • high-risk use: An AI system can be classified as high-risk use when deployed in contexts listed in the regulation's annex.
  • high-risk uses: AI systems may be classified as high-risk based on their intended use and characteristics.
  • EU database: AI systems must be registered in the EU database established under the Regulation.
  • marking obligation: AI systems are subject to marking obligations for generated or manipulated content.
  • Union data protection law: AI systems used in real-world testing must comply with Union data protection law regarding data subject rights.
  • Member States: Member States are encouraged to support development of AI systems with socially and environmentally beneficial outcomes.
  • AI developers: AI developers are responsible for creating AI systems and should cooperate interdisciplinarily in their development.
  • general-purpose AI model: AI system is based on a general-purpose AI model.
  • Article 3: Article 3 provides the definition of AI system for purposes of the regulation.
  • intended purpose: Intended purpose defines how an AI system is intended to be used by the provider.
  • safety component: Safety components are components of AI systems that fulfill safety functions.
  • Prohibited AI practices: Prohibited AI practices apply to AI systems placed on market, put into service, or used.
  • social score: AI systems evaluate and classify natural persons based on social behavior resulting in social scores.
  • profiling of natural persons: An AI system that performs profiling of natural persons is always considered high-risk.
  • fundamental rights: AI systems can have impact on fundamental rights or give rise to significant concerns regarding such harm.
  • national competent authorities: National competent authorities receive reports and documented allegations regarding potential harms from AI systems.
  • Notified bodies: Notified bodies conduct conformity assessment activities on AI systems.
  • Section 2: AI systems must meet the requirements set out in Section 2 to maintain certificate validity.
  • Annex III: AI systems covered by Annex III are subject to a four-year certificate validity period.
  • provider of the system: The provider of the system is responsible for ensuring AI system compliance through corrective action.
  • transparency obligations: Transparency obligations apply to deployers of AI systems that generate or manipulate content.
  • AI regulatory sandboxes: AI regulatory sandboxes are designed to test and supervise AI systems.
  • technical documentation: Complete descriptions of AI system training, testing, and validation are documented in technical documentation.
  • Article 61: Article 61 governs the treatment of AI system predictions and decisions in real world conditions testing through consent and reversal arrangements.
  • market surveillance authority: Market surveillance authority evaluates AI systems for compliance with regulatory requirements and proper high-risk classification.
  • compliance with requirements and obligations: AI systems must comply with requirements and obligations laid down in the Regulation.
  • Regulation (EU) 2024/1689: Regulation establishes requirements and obligations for AI systems.
  • operator: Operator must ensure corrective action is taken for all AI systems made available on the Union market.
  • market surveillance authority: Market surveillance authority evaluates AI systems for compliance with regulatory requirements and proper high-risk classification.
  • Article 5: AI systems must comply with the prohibition of AI practices referred to in Article 5.
  • Article 40: Harmonised standards referenced in Article 40 confer presumption of conformity for AI systems.
  • Article 41: Common specifications referenced in Article 41 confer presumption of conformity for AI systems.
  • Article 50: Article 50 establishes compliance requirements applicable to AI systems.
  • Article 18 of Regulation (EU) 2019/1020: Article 18 establishes procedural rights for operators concerned with AI system market surveillance measures.
  • high-risk AI system: AI systems may be classified as high-risk based on evaluation criteria.
  • provider: The provider places the AI system on the market or into service and is responsible for its classification.
  • Article 99: Non-compliant AI system providers are subject to fines under Article 99.
  • Chapter III, Section 2: AI systems must comply with requirements established in Chapter III, Section 2.
  • Commission: The Commission evaluates AI systems for compliance with Union law and decides on appropriate measures.
  • Article 5: Article 5 contains prohibitions on specific AI practices that AI systems must comply with.
  • Member State: Member States shall take restrictive measures such as requiring withdrawal of non-compliant AI systems from their market.
  • Administrative fine: Administrative fines are imposed for infringements related to AI systems, with the system's purpose considered in penalty determination.
  • administrative fine: The purpose of the AI system concerned is considered when determining the amount of administrative fines.
  • Article 11(1): Article 11(1) establishes requirements that apply to AI systems.
  • Technical documentation: Technical documentation contains required information about AI system design, development, and compliance, and is examined to assess conformity.
  • Chapter III, Section 2: AI systems must comply with requirements established in Chapter III, Section 2.
  • Training data sets: Training data sets are used to develop and train AI systems and must be documented.
  • Validation and testing procedures: Validation and testing procedures are used to evaluate AI system compliance with requirements.
  • System architecture: System architecture describes the structural design and component integration of an AI system.
  • Regulation 2024/1689: AI systems must be in conformity with Regulation 2024/1689.
  • Regulation (EU) 2016/679: AI systems processing personal data must comply with GDPR (Regulation 2016/679).
  • Regulation (EU) 2018/1725: AI systems processing personal data must comply with Regulation 2018/1725.
  • Directive (EU) 2016/680: AI systems processing personal data must comply with Directive 2016/680.
  • Article 72: AI systems are subject to post-market monitoring requirements specified in Article 72.
  • quality management system: AI systems are covered by and subject to the quality management system.
  • Union technical documentation assessment certificate: The certificate contains information necessary to evaluate the conformity and control of the AI system.
  • data used to train the AI system: The quality of training data is a requirement for AI system conformity assessment.
  • intended purpose: Changes to the AI system must not affect its intended purpose or compliance with requirements.
  • general-purpose AI model: The general-purpose AI model is designed to be integrated into AI systems.

AI system development phase technical_requirement

The stage during which technical elements of AI systems are applied and developed, requiring correct application and understanding.
  • AI literacy: AI literacy includes understanding the correct application of technical elements during AI system development.

AI system for criminal risk assessment ai_system

An AI system designed to make risk assessments of natural persons to assess or predict the risk of criminal offence based on profiling or personality traits.

AI system for emotion inference ai_system

An AI system used to infer emotions of natural persons in workplace and education institutions, with exceptions for medical or safety reasons.

AI system for facial recognition database creation ai_system

An AI system that creates or expands facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.

AI system output interpretation technical_requirement

The suitable ways in which to interpret and understand the results and decisions produced by AI systems.

AI system providers market_actor

Entities responsible for developing and providing AI systems that generate synthetic content.

AI systems ai_system

Artificial intelligence systems deployed across various economic sectors and society that are subject to harmonised rules for development, placing on market, and use, including requirements for transparency, technical documentation, record-keeping, and compliance with safety and ethical standards.
  • AI Regulation: The regulation establishes uniform obligations for operators developing, importing, or using AI systems.
  • trustworthy AI: AI systems are evaluated against the criterion of being trustworthy and safe.
  • New Legislative Framework: Harmonised rules laid down in the New Legislative Framework apply across sectors to AI systems.
  • Regulation: The Regulation governs the development, use, enforcement, and market surveillance of AI systems through ethical principles and requirements.
  • personal data: AI systems may involve the processing of personal data in their design, development, or use.
  • Regulation (EU) 2022/2065: Regulation (EU) 2022/2065 sets out provisions regarding the liability of providers of intermediary services related to AI systems.
  • solely automated individual decision-making: AI systems are subject to rights and guarantees related to solely automated individual decision-making including profiling.
  • Regulation: The Regulation governs the development, use, enforcement, and market surveillance of AI systems through ethical principles and requirements.
  • Transparency and explainability: The principle of transparency and explainability applies to how AI systems should operate and communicate with humans.
  • Diversity, non-discrimination and fairness: The principle of diversity and fairness applies to the development and use of AI systems.
  • Social and environmental well-being: The principle of social and environmental well-being applies to sustainable AI development and deployment.
  • Prohibition of manipulative or exploitative AI-enabled practices: The prohibition applies to AI systems that distort behavior and cause significant harm.
  • Vulnerable persons: AI systems are restricted in their use with vulnerable persons to prevent exploitation.
  • Union harmonisation legislation: AI systems are subject to Union harmonisation legislation and complementary regulatory requirements.
  • general-purpose AI models: General-purpose AI models are distinguished from AI systems as essential components that require additional elements to become functional systems.
  • general-purpose AI models that pose systemic risks: Obligations for systemic risk models apply when these models are integrated into or form part of AI systems.
  • This Regulation: The regulation applies to AI systems placed on the market or put into service in the Union, establishing requirements and obligations with specific exemptions.
  • very large online platforms: AI systems embedded into designated very large online platforms are subject to the risk-management framework provided in Regulation (EU) 2022/2065.
  • very large online search engines: AI systems embedded into designated very large online search engines are subject to the risk-management framework provided in Regulation (EU) 2022/2065.
  • Regulation (EU) 2022/2065: AI systems are subject to this regulation and may be provided as intermediary services within its scope.
  • AI regulatory sandbox: The regulatory sandbox facilitates development and testing of innovative AI systems before market placement.
  • AI regulatory sandbox: The AI regulatory sandbox supervises the development, training, testing, and validation of AI systems in real-world conditions.
  • substantial modification: AI systems may undergo substantial modifications that trigger new conformity assessment requirements.
  • providers and prospective providers: Providers develop and test AI systems within the regulatory sandbox.
  • SMEs: SMEs are providers and deployers of AI systems.
  • this Regulation: The regulation applies to AI systems placed on the market or put into service in the Union, establishing requirements and obligations with specific exemptions.
  • market surveillance authorities: Market surveillance authorities assess the risk posed by AI systems and may take measures when they present a risk.
  • documentation: Documentation is created under the regulation for AI systems and must be accessible to authorities.
  • Union financial services law: Union financial services law applies to AI systems used by regulated financial institutions.
  • prohibited practices: Prohibited practices restrict the placement, putting into service, and use of AI systems.
  • transparency requirements: Transparency requirements mandate disclosure for AI systems made available on the market.
  • Directive 2013/36/EU: Directive 2013/36/EU establishes framework for supervising AI systems used by financial institutions.
  • Union's Ethics Guidelines for Trustworthy AI: AI models are encouraged to apply additional requirements related to the Union's Ethics Guidelines for Trustworthy AI on a voluntary basis.
  • voluntary codes of conduct: Voluntary codes of conduct are developed for and applied to AI systems to ensure effectiveness through clear objectives and key performance indicators.
  • Regulation (EU) 2023/988: This regulation serves as a safety net applying to non-high-risk AI products to ensure safety when placed on the market or put into service.
  • Regulation /2024/1689/oj: The regulation applies to AI systems with specific exemptions for scientific research, pre-market development, and personal non-professional use.
  • Annex III: AI systems are classified and fall under the scope of Annex III based on their risk characteristics.
  • Article 96: Article 96 provides guidelines for practical implementation of AI system classification.
  • Notification procedure: The notification procedure applies to conformity assessment bodies assessing specific types of AI systems.
  • Obligations for providers of general-purpose AI models: Providers must make information available to providers of AI systems who intend to integrate general-purpose AI models.
  • general-purpose AI model: General-purpose AI models are designed to be integrated into AI systems placed on the market or put into service.
  • Regulation (EU) 2019/1020: Regulation (EU) 2019/1020 governs market surveillance and control of AI systems.
  • Regulation (EU) 2024/1689: The Artificial Intelligence Act applies to AI systems placed on the market or put into service.
  • Article 111: Article 111 requires AI systems to be brought into compliance with the Regulation by 31 December 2030.
  • voluntary codes of conduct: Voluntary codes of conduct apply to AI systems other than high-risk AI systems.
  • Regulation 2024/1689: The regulation governs the development and deployment of AI systems.
  • notified body: The notified body may conduct additional tests of AI systems for which a certificate was issued.

AI systems as safety components ai_system

Artificial Intelligence systems that function as safety components within the meaning of Regulation (EU) 2024/1689.

AI systems for administration of justice and democratic processes ai_system

Artificial intelligence systems intended for use in judicial administration and democratic processes, classified as high-risk.
  • high-risk AI systems: AI systems for administration of justice and democratic processes are classified as high-risk due to their significant impact on democracy and rule of law.

AI systems for alternative dispute resolution ai_system

AI systems used by alternative dispute resolution bodies that produce legal effects for parties involved in proceedings.
  • high-risk AI systems: AI systems used in alternative dispute resolution with legal effects are classified as high-risk.

AI systems for benefit determination ai_system

AI systems used by public authorities to determine whether essential public assistance benefits and services should be granted, denied, reduced, revoked or reclaimed.
  • high-risk AI systems: AI systems used for determining public assistance benefits are classified as high-risk due to their significant impact on persons' livelihood and fundamental rights.
  • essential public assistance benefits and services: AI systems are used in the determination process for essential public assistance benefits and services.

AI systems for credit evaluation ai_system

AI systems used to evaluate credit scores or creditworthiness of natural persons, determining access to financial resources and essential services.
  • high-risk AI systems: Credit evaluation AI systems are classified as high-risk due to their impact on access to financial resources and essential services.

AI systems for credit scoring ai_system

AI systems used to evaluate the credit score or creditworthiness of natural persons, determining access to financial resources and essential services.
  • high-risk AI systems: AI systems evaluating credit score or creditworthiness are classified as high-risk systems.

AI systems for creditworthiness assessment ai_system

AI systems intended to evaluate creditworthiness of natural persons or establish credit scores, excluding fraud detection systems.

AI systems for criminal detection ai_system

AI systems authorized by law to detect, prevent, investigate or prosecute criminal offences with exemptions from certain transparency requirements.

AI systems for detection and identification of natural persons ai_system

AI systems used in migration, asylum or border control management for detecting, recognising or identifying natural persons.

AI systems for educational assessment ai_system

AI systems intended to assess the appropriate level of education that an individual will receive or access in educational and vocational training institutions at all levels.

AI systems for election influence ai_system

AI systems designed or intended to influence election or referendum outcomes or the voting behavior of natural persons, classified as high-risk with limited exceptions.
  • Regulation (EU) 2024/900: The regulation provides rules addressing risks of undue external interference with voting rights through AI systems.
  • Article 39 of the Charter: Article 39 enshrines the right to vote, which AI systems intended to influence elections must not undermine.
  • high-risk AI systems: AI systems intended to influence elections or referenda are classified as high-risk.
  • Regulation 2024/1689: Regulation 2024/1689 governs AI systems intended to influence election outcomes or voting behaviour.

AI systems for emergency call evaluation ai_system

AI systems used to evaluate and classify emergency calls and dispatch emergency first response services including police, firefighters, and medical aid.
  • high-risk AI systems: Emergency call evaluation and dispatch AI systems are classified as high-risk due to critical decisions affecting life, health, and property.

AI systems for emergency call evaluation and dispatch ai_system

AI systems designed to evaluate and classify emergency calls by natural persons or to dispatch and establish priority in emergency first response services including police, firefighters, and medical aid.

AI systems for emotional state detection in workplace and education ai_system

AI systems intended to detect the emotional state of individuals in situations related to workplace and education contexts.

AI systems for employment decisions ai_system

AI systems intended to make decisions affecting work-related relationships, promotions, terminations, task allocation, and performance monitoring.

AI systems for evidence reliability evaluation ai_system

AI systems designed to evaluate the reliability of evidence during investigation or prosecution of criminal offences.
  • law enforcement authorities: Evidence evaluation AI systems are intended for use by law enforcement authorities in criminal investigations.

AI systems for financial fraud detection ai_system

AI systems specifically designed and used for the purpose of detecting financial fraud in financial services.

AI systems for fraud detection ai_system

AI systems provided by Union law for detecting fraud in the offering of financial services and for prudential purposes.
  • high-risk AI systems: AI systems for fraud detection provided by Union law should not be considered high-risk under this Regulation.

AI systems for health and life insurance ai_system

AI systems intended for risk assessment and pricing in relation to natural persons for health and life insurance purposes.
  • high-risk AI systems: Health and life insurance risk assessment AI systems can be classified as high-risk due to significant impact on persons' livelihood.

AI systems for insurance risk assessment ai_system

AI systems intended for risk assessment and pricing in relation to natural persons in life and health insurance.

AI systems for insurance risk assessment and pricing ai_system

AI systems intended to evaluate risk and determine pricing for life and health insurance products related to natural persons.

AI systems for judicial assistance ai_system

AI systems intended to be used by judicial authorities to assist in researching and interpreting facts and law, or in alternative dispute resolution.
  • Regulation 2024/1689: Regulation 2024/1689 governs AI systems intended for judicial assistance and alternative dispute resolution.

AI systems for judicial decision-making ai_system

AI systems intended to support judges or judicial bodies in applying law to facts, which should not replace human decision-making in final determinations.
  • high-risk AI systems: AI systems used in judicial decision-making are classified as high-risk systems.

AI systems for law enforcement profiling ai_system

AI systems intended to be used by law enforcement authorities or Union institutions for profiling natural persons in detection, investigation or prosecution of criminal offences.

AI systems for law enforcement victim risk assessment ai_system

AI systems intended for use by law enforcement authorities to assess the risk of natural persons becoming victims of criminal offences.

AI systems for migration and asylum assessment ai_system

AI systems intended for use by competent public authorities in migration, asylum and border control management for risk assessment and application examination.

AI systems for migration and border control ai_system

AI systems used by competent public authorities or Union institutions for detecting, recognising or identifying natural persons in migration, asylum or border control management.
  • Regulation 2024/1689: Regulation 2024/1689 governs AI systems used for migration, asylum and border control management.

AI systems for offender risk assessment ai_system

AI systems intended to assess the risk of natural persons offending or re-offending, including personality trait assessment, for law enforcement purposes.
  • Directive (EU) 2016/680: Offender risk assessment AI systems are subject to constraints defined in Directive (EU) 2016/680 regarding profiling.
  • Article 3(4): The regulation references Article 3(4) of Directive (EU) 2016/680 when defining constraints on offender risk assessment systems.

AI systems for performance monitoring ai_system

AI systems used to monitor the performance and behaviour of persons, potentially undermining fundamental rights to data protection and privacy.

AI systems for polygraph and similar tools ai_system

AI systems intended to function as polygraphs or similar investigative tools for law enforcement authorities.

AI systems for polygraph or similar tools ai_system

AI systems intended to be used as polygraphs or similar tools by competent public authorities in migration, asylum and border control contexts.

AI systems for public assistance eligibility ai_system

AI systems used by public authorities to evaluate eligibility for essential public assistance benefits and services, including healthcare.
  • public authorities: Public assistance eligibility systems are used by public authorities.

AI systems for recruitment and selection ai_system

AI systems intended for recruitment or selection of natural persons, including targeted job advertisements, filtering applications, and evaluating candidates.

AI systems for student behavior monitoring ai_system

AI systems intended to monitor and detect prohibited behavior of students during tests in educational and vocational training institutions.

AI systems generating synthetic content ai_system

AI systems that produce synthetic audio, image, video or text content requiring machine-readable marking to indicate artificial generation.

AI systems identifying or inferring emotions ai_system

AI systems that identify or infer emotions or intentions of natural persons based on biometric data, characterized by limited reliability and lack of specificity.
  • right to privacy: Emotional detection systems can be intrusive to the right to privacy and fundamental rights of natural persons.

AI systems in education ai_system

AI systems deployed in educational or vocational training contexts for determining access, admission, evaluating learning outcomes, assessing educational levels, or monitoring student behavior.
  • high-risk AI systems: AI systems used in education or vocational training for determining access, admission, evaluating outcomes, or monitoring behavior are classified as high-risk.

AI systems in employment and worker management ai_system

AI systems used for recruitment, selection, work-related decisions, task allocation, and monitoring of employees and platform workers, classified as high-risk due to impact on career prospects and workers' rights.

AI systems in law enforcement ai_system

Artificial intelligence systems used by law enforcement authorities for decision-making in critical situations involving surveillance, arrest, or deprivation of liberty.

AI systems in migration, asylum and border control ai_system

Artificial intelligence systems used by Member States or Union institutions for migration, asylum and border control management.

AI systems in migration, asylum and border control management ai_system

AI systems used by competent public authorities and Union institutions for tasks in migration, asylum and border control management, including risk assessment and document verification.

AI systems intended to interact with natural persons ai_system

AI systems designed to interact with or generate content for natural persons, which may pose risks of impersonation or deception.
  • transparency obligations: Certain AI systems are subject to specific transparency obligations regarding notification of natural persons.
  • Article 50: Article 50 establishes transparency obligations for AI systems designed to interact directly with natural persons.

AI systems placing on market and putting into service legal_obligation

The obligation that AI systems can be placed on the market, put into service or used with specific objectives or effects related to behavior distortion.

AI systems presenting a risk ai_system

AI systems that present risks to the health, safety, or fundamental rights of persons, subject to market surveillance evaluation.

AI systems providing social scoring ai_system

AI systems that evaluate or classify natural persons or groups based on multiple data points related to social behaviour, personal or personality characteristics. Such systems may lead to discriminatory outcomes and exclusion of certain groups.
  • Non-discrimination: Social scoring AI systems may violate the right to non-discrimination and should be prohibited.
  • Social scoring prohibition: Legal obligation prohibits AI systems that provide unacceptable social scoring practices.
  • Biometric data: Social scoring systems may process biometric and personal data related to individuals' characteristics.

AI technologies expertise technical_requirement

Required competence for national competent authority personnel including in-depth understanding of AI technologies, data, and data computing.
  • national competent authorities: Personnel of national competent authorities must have in-depth understanding of AI technologies, data, and data computing.

AI-enabled manipulative techniques ai_system

AI systems designed to persuade persons into unwanted behaviors or deceive them through nudging and subversion of autonomy and free choice.
  • Regulation: The Regulation prohibits AI-enabled manipulative techniques that contradict Union values and fundamental rights.
  • Prohibition of manipulative AI systems: Legal obligation prohibits the placing on market and use of AI systems with manipulative objectives or effects.
  • Subliminal components: Manipulative AI systems deploy subliminal components as part of their operation.
  • Machine-brain interfaces: Machine-brain interfaces facilitate AI-enabled manipulation by allowing control of stimuli presented to persons.
  • Virtual reality: Virtual reality facilitates AI-enabled manipulation through control of presented stimuli.

AI-generated or manipulated content detection and disclosure legal_obligation

Mandatory transparency obligations placed on providers and deployers of certain AI systems to enable detection and disclosure that outputs are artificially generated or manipulated.
  • Regulation (EU) 2022/2065: The obligations to enable detection and disclosure of artificially generated content are particularly relevant to facilitate effective implementation of the Digital Services Act.
  • very large online platforms: Providers of very large online platforms are subject to obligations to identify and mitigate systemic risks from artificially generated or manipulated content.
  • very large online search engines: Providers of very large online search engines are subject to obligations to identify and mitigate systemic risks from artificially generated or manipulated content.

AI-on-demand platform institution

A Union-level platform established to promote AI innovation and provide technical and scientific support to providers and notified bodies for compliance with AI regulation obligations.
  • this Regulation: The AI-on-demand platform contributes to achieving the objectives of the regulation.
  • Regulation: The AI-on-demand platform contributes to the implementation of the Regulation.

ancillary feature technical_requirement

A feature intrinsically linked to another commercial service that cannot be used independently for objective technical reasons and is not a means to circumvent regulation applicability.
  • Digital Services Act: The regulation defines the concept of ancillary features and their exemption from applicability rules.

Annex I documentation

Annex listing Union harmonisation legislation applicable to products and AI systems used as safety components in high-risk AI systems.
  • Article 6: Article 6 references Annex I which lists the Union harmonisation legislation applicable to high-risk AI systems.
  • Union harmonisation legislation: Union harmonisation legislation is listed in Section A of Annex I.
  • Article 60: Article 60 references Union harmonisation legislation listed in Annex I.
  • Regulation 2024/1689: The regulation references Union harmonisation legislation listed in Annex I.
  • Commission: The Commission shall provide detailed information on the relationship with Union harmonisation legislation listed in Annex I.

Annex I Section A legal_article

A section listing Union harmonisation legislation covering products with safety components.

Annex I Section B documentation

A section listing sectoral Union harmonisation legislation that must be considered in regulatory amendments.

Annex II documentation

Reference document listing specific criminal offences including terrorism, trafficking, sexual exploitation, and drug trafficking for which biometric identification may be used in law enforcement.

Annex III documentation

Annex listing categories of high-risk AI systems subject to conformity assessment, registration, and specific requirements, which may be amended by the Commission based on technological developments.
  • Article 6: Article 6 references Annex III which lists additional AI systems considered high-risk, subject to derogation provisions.
  • high-risk AI system: High-risk AI systems are those referred to and categorized in Annex III that pose significant risk of harm.
  • Article 97: Article 97 provides the procedure for amending conditions related to AI systems falling under Annex III.
  • AI systems: AI systems are classified and fall under the scope of Annex III based on their risk characteristics.
  • delegated acts: Delegated acts adopted by the Commission amend Annex III by adding or modifying high-risk AI system use-cases.
  • high-risk AI systems: High-risk AI systems are classified and listed in Annex III of the Regulation.
  • Article 97: Article 97 empowers the Commission to amend the list in Annex III by removing high-risk AI systems.
  • Article 12: Article 12 references Annex III for classification of high-risk AI systems requiring specific logging capabilities.
  • high-risk AI systems: High-risk AI systems are classified and listed in Annex III of the Regulation.
  • high-risk AI systems: High-risk AI systems are classified and listed in Annex III of the Regulation.
  • Article 43: Article 43 references Annex III which lists the high-risk AI systems subject to conformity assessment.
  • AI system: AI systems covered by Annex III are subject to a four-year certificate validity period.
  • Article 49: Article 49 references Annex III which lists the high-risk AI systems subject to registration.
  • high-risk AI system: High-risk AI systems are those referred to and categorized in Annex III that pose significant risk of harm.
  • Article 7: Article 7 governs the amendment procedure for Annex III.
  • Article 80: Article 80 references Annex III for classification criteria of non-high-risk AI systems.
  • high-risk AI system: High-risk AI systems are those referred to and categorized in Annex III that pose significant risk of harm.
  • Commission: Commission shall assess the need for amendment of Annex III annually.
  • risk level evaluation methodology: The methodology guides evaluation for inclusion of systems in Annex III.

Annex III, point 5 documentation

A reference to high-risk AI systems placed on the market or put into service by financial institutions.

Annex IV documentation

Annex IV specifies minimum elements required in technical documentation for high-risk AI systems, including post-market monitoring plans and simplified forms for SMEs.

Annex IV legal_article

Regulatory annex specifying requirements for technical documentation of AI systems.

Annex IX documentation

Specifies information requirements for registration of high-risk AI systems undergoing testing in real-world conditions.
  • EU database: The EU database registration requirements reference information points specified in Annex IX.
  • real-world testing plan: Real-world testing plans must include information specified in Annex IX.

Annex V documentation

Contains information requirements for EU declarations of conformity and CE marking content that can be updated by the Commission through delegated acts.
  • EU declaration of conformity: The EU declaration of conformity must contain information set out in Annex V.
  • Commission: The Commission is empowered to adopt delegated acts to amend Annex V based on technical progress.

Annex VI documentation

Describes the conformity assessment procedure based on internal control for high-risk AI systems without notified body involvement.
  • Article 43: Article 43 references Annex VI describing the internal control conformity assessment procedure.
  • internal control: The internal control conformity assessment procedure is documented in Annex VI.
  • The Commission: The Commission is empowered to amend Annex VI through delegated acts.
  • conformity assessment procedure: Annex VI describes the conformity assessment procedure based on internal control.

Annex VII documentation

Specifies the conformity assessment procedure for high-risk AI systems based on quality management system and technical documentation assessment involving notified bodies.
  • Article 43: Article 43 references Annex VII describing the quality management system assessment procedure.
  • notified bodies: Annex VII procedure requires involvement of notified bodies in quality management system and technical documentation assessment.
  • conformity assessment procedure: The conformity assessment procedure is documented and specified in Annex VII.
  • The Commission: The Commission is empowered to amend Annex VII through delegated acts.
  • notified bodies: Certificates issued by notified bodies are issued in accordance with Annex VII.
  • Union technical documentation assessment certificate: Union technical documentation assessment certificates are issued in accordance with Annex VII requirements.
  • quality management system approval: Quality management system approvals are issued in accordance with Annex VII requirements.

Annex VIII documentation

Annex containing detailed data specifications organized in sections A, B, and C that must be entered into the EU database for registration of high-risk AI systems.
  • EU database: The EU database registration requirements reference information points specified in Annex VIII.
  • EU database for high-risk AI systems: Annex VIII specifies the data to be entered into the EU database.
  • EU database: Annex VIII specifies the data that must be entered into the EU database, organized in sections A, B, and C.

Annex X documentation

Annex listing Union legislative acts establishing large-scale IT systems in the area of Freedom, Security and Justice subject to the Regulation.

Annex XI documentation

Specifies minimum technical documentation requirements for general-purpose AI models that can be amended by the Commission.
  • technical documentation: Technical documentation must contain minimum information set out in Annex XI.
  • Commission: Commission is empowered to amend Annex XI in light of technological developments.

Annex XI legal_article

Annex containing measurement and calculation methodologies for comparable and verifiable documentation of AI model compliance.
  • Commission: The Commission is empowered to adopt delegated acts to detail measurement and calculation methodologies for Annex XI.

Annex XII documentation

Contains minimum transparency information and documentation requirements for providers of AI systems integrating general-purpose AI models.

Annex XII legal_article

Annex subject to amendment by the Commission through delegated acts in light of technological developments.
  • Commission: The Commission is empowered to amend Annex XII in light of evolving technological developments.

Annex XIII documentation

Contains criteria for evaluating capabilities equivalent to high impact capabilities for systemic risk classification of general-purpose AI models.
  • Article 51: Article 51 references Annex XIII for criteria regarding equivalent capabilities and impact.
  • systemic risk: Annex XIII sets out the criteria for evaluating systemic risks in general-purpose AI models.
  • Commission: The Commission is empowered to amend Annex XIII by specifying and updating systemic risk criteria.

annual reports documentation

Reports submitted by national authorities and the Commission documenting use of remote biometric identification systems.
  • Commission: The Commission publishes annual reports on the use of real-time remote biometric identification systems based on aggregated Member State data.

annual reports on biometric identification use documentation

Annual reports submitted by deployers to market surveillance and data protection authorities on use of post-remote biometric identification systems.
  • Regulation (EU) 2024/1689: The regulation requires deployers to submit annual reports to market surveillance and data protection authorities on their use of post-remote biometric identification systems.

anonymisation and pseudonymisation tools ai_system

AI tools used for ancillary administrative activities in judicial contexts that do not affect actual administration of justice.

anonymised data data_category

Data that has been processed to remove personal identifiers, making individuals unidentifiable.
  • Chapter III, Section 2: Anonymised data is referenced as an alternative to personal data for fulfilling Chapter III, Section 2 requirements.

appeal procedure legislative_procedure

Formal procedure available to challenge decisions of notified bodies regarding conformity certificates and approvals.
  • notified body: An appeal procedure is available against decisions of notified bodies regarding conformity certificates.

Article 1 legal_article

Article defining the subject matter and purpose of the AI regulation to improve internal market functioning and promote trustworthy AI.

Article 10 legal_article

An article of Directive (EU) 2016/680 establishing requirements for real-time remote biometric identification systems in law enforcement contexts.
  • Directive (EU) 2016/680: Article 10 is contained within Directive (EU) 2016/680.
  • High-risk AI systems: Article 10 establishes data and data governance requirements for high-risk AI systems using training data.

Article 10 of Directive (EU) 2016/680 legal_article

A provision in Directive (EU) 2016/680 that establishes rules for processing biometric data by law enforcement authorities, allowing such processing only where strictly necessary with appropriate safeguards.
  • this Regulation: The Regulation specifically references and regulates biometric data processing rules contained in Article 10 of Directive (EU) 2016/680.
  • biometric data: Article 10 contains rules on the processing of biometric data.
  • biometric identification: Biometric identification systems must comply with Article 10 requirements for biometric data processing in law enforcement.
  • post-remote biometric identification system: The use of post-remote biometric identification systems is subject to Article 10 of the law enforcement directive regarding biometric data.

Article 10(1) legal_article

An article of Regulation (EU) 2018/1725 that prohibits the processing of biometric data subject to limited exceptions.

Article 10(2), point (g) of Regulation (EU) 2018/1725 legal_article

Legal provision allowing processing of special categories of personal data for substantial public interest purposes.

Article 10(4) legal_article

An article containing requirements that high-risk AI systems must comply with when trained and tested on specific geographical, behavioural, contextual or functional data.
  • high-risk AI systems: High-risk AI systems trained and tested on specific data are presumed to comply with Article 10(4).

Article 10(5) legal_article

A specific article of the AI regulation with exceptions to the non-applicability of data protection regulations.
  • Regulation /2024/1689/oj: The regulation contains Article 10(5) which provides exceptions to the non-applicability of data protection regulations.

Article 100 legal_article

Provision establishing administrative fines on Union institutions, bodies, offices and agencies under the scope of the Regulation.

Article 101 legal_article

Legal article establishing fines for providers of general-purpose AI models for supplying incorrect or misleading information and failure to provide access to AI models.
  • provider of the general-purpose AI model: Article 101 specifies fines for supplying incorrect, incomplete or misleading information.
  • Article 92: Article 92 references Article 101 regarding fines for non-compliance.
  • general-purpose AI model: Article 101 specifies fines for failure to provide access to general-purpose AI models.
  • Regulation: Article 101 is contained within the Regulation governing AI models.
  • Commission: The Commission enforces the provisions established in Article 101.
  • administrative fines: Article 101 establishes the legal obligation for the Commission to impose administrative fines.
  • Article 91: Article 101 references Article 91 regarding requests for documents or information.
  • Article 92: Article 101 references Article 92 regarding Commission evaluation of AI models.
  • Article 93: Article 101 references Article 93 regarding measures and commitments from AI model providers.
  • Article 56: Article 101 references Article 56 regarding codes of practice.

Article 103 legal_article

A legal article that amends Regulation (EU) No 167/2013 by adding requirements for artificial intelligence systems.
  • Regulation (EU) No 167/2013: Article 103 amends Regulation (EU) No 167/2013 by adding provisions to Article 17(5).
  • Article 17(5): Article 103 modifies Article 17(5) of Regulation (EU) No 167/2013.

Article 104 legal_article

Article establishing amendments to Regulation (EU) No 168/2013 regarding AI systems as safety components.

Article 105 legal_article

Article 105 specifies amendments to Directive 2014/90/EU regarding AI systems as safety components.
  • Directive 2014/90/EU: Article 105 specifies the amendment to Directive 2014/90/EU regarding AI systems as safety components.
  • Regulation (EU) 2024/1689: Regulation (EU) 2024/1689 contains Article 105 which specifies amendments to Directive 2014/90/EU.

Article 106 legal_article

Article establishing amendments to Directive (EU) 2016/797 regarding AI systems as safety components.
  • Directive (EU) 2016/797: Article 106 amends Directive (EU) 2016/797 by adding a new paragraph to Article 5.
  • Regulation (EU) 2024/1689: Article 106 references Regulation (EU) 2024/1689 when establishing requirements for AI systems as safety components.

Article 107 legal_article

Article establishing amendments to Regulation (EU) 2018/858 regarding artificial intelligence systems as safety components.

Article 108 legal_article

Article establishing amendments to Regulation (EU) 2018/1139.

Article 109 legal_article

An article that amends Regulation (EU) 2019/2144 by adding requirements for artificial intelligence systems.

Article 11 legal_article

Article 11 establishes requirements for technical documentation of high-risk AI systems, with simplified provisions for SMEs.
  • technical documentation: Article 11 establishes the requirement for technical documentation of high-risk AI systems.
  • importer: Importers must verify that technical documentation has been drawn up in accordance with Article 11.
  • Annex IV: Article 11 references Annex IV for technical documentation requirements.
  • provider: The provider is required to draw up technical documentation in accordance with Article 11.
  • Chapter III, Section 2: Article 11 of Regulation (EU) 2019/2144 references the requirements set out in Chapter III, Section 2 of Regulation (EU) 2024/1689.

Article 11 of Regulation (EU) 2019/1020 regulation

Regulation governing market surveillance authority powers and procedures for monitoring compliance.
  • market surveillance authority: Market surveillance authorities exercise monitoring powers in accordance with Article 11 of Regulation (EU) 2019/1020.

Article 11 of Regulation (EU) No 1025/2012 legal_article

Establishes the procedure the Commission shall apply for addressing shortcomings in harmonised standards or common specifications.
  • Article 82: Article 82 references the procedure provided in Article 11 of Regulation 1025/2012.

Article 11(1) legal_article

A legal article that establishes requirements for technical documentation of AI systems.
  • Technical documentation: Article 11(1) requires the provision of technical documentation for AI systems.
  • Regulation 2024/1689: Regulation 2024/1689 contains Article 11(1) which establishes documentation requirements.
  • Technical documentation: Technical documentation is referred to and required by Article 11(1).
  • AI system: Article 11(1) establishes requirements that apply to AI systems.

Article 11(3) legal_article

Legal article authorizing delegated acts subject to objection procedure.

Article 110 legal_article

Article establishing amendments to Directive (EU) 2020/1828 by adding Regulation (EU) 2024/1689 to Annex I.
  • Directive (EU) 2020/1828: Article 110 establishes amendments to Directive (EU) 2020/1828 by adding Regulation (EU) 2024/1689 to Annex I.

Article 111 legal_article

Article addressing AI systems already placed on the market or put into service and general-purpose AI models with compliance deadlines.
  • AI systems: Article 111 requires AI systems to be brought into compliance with the Regulation by 31 December 2030.
  • Article 5: Article 111 references Article 5 regarding application without prejudice.

Article 112 legal_article

Article establishing evaluation and review procedures for the Regulation, including procedures for revision of requirements based on available evidence and technology developments.
  • this Regulation: The Regulation contains Article 112 on evaluation and review procedures.
  • Article 5: Article 112 governs the revision procedure for Article 5.
  • Regulation: Article 112 is contained within the Regulation.
  • Article 50: Article 112 references AI systems requiring transparency measures in Article 50.

Article 113 legal_article

Article specifying the entry into force and application dates for the Regulation, with phased implementation across different chapters and articles.
  • Regulation: The Regulation contains Article 113 specifying entry into force and application.
  • Regulation 2024/1689: Regulation 2024/1689 contains Article 113 which establishes its entry into force and application dates.

Article 113(3) legal_article

Legal article containing provisions regarding the application of Article 5.
  • Article 5: Article 113(3) contains a reference to Article 5 regarding application provisions.

Article 114 TFEU legal_article

A provision of the TFEU that serves as the legal basis for establishing uniform rules on AI systems and protection of public interest within the internal market.
  • Regulation 2024/1689: The regulation is based on Article 114 TFEU for establishing uniform obligations on AI systems within the internal market.

Article 12 legal_article

Article 12 establishes record-keeping requirements for high-risk AI systems, including automatic logging of events throughout the system's lifetime.
  • automatic recording of events: Article 12 requires high-risk AI systems to technically allow for automatic recording of events over their lifetime.
  • high-risk AI system: High-risk AI systems must comply with record-keeping and logging obligations specified in Article 12.
  • High-risk AI systems: Article 12 establishes record-keeping requirements that apply to high-risk AI systems.
  • Logging capabilities: Article 12 specifies that logging capabilities must enable recording of events relevant to system traceability and risk identification.
  • Annex III: Article 12 references Annex III for classification of high-risk AI systems requiring specific logging capabilities.
  • Article 14: Article 12 references Article 14 regarding identification of natural persons involved in verification of results.
  • Article 79: Article 12 references Article 79 for the definition of risk in high-risk AI systems.

Article 12(1) legal_article

Establishes requirements for logs automatically generated by high-risk AI systems that must be maintained by providers.
  • automatically generated logs: Article 12(1) establishes the requirement for automatically generated logs by high-risk AI systems.
  • automatically generated logs: Article 12(1) references automatically generated logs of high-risk AI systems.
  • logs: Logs are automatically generated by high-risk AI systems as specified in Article 12(1).

Article 13 legal_article

A legal article establishing requirements for high-risk AI systems regarding transparency, information provision to deployers, and instructions for use, including provisions for law enforcement AI systems under Directive (EU) 2016/680.
  • high-risk AI systems: Article 13 of Directive (EU) 2016/680 governs the implementation of obligations for high-risk AI systems used for law enforcement.
  • risk management measures: Risk management measures require provision of information as specified in Article 13.
  • High-risk AI systems: Article 13 establishes transparency and information provision requirements for high-risk AI systems.
  • Transparency: Article 13 requires that high-risk AI systems ensure sufficient transparency in their operation.
  • deployer: Deployers must use information provided under Article 13 for compliance purposes.
  • Regulation (EU) 2016/679: Article 13 information supports compliance with data protection impact assessment requirements.
  • Directive (EU) 2016/680: Article 13 is contained within Directive (EU) 2016/680.
  • fundamental rights impact assessment: The impact assessment process references information provided by the provider according to Article 13.
  • real-world conditions testing: Instructions for use referenced in Article 13 must be provided to deployers.
  • Testing in real world conditions: Testing in real world conditions must comply with instructions specified in Article 13.

Article 13 of Directive (EU) 2016/680 legal_article

Article establishing obligations regarding explanation rights for persons in law enforcement contexts.
  • high-risk AI systems: High-risk AI systems used for law enforcement must comply with Article 13 regarding explanation rights.

Article 13(3), point (d) legal_article

A legal article that specifies requirements for technical measures to facilitate interpretation of AI system outputs by deployers.
  • Human oversight measures: Article 13(3), point (d) requires technical measures to facilitate interpretation of AI system outputs.

Article 14 legal_article

Article 14 establishes requirements for human oversight of high-risk AI systems to ensure effective monitoring during their use.
  • Article 12: Article 12 references Article 14 regarding identification of natural persons involved in verification of results.
  • high-risk AI system: High-risk AI systems are subject to human oversight requirements established in Article 14.
  • human oversight measures: Article 14 establishes requirements for human oversight measures of high-risk AI systems.
  • Human oversight measures: Article 14 requires assessment and implementation of human oversight measures for AI systems.
  • human oversight measures: Human oversight measures are required in accordance with Article 14.

Article 14 of Regulation (EU) 2019/1020 legal_article

Legal provision granting market surveillance authorities specific enforcement powers that may be exercised remotely.
  • market surveillance authority: Market surveillance authorities exercise powers under Article 14 of Regulation (EU) 2019/1020, including remote enforcement capabilities.

Article 14(4) legal_article

Legal article containing powers that market surveillance authorities may exercise remotely for enforcement purposes.

Article 15 legal_article

Article 15 establishes requirements for accuracy, robustness, and cybersecurity testing and validation of high-risk AI systems throughout their lifecycle.
  • Instructions for use: Instructions for use must reference accuracy, robustness and cybersecurity metrics as tested according to Article 15.
  • high-risk AI systems: Article 15 establishes accuracy, robustness, and cybersecurity requirements that apply to high-risk AI systems.
  • high-risk AI systems: High-risk AI systems certified under cybersecurity schemes are presumed to comply with cybersecurity requirements in Article 15.

Article 16 legal_article

Establishes obligations for providers of high-risk AI systems including compliance requirements, documentation, quality management, and conformity assessment procedures.
  • high-risk AI systems: Article 16 establishes obligations that apply to providers of high-risk AI systems.
  • Section 2: Article 16 requires compliance with requirements set out in Section 2.
  • Article 17: Article 16 requires quality management systems to comply with Article 17.
  • Article 18: Article 16 requires providers to keep documentation referred to in Article 18.
  • Article 19: Article 16 requires providers to keep logs as referred to in Article 19.
  • Article 43: Article 16 requires high-risk AI systems to undergo conformity assessment procedure in Article 43.
  • Article 47: Article 16 requires providers to draw up EU declaration of conformity in accordance with Article 47.
  • provider: Providers of high-risk AI systems are subject to obligations defined in Article 16.
  • Provider obligations: Provider obligations are established in Article 16.

Article 16 TFEU legal_article

A provision of the Treaty on the Functioning of the European Union that serves as the legal basis for specific rules on personal data protection and real-time biometric identification in the AI Regulation.
  • Regulation 2024/1689: Regulation 2024/1689 is adopted on the basis of Article 16 TFEU for personal data protection rules concerning AI systems in law enforcement.
  • European Data Protection Board: The European Data Protection Board should be consulted in accordance with Article 16 TFEU.
  • Regulation: The Regulation's rules on real-time biometric identification are based on Article 16 TFEU.

Article 16(1) legal_article

A provision in Regulation (EU) 2022/2065 establishing the mechanism for receiving notices on illegal content.

Article 16(2) legal_article

A provision in Regulation (EU) 2017/745 establishing that certain changes should not be considered modifications affecting device compliance.
  • Regulation (EU) 2017/745: Regulation (EU) 2017/745 contains Article 16(2) regarding modifications of medical devices.
  • high-risk AI system: Article 16(2) continues to apply to high-risk AI systems that are medical devices.

Article 16(6) legal_article

A provision in Regulation (EU) 2022/2065 requiring hosting service providers to process notices on illegal content.

Article 17 legal_article

Establishes quality management system requirements that providers of high-risk AI systems must implement in the design, development, and testing of AI systems.

Article 17(5) legal_article

An article in Regulation (EU) No 167/2013 that is amended to include provisions concerning artificial intelligence systems as safety components.
  • Article 103: Article 103 modifies Article 17(5) of Regulation (EU) No 167/2013.
  • Chapter III, Section 2: Article 17(5) requires that the requirements set out in Chapter III, Section 2 of Regulation (EU) 2024/1689 be taken into account.

Article 18 legal_article

Article requiring providers to maintain documentation for high-risk AI systems for 10 years after placement on the market or putting into service.
  • Regulation (EU) 2019/1020: Regulation (EU) 2019/1020 contains Article 18 which provides procedural rights.
  • general-purpose AI models: Procedural rights provided for in Article 18 apply mutatis mutandis to providers of general-purpose AI models.
  • Article 16: Article 16 requires providers to keep documentation referred to in Article 18.
  • technical documentation: Article 18 requires providers to keep technical documentation for high-risk AI systems.
  • EU declaration of conformity: Article 18 requires providers to keep the EU declaration of conformity on file.
  • Article 94: Article 94 applies Article 18 mutatis mutandis to providers of general-purpose AI models.

Article 18 of Regulation (EU) 2019/1020 legal_article

A legal article establishing procedural rights for operators concerned with market surveillance measures.
  • AI system: Article 18 establishes procedural rights for operators concerned with AI system market surveillance measures.

Article 19 legal_article

Addresses the maintenance and retention of automatically generated logs by providers of high-risk AI systems.
  • Article 16: Article 16 requires providers to keep logs as referred to in Article 19.
  • high-risk AI systems: Article 19 establishes requirements for maintaining automatically generated logs from high-risk AI systems.
  • automatically generated logs: Article 19 requires providers to keep automatically generated logs for appropriate periods of at least six months.
  • Regulation (EU) 2018/1139: Regulation (EU) 2018/1139 contains Article 19 which is being amended.
  • Artificial Intelligence systems: Article 19 applies to Artificial Intelligence systems that are safety components.
  • Chapter III, Section 2: Article 19 references the requirements set out in Chapter III, Section 2 of Regulation (EU) 2024/1689.

Article 19 of Regulation (EU) 2019/1020 legal_article

Provides measures that market surveillance authorities must take within seven days of receiving serious incident notifications.

Article 2 legal_article

Article defining the scope of the AI Regulation and specifying which entities and AI systems are subject to its provisions.
  • AI Regulation 2024/1689: The regulation contains Article 2 defining its scope.
  • Regulation: The Regulation contains Article 2 which defines its scope.
  • providers: Article 2 applies to providers placing AI systems on the market.
  • deployers: Article 2 applies to deployers of AI systems within the Union.
  • importers and distributors: Article 2 applies to importers and distributors of AI systems.
  • product manufacturers: Article 2 applies to product manufacturers placing AI systems on the market.
  • general-purpose AI models: Article 2 applies to general-purpose AI models placed on the market.

Article 2 TEU legal_article

A provision of the Treaty on European Union that enshrines Union values which should guide the development of AI and its regulatory framework.
  • AI: AI and its regulatory framework should be developed in accordance with Union values enshrined in Article 2 TEU.

Article 2(1) legal_article

Identifies operators and defines the scope of unmanned aircraft and their components covered by the regulation.
  • Regulation 2024/1689: Regulation 2024/1689 contains Article 2(1) which defines the scope of unmanned aircraft covered.

Article 20 legal_article

Establishes requirements for corrective actions and duty of information for providers of high-risk AI systems that are not in conformity with the Regulation.
  • high-risk AI system: Article 20 requires corrective actions and information provision for high-risk AI systems.
  • Regulation: Article 20 is contained within the Regulation establishing corrective action requirements.
  • Providers of high-risk AI systems: Providers must comply with corrective action and information duties under Article 20.
  • Article 79(1): Article 20 references Article 79(1) to define risk criteria triggering investigation and reporting.
  • Article 44: Article 20 references Article 44 regarding notified bodies issuing certificates.

Article 20 of Regulation (EU) 2019/1020 legal_article

Establishes procedures for notification of serious incidents to the Commission.
  • National competent authorities: Article 20 establishes the procedure for notification in accordance with which authorities must notify the Commission.

Article 21 legal_article

Regulation article requiring providers of high-risk AI systems to cooperate with competent authorities and provide necessary information and documentation.
  • Regulation: Article 21 is contained within the Regulation establishing cooperation requirements.
  • Providers of high-risk AI systems: Providers must cooperate with competent authorities as required by Article 21.
  • Section 2: Article 21 references Section 2 containing conformity requirements.

Article 21(6) legal_article

Article in Regulation (EU) 2019/1020 that lists tasks for testing support structures in the area of AI.

Article 22 legal_article

Establishes obligations for authorised representatives of providers of high-risk AI systems.

Article 22(1) legal_article

Legal article establishing requirements for appointing an authorised representative for high-risk AI systems.
  • importer: Importers must verify that the provider has appointed an authorised representative in accordance with Article 22(1).

Article 22(2), point (b) legal_article

Article from Regulation (EU) 2016/679 concerning legal basis for automated decision-making.

Article 22(5) legal_article

Article in Regulation (EU) No 168/2013 that is amended to include requirements for AI systems as safety components.
  • Chapter III, Section 2: Article 22(5) requires that requirements from Chapter III, Section 2 of Regulation (EU) 2024/1689 be taken into account.

Article 23 legal_article

Legal article establishing obligations for importers of high-risk AI systems.
  • importer: Article 23 establishes obligations that apply to importers of high-risk AI systems.
  • high-risk AI system: Article 23 governs the placement and conformity requirements for high-risk AI systems.
  • importer: Importers must comply with obligations laid down in Article 23, point (3).
  • Importer obligations: Importer obligations are established in Article 23.

Article 24 legal_article

Legal article establishing obligations of distributors regarding high-risk AI systems placed on the market.
  • distributor: Article 24 establishes obligations that apply to distributors of high-risk AI systems.
  • high-risk AI system: Article 24 governs the handling and distribution of high-risk AI systems.
  • Distributor obligations: Distributor obligations are established in Article 24.

Article 24 of the Charter legal_article

Article establishing specific rights for children as enshrined in the Charter.
  • Charter: Article 24 is contained within the Charter and establishes specific rights for children.

Article 24(2), point (b) legal_article

Article from Regulation (EU) 2018/1725 concerning legal basis for processing by Union institutions.

Article 25 legal_article

Article establishing responsibilities along the AI value chain for distributors, importers, deployers, and other third parties regarding high-risk AI systems.
  • distributor: Article 25 establishes responsibilities for distributors along the AI value chain.
  • Regulation 2024/1689: Article 25 is contained within Regulation 2024/1689.
  • high-risk AI system: Article 25 establishes responsibilities and obligations for high-risk AI systems along the value chain.
  • Regulation 2024/1689: Article 25 references Article 6 and Article 16 of the same regulation.
  • Article 96: Article 96 references Article 25 regarding requirements and obligations.

Article 26 legal_article

Article 26 establishes obligations for deployers of high-risk AI systems regarding technical and organizational measures and human oversight.
  • Regulation: Article 26 is contained within the Regulation and establishes specific obligations for deployers.
  • high-risk AI systems: High-risk AI systems are subject to the obligations established in Article 26.
  • Deployer obligations: Deployer obligations are established in Article 26.
  • Regulation (EU) 2024/1689: Regulation (EU) 2024/1689 contains Article 26 which specifies applicability conditions.
  • Regulation (EU) 2016/679: Article 26 references Regulation (EU) 2016/679 as applicable requirement.
  • Directive (EU) 2016/680: Article 26 references Directive (EU) 2016/680 as applicable requirement.

Article 26(10) legal_article

A provision of Regulation 2024/1689 concerning specific obligations related to the processing of personal data by Member States.

Article 26(8) legal_article

A legal article that specifies conditions for data protection impact assessments in relation to high-risk AI systems.

Article 261 TFEU legal_article

A provision of the Treaty on the Functioning of the European Union granting the Court of Justice unlimited jurisdiction with regard to penalties.

Article 27 legal_article

Article requiring completion of a fundamental rights impact assessment before deployment of real-time remote biometric identification systems.

Article 27 of Directive (EU) 2016/680 legal_article

A legal article requiring data protection impact assessments under the Law Enforcement Directive.

Article 28 legal_article

Article establishing requirements for notifying authorities and their designation by Member States for assessment and monitoring activities.
  • notifying authority: Article 28 establishes requirements and procedures for notifying authorities.
  • Article 34: Article 34 references the notifying authority defined in Article 28.

Article 29 legal_article

Legal article governing the application process for conformity assessment bodies seeking notification and procedures for extensions of notification scope.
  • Article 36: Article 36 references procedures laid down in Article 29 for extensions of notification scope.

Article 29(2) legal_article

Referenced article concerning accreditation certificates for notified bodies.

Article 29(3) legal_article

Referenced article concerning documentary evidence for notified bodies.

Article 290 TFEU legal_article

Treaty article that provides the legal basis for delegating power to the Commission to adopt acts.

Article 3 legal_article

A legal article that contains definitions for key terms used throughout the regulation.
  • AI system: Article 3 provides the definition of AI system for purposes of the regulation.
  • risk: Article 3 defines risk as the combination of probability and severity of harm.
  • provider: Article 3 defines the role and responsibilities of a provider.
  • deployer: Article 3 defines the role and responsibilities of a deployer.
  • authorised representative: Article 3 defines the role and responsibilities of an authorised representative.
  • importer: Article 3 defines the role and responsibilities of an importer.
  • This Regulation: The regulation contains Article 3 which provides definitions.

Article 3(4) legal_article

Legal article within Directive (EU) 2016/680 that defines profiling of natural persons.

Article 3(4) of Directive (EU) 2016/680 legal_article

Specific article within Directive (EU) 2016/680 that defines profiling of natural persons in the context of law enforcement.

Article 3, point (1) legal_article

Article containing the definition of an AI system.
  • Article 96: Article 96 references Article 3, point (1) regarding the definition of an AI system.

Article 3, point (4) legal_article

Article in Directive (EU) 2016/680 that defines profiling for law enforcement data processing.

Article 3, point (49)(b) legal_article

Legal article that defines serious incident in the context of widespread infringement.
  • serious incident: Serious incident definition is referenced in Article 3, point (49)(b).

Article 3, point (49)(c) legal_article

Defines serious incidents related to AI systems that require notification to market surveillance authorities.
  • market surveillance authority: Market surveillance authorities must follow the definition of serious incidents in Article 3, point (49)(c).
  • serious incidents: Article 3, point (49)(c) defines what constitutes serious incidents.

Article 3, point (5) legal_article

Article in Regulation (EU) 2018/1725 that defines profiling for EU institutions.

Article 30 legal_article

Legal article establishing the notification procedure for conformity assessment bodies and procedures applicable to extensions of notification scope.
  • Notification procedure: Article 30 defines the notification procedure for conformity assessment bodies.
  • Article 36: Article 36 references procedures laid down in Article 30 for extensions of notification scope.
  • CE marking: CE marking is subject to the general principles set out in Article 30 of Regulation (EC) No 765/2008.
  • Regulation (EC) No 765/2008: Article 30 is contained in Regulation (EC) No 765/2008.

Article 30 of Regulation (EU) 2019/1020 legal_article

A provision defining the administrative cooperation group (ADCO) framework referenced in relation to market surveillance.

Article 30(2) legal_article

Provision establishing the electronic notification tool for notified bodies to report relevant changes.

Article 31 legal_article

Legal article establishing organizational, quality management, resources, process, and cybersecurity requirements that notified bodies must fulfill.
  • Conformity assessment bodies: Conformity assessment bodies must fulfill the requirements laid down in Article 31.
  • conformity assessment body: Article 31 establishes requirements that conformity assessment bodies must satisfy.
  • conformity assessment body: Conformity assessment bodies must comply with requirements laid down in Article 31.
  • notified body: Article 31 establishes the requirements that notified bodies must satisfy.
  • Article 32: Article 32 references requirements set out in Article 31 regarding conformity assessment body compliance.
  • Article 33: Article 33 requires subcontractors and subsidiaries to meet requirements laid down in Article 31.
  • notified body: Notified bodies must fulfill the requirements laid down in Article 31.
  • notifying authority: Notifying authorities are subject to responsibilities laid down in Article 31.
  • Article 39: Article 39 references Article 31 for requirements that conformity assessment bodies must meet.
  • conformity assessment procedure: Conformity assessment procedures are based on requirements laid down in Article 31.
  • Notified body requirements: Notified body requirements are established in Article 31.

Article 31(10) legal_article

Legal article containing requirements for notified bodies in conformity assessment procedures.
  • notified body: Notified bodies must comply with requirements laid down in Article 31(10).

Article 31(11) legal_article

Legal article containing requirements for notified bodies in conformity assessment procedures.
  • notified body: Notified bodies must comply with requirements laid down in Article 31(11).

Article 31(4) legal_article

Legal article containing requirements for notified bodies in conformity assessment procedures.
  • notified body: Notified bodies must comply with requirements laid down in Article 31(4).

Article 31(5) legal_article

Legal article containing requirements for notified bodies in conformity assessment procedures.
  • notified body: Notified bodies must comply with requirements laid down in Article 31(5).

Article 32 legal_article

Establishes presumption of conformity when conformity assessment bodies demonstrate compliance with harmonised standards published in the Official Journal of the European Union.
  • Article 31: Article 32 references requirements set out in Article 31 regarding conformity assessment body compliance.

Article 33 legal_article

Legal article governing subsidiaries of notified bodies and subcontracting arrangements, including requirements for subcontractors and documentation retention.
  • Article 31: Article 33 requires subcontractors and subsidiaries to meet requirements laid down in Article 31.
  • notified bodies: Article 33 governs the subcontracting and subsidiary arrangements of notified bodies.
  • Notified body requirements: Notified body requirements are established in Article 33.

Article 34 legal_article

Establishes operational obligations for notified bodies regarding verification of high-risk AI systems conformity and documentation requirements.
  • Article 43: Article 34 references conformity assessment procedures set out in Article 43.
  • notified bodies: Notified bodies are subject to operational obligations specified in Article 34.
  • notified bodies: Article 34 establishes operational obligations for notified bodies regarding verification and documentation.
  • high-risk AI systems: Article 34 applies to verification of conformity of high-risk AI systems.
  • Article 28: Article 34 references the notifying authority defined in Article 28.
  • Recommendation 2003/361/EC: Article 34 references Recommendation 2003/361/EC for definition of micro- and small enterprises.
  • Notified body requirements: Notified body requirements are established in Article 34.

Article 34(4) of Regulation (EU) 2019/1020 legal_article

Establishes reporting obligations for market surveillance authorities.

Article 35 legal_article

Article from Regulation (EU) 2016/679 concerning data protection impact assessment requirements.
  • Regulation (EU) 2016/679: Article 35 is contained within Regulation (EU) 2016/679.
  • Commission: Article 35 requires the Commission to assign identification numbers and maintain public lists of notified bodies.

Article 35 of Regulation (EU) 2016/679 legal_article

Requires data protection impact assessments under the GDPR to address risk identification for data subjects.
  • high-risk monitoring mechanisms: Article 35 is referenced as a basis for identifying high risks to data subjects' rights during sandbox experimentation.

Article 36 legal_article

Governs procedures for notifying the Commission and Member States of changes to notified body notifications.
  • Article 29: Article 36 references procedures laid down in Article 29 for extensions of notification scope.
  • Article 30: Article 36 references procedures laid down in Article 30 for extensions of notification scope.

Article 37 legal_article

Legal article addressing the challenge to the competence of notified bodies and the Commission's investigative authority.
  • Commission: Article 37 establishes the Commission's authority to investigate notified body competence.
  • notified body: Article 37 governs the challenge procedures and competence requirements for notified bodies.

Article 38 legal_article

Legal article establishing coordination requirements for notified bodies in high-risk AI system conformity assessment.
  • Notified bodies: Notified bodies must participate in coordination activities as referred to in Article 38.
  • high-risk AI systems: Article 38 establishes coordination requirements specifically for high-risk AI systems.

Article 39 legal_article

Addresses conformity assessment bodies of third countries and their authorization to carry out activities of notified bodies under the Regulation, provided they meet requirements in Article 31 or ensure equivalent compliance.
  • Article 31: Article 39 references Article 31 for requirements that conformity assessment bodies must meet.

Article 39 of Regulation (EU) 2018/1725 legal_article

A provision addressing risk assessment and mitigation mechanisms in the context of EU institutional data protection.
  • high-risk monitoring mechanisms: Article 39 is referenced as a basis for identifying high risks to data subjects' rights during sandbox experimentation.

Article 39 of the Charter legal_article

Legal article that enshrines the right to vote as a fundamental right.

Article 4 legal_article

Establishes requirements for AI literacy among providers and deployers of AI systems.
  • AI literacy: Article 4 establishes the legal obligation for AI literacy.

Article 4 (1) of Directive (EU) 2016/680 legal_article

Article establishing principles of lawfulness, fairness, transparency, purpose limitation, accuracy, and storage limitation for data processing.

Article 4(2) TEU legal_article

A provision of the Treaty on European Union that addresses Member States' responsibilities regarding military, defence, and national security matters.
  • Regulation: The exclusion of military and defence AI systems from the Regulation is justified by Article 4(2) TEU.

Article 4(3) legal_article

Article in Directive (EU) 2019/790 that addresses the reservation of rights by rightsholders.

Article 4(3) of Directive (EU) 2019/790 legal_article

Specific article addressing the reservation of rights under EU copyright law that AI model providers must comply with.
  • Directive (EU) 2019/790: Article 4(3) is contained within Directive (EU) 2019/790 and addresses reservation of rights.

Article 4, point (4) legal_article

Article in Regulation (EU) 2016/679 that defines profiling for data protection purposes.

Article 40 legal_article

Establishes provisions for harmonised standards and standardisation deliverables that confer presumption of conformity for high-risk AI systems.
  • Article 17: Article 17 references harmonised standards in Article 40 for quality management systems.
  • Harmonised standards: Article 40 references harmonised standards that confer presumption of conformity for AI systems.
  • Regulation (EU) No 1025/2012: Article 40 references Regulation (EU) No 1025/2012 regarding standardisation and publication of harmonised standards.
  • Article 43: Article 43 references Article 40 regarding harmonised standards for conformity assessment.
  • Regulation 2024/1689: The regulation contains Article 40 regarding harmonised standards or common specifications.
  • AI system: Harmonised standards referenced in Article 40 confer presumption of conformity for AI systems.
  • harmonised standards: Article 40 references harmonised standards that confer presumption of conformity for AI systems.

Article 41 legal_article

References common specifications and harmonised standards that confer presumption of conformity for high-risk AI systems.
  • harmonised standards: Article 41 establishes requirements for common specifications related to harmonised standards.
  • Article 43: Article 43 references Article 41 regarding common specifications for conformity assessment.
  • Regulation 2024/1689: The regulation contains Article 41 regarding harmonised standards or common specifications.
  • AI system: Common specifications referenced in Article 41 confer presumption of conformity for AI systems.
  • common specifications: Article 41 references common specifications that confer presumption of conformity for AI systems.

Article 42(1) and (2) of Regulation (EU) 2018/1725 legal_article

Legal article establishing the consultation procedure for data protection authorities.

Article 43 legal_article

Defines the conformity assessment procedure that high-risk AI systems must undergo prior to being placed on the market or put into service.
  • Article 16: Article 16 requires high-risk AI systems to undergo conformity assessment procedure in Article 43.
  • conformity assessment procedure: Article 43 specifies the conformity assessment procedure that must be followed.
  • high-risk AI system: High-risk AI systems are subject to the conformity assessment procedure referenced in Article 43.
  • importer: Importers are required to verify that the conformity assessment procedure referred to in Article 43 has been carried out.
  • Article 34: Article 34 references conformity assessment procedures set out in Article 43.
  • Regulation 2024/1689: Article 43 is part of the AI Regulation establishing conformity assessment procedures.
  • high-risk AI systems: Article 43 establishes conformity assessment procedures applicable to high-risk AI systems.
  • Annex III: Article 43 references Annex III which lists the high-risk AI systems subject to conformity assessment.
  • Section 2: Article 43 requires demonstration of compliance with requirements set out in Section 2.
  • Annex VI: Article 43 references Annex VI describing the internal control conformity assessment procedure.
  • Annex VII: Article 43 references Annex VII describing the quality management system assessment procedure.
  • Article 40: Article 43 references Article 40 regarding harmonised standards for conformity assessment.
  • Article 41: Article 43 references Article 41 regarding common specifications for conformity assessment.
  • Article 46: Article 46 establishes derogations from the conformity assessment procedure defined in Article 43.
  • notified body: Notified bodies are responsible for conformity assessment procedures established in Article 43.
  • Regulation (EU) 2018/1139: Regulation (EU) 2018/1139 contains Article 43 which is being amended.
  • Chapter III, Section 2: Article 43 requires that Chapter III, Section 2 requirements be taken into account when adopting implementing acts.

Article 43(4) legal_article

A legal provision that specifies when changes to AI systems require a new conformity assessment.
  • changes to AI systems: Changes to AI systems may require a new conformity assessment in accordance with Article 43(4).

Article 43(5) or (6) legal_article

Legal article authorizing delegated acts subject to objection procedure.

Article 44 legal_article

Establishes requirements for certificates issued by notified bodies for high-risk AI systems, including language, validity periods, and extension procedures.
  • Article 20: Article 20 references Article 44 regarding notified bodies issuing certificates.
  • notified bodies: Article 44 establishes requirements for certificates issued by notified bodies.

Article 45 legal_article

Legal article establishing information obligations for notified bodies regarding certificates, approvals, and conformity assessment activities.
  • notified body: Article 45 establishes information obligations that notified bodies must fulfill.

Article 46 legal_article

Establishes conformity assessment procedures with possible derogations for high-risk AI systems under exceptional reasons of public security or protection of life and health.
  • Article 43: Article 46 establishes derogations from the conformity assessment procedure defined in Article 43.
  • high-risk AI systems: Article 46 applies to specific high-risk AI systems that may be placed on the market under exceptional circumstances.
  • this Regulation: The Regulation contains Article 46 on conformity assessment procedures.

Article 46(1) legal_article

Legal article specifying conditions under which deployers may be exempt from notification obligations to market surveillance authorities.
  • deployer: Deployers may be exempt from notification obligations under conditions specified in Article 46(1).
  • fundamental rights impact assessment: Article 46(1) provides exemption from notification obligations for deployers.

Article 47 legal_article

Establishes requirements for drawing up an EU declaration of conformity for high-risk AI systems.
  • Article 16: Article 16 requires providers to draw up EU declaration of conformity in accordance with Article 47.
  • EU declaration of conformity: Article 47 establishes the requirements and procedures for the EU declaration of conformity.
  • EU declaration of conformity: Article 47 establishes the requirements and procedures for the EU declaration of conformity.
  • EU declaration of conformity: The EU declaration of conformity is required and established by Article 47.
  • EU declaration of conformity: Article 47 establishes the requirements and procedures for the EU declaration of conformity.
  • Article 83: Article 83 references Article 47 regarding EU declaration of conformity requirements.
  • Chapter III, Section 2: Article 47 requires that Chapter III, Section 2 requirements be taken into account when adopting delegated acts.
  • Regulation 2024/1689: Regulation 2024/1689 contains Article 47 which establishes EU declaration of conformity requirements.

Article 47(5) legal_article

Legal article authorizing delegated acts subject to objection procedure.

Article 48 legal_article

Defines requirements for affixing CE marking to high-risk AI systems or their packaging and documentation to indicate conformity with the Regulation.
  • CE marking: Article 48 governs the requirements for affixing CE marking.
  • Article 83: Article 83 references Article 48 regarding CE marking affixing violations.

Article 49 legal_article

Legal article establishing registration requirements for high-risk AI systems before placing them on the market or putting them into service.
  • real-time remote biometric identification systems: Biometric identification systems must be registered in the EU database according to Article 49.
  • law enforcement use authorization: The authorization requirement is established in relation to Article 49 registration procedures.
  • high-risk AI system: Article 49 establishes registration requirements that apply to high-risk AI systems.
  • deployers: Public authority and Union institution deployers must comply with registration obligations in Article 49.
  • EU database: Article 49 establishes registration obligations for high-risk AI systems in the EU database referenced in Article 71.
  • Annex III: Article 49 references Annex III which lists the high-risk AI systems subject to registration.
  • EU database: Article 49 establishes registration obligations for high-risk AI systems in the EU database referenced in Article 71.
  • high-risk AI systems: Article 49 establishes registration requirements that apply to high-risk AI systems.
  • ANNEX VIII: ANNEX VIII provides detailed information requirements in accordance with Article 49.
  • high-risk AI systems: High-risk AI systems are subject to registration requirements specified in Article 49.

Article 49(1) legal_article

Legal article establishing registration obligations for high-risk AI systems.
  • provider: The provider is subject to registration obligations referred to in Article 49(1).

Article 49(2) legal_article

Legal article that specifies registration requirements for providers of high-risk AI systems.

Article 49(3) legal_article

A legal article that requires deployers of high-risk AI systems to submit information and register such systems.

Article 49(4) legal_article

A legal article that establishes provisions for registering certain high-risk AI systems in the secure non-public section of the EU database.

Article 49(4), point (d) legal_article

Provision requiring registration of real-world testing in the secure non-public section of the EU database with a Union-wide unique identification number.

Article 49(5) legal_article

Provision requiring providers or prospective providers of high-risk AI systems to register testing in real-world conditions.
  • high-risk AI systems: High-risk AI systems must comply with registration requirements in Article 49(5).

Article 5 legal_article

Article establishing prohibitions on certain AI practices including subliminal, manipulative, and deceptive techniques that exploit vulnerabilities, subject to administrative fines of up to EUR 1,500,000.
  • Regulation /2024/1689/oj: The regulation contains Article 5 which defines certain AI systems subject to the regulation.
  • This Regulation: The regulation contains Article 5 which establishes specific requirements.
  • Prohibited AI practices: Article 5 establishes the legal obligations regarding prohibited AI practices.
  • CHAPTER II: CHAPTER II contains Article 5 on prohibited AI practices.
  • Regulation (EU) No 1025/2012: Article 5 is contained within Regulation (EU) No 1025/2012.
  • Article 60: Article 60 references prohibitions established in Article 5.
  • Article 112: Article 112 governs the revision procedure for Article 5.
  • AI system: AI systems must comply with the prohibition of AI practices referred to in Article 5.
  • prohibited AI practices: Article 5 prohibits certain AI practices subject to accelerated market surveillance.
  • Article 81: Article 81 references the prohibition of AI practices in Article 5.
  • AI system: Article 5 contains prohibitions on specific AI practices that AI systems must comply with.
  • Article 96: Article 96 references Article 5 regarding prohibited practices.
  • Article 99: Article 99 establishes penalties for non-compliance with the prohibition in Article 5.
  • prohibition of AI practices: Article 5 establishes the prohibition of certain AI practices.
  • Regulation: Article 5 is contained within the Regulation and establishes prohibited AI practices.
  • Regulation (EU) 2018/858: Regulation (EU) 2018/858 contains Article 5 which is amended.
  • Artificial Intelligence systems: The amended Article 5 applies requirements to artificial intelligence systems used as safety components.
  • Regulation (EU) 2024/1689: Article 5 references Regulation (EU) 2024/1689 for requirements on AI systems as safety components.
  • Article 111: Article 111 references Article 5 regarding application without prejudice.
  • Article 113(3): Article 113(3) contains a reference to Article 5 regarding application provisions.
  • Commission: Commission shall assess the list of prohibited AI practices in Article 5.
  • risk level evaluation methodology: The methodology guides evaluation for the list of prohibited practices in Article 5.
  • Regulation: The Regulation contains Article 5 which lists prohibited AI practices.

Article 5 of Directive (EU) 2016/797 legal_article

Article of Directive (EU) 2016/797 that is amended to include requirements for AI systems.

Article 5 of Regulation (EU) No 182/2011 legal_article

Specific article of Regulation (EU) No 182/2011 that applies to the committee procedure.
  • Article 98: Article 98 applies Article 5 of Regulation (EU) No 182/2011.

Article 5 TEU legal_article

Article 5 of the Treaty on European Union that sets out the principles of subsidiarity and proportionality.
  • This Regulation: This Regulation is adopted in accordance with the principles of subsidiarity and proportionality established in Article 5 TEU.

Article 5(1) legal_article

An article establishing prohibitions on certain uses of biometric categorisation systems and AI systems.
  • Ireland: Ireland is not bound by certain provisions of Article 5(1) regarding biometric categorisation systems.
  • biometric categorisation systems: Biometric categorisation systems are subject to restrictions under Article 5(1).
  • Regulation 2024/1689: Regulation 2024/1689 contains Article 5(1) which references criminal offences in Annex II.
  • ANNEX II: Article 5(1) references the list of criminal offences in Annex II.

Article 5(1), first subparagraph, point (d) legal_article

A provision relating to the use of AI systems in specified contexts.

Article 5(1), first subparagraph, point (g) legal_article

A provision relating to the use of biometric categorisation systems for activities in police cooperation and judicial cooperation in criminal matters.

Article 5(1), first subparagraph, point (h) legal_article

A provision of the Regulation establishing requirements for AI systems.

Article 50 legal_article

Article establishing transparency obligations for providers and deployers of certain AI systems, requiring disclosure when AI systems interact with natural persons and marking of synthetic content.
  • Regulation /2024/1689/oj: The regulation contains Article 50 which defines certain AI systems subject to the regulation.
  • This Regulation: The regulation contains Article 50 which establishes specific requirements.
  • CHAPTER IV: Article 50 is contained within Chapter IV on transparency obligations.
  • AI systems intended to interact with natural persons: Article 50 establishes transparency obligations for AI systems designed to interact directly with natural persons.
  • Machine-readable marking: Article 50 requires providers to ensure synthetic content is marked in machine-readable format as artificially generated.
  • operator: Operators must comply with Article 50.
  • AI system: Article 50 establishes compliance requirements applicable to AI systems.
  • Article 96: Article 96 references Article 50 regarding transparency obligations.
  • Transparency obligations: Transparency obligations are established in Article 50.
  • Article 112: Article 112 references AI systems requiring transparency measures in Article 50.
  • risk level evaluation methodology: The methodology guides evaluation for AI systems requiring additional transparency measures.
  • Regulation: The Regulation contains Article 50 which establishes transparency requirements.

Article 51 legal_article

Establishes criteria for designating general-purpose AI models with systemic risk based on high impact capabilities and computational thresholds.
  • Chapter V: Article 51 is contained within Chapter V governing general-purpose AI models.
  • general-purpose AI model with systemic risk: Article 51 defines the classification criteria for general-purpose AI models with systemic risk.
  • Annex XIII: Article 51 references Annex XIII for criteria regarding equivalent capabilities and impact.
  • 10^25 floating point operations: Article 51 establishes the computational threshold for presuming high impact capabilities.
  • Article 52: Article 52 references the conditions established in Article 51 for identifying general-purpose AI models with systemic risks.
  • Article 90: Article 90 references conditions established in Article 51 for triggering alert procedures.
  • general-purpose AI models with systemic risk: Article 51 defines criteria for designation of general-purpose AI models with systemic risk.

Article 51(3) legal_article

Legal article authorizing delegated acts subject to objection procedure.

Article 52 legal_article

Establishes procedural requirements for notifying the Commission when a general-purpose AI model meets systemic risk conditions and the process for designating models with systemic risk.
  • Article 51: Article 52 references the conditions established in Article 51 for identifying general-purpose AI models with systemic risks.
  • notification requirement: Article 52 establishes the legal obligation for providers to notify the Commission within two weeks when conditions are met.

Article 52(4) legal_article

Legal article authorizing delegated acts subject to objection procedure and revocation.
  • delegated act: Article authorizes delegated acts subject to objection procedure.

Article 53 legal_article

Establishes obligations for providers of general-purpose AI models, including documentation, information provision, and copyright compliance requirements.
  • provider: Article 53 establishes obligations for providers of general-purpose AI models.
  • Obligations for providers of general-purpose AI models: Article 53 establishes the legal obligations that providers of general-purpose AI models must fulfill.
  • authorised representative: Authorised representatives must verify that obligations referred to in Article 53 have been fulfilled.
  • Article 55: Article 55 references Article 53 as containing additional applicable obligations.
  • Codes of practice: Codes of practice must cover obligations provided in Article 53.
  • codes of practice: Codes of practice include obligations provided for in Article 53.
  • Commission: The Commission may request providers to take measures to comply with obligations in Article 53.

Article 53(1), point (a) legal_article

Legal article requiring general-purpose AI model providers to provide technical documentation.

Article 53(1), point (b) legal_article

Legal article requiring general-purpose AI model providers to provide transparency information and technical documentation to downstream providers.
  • ANNEX XII: Article 53(1), point (b) references the transparency information detailed in ANNEX XII.
  • providers of general-purpose AI models: Article 53(1), point (b) requires providers of general-purpose AI models to provide technical documentation.
  • downstream providers: Article 53(1), point (b) applies to downstream providers that integrate AI models into their systems.
  • Regulation 2024/1689: Regulation 2024/1689 contains Article 53(1), point (b) regarding transparency requirements.
  • technical documentation: Article 53(1), point (b) requires providers to supply technical documentation with specified information.

Article 53(5) or (6) legal_article

Legal article authorizing delegated acts subject to objection procedure and revocation.
  • delegated act: Article authorizes delegated acts subject to objection procedure.

Article 54 legal_article

Establishes regulations governing authorized representatives of providers of general-purpose AI models.
  • general-purpose AI model: Article 54 establishes rules for authorised representatives of general-purpose AI model providers.
  • authorised representative: Article 54 defines the role and responsibilities of authorised representatives.
  • Article 55: Article 55 references Article 54 as containing additional applicable obligations.
  • Commission: The Commission may request providers to take measures to comply with obligations in Article 54.

Article 54(3) of Regulation (EU) 2019/881 legal_article

Specific article establishing provisions for cybersecurity certification and conformity statements for high-risk AI systems.
  • cybersecurity requirement: Article 54(3) establishes provisions for cybersecurity requirements through certification schemes.

Article 55 legal_article

Establishes obligations for providers of general-purpose AI models with systemic risk, including model evaluation, risk assessment, incident reporting, and cybersecurity protection.
  • Regulation 2024/1689: Article 55 is part of Regulation 2024/1689 and establishes obligations for providers of general-purpose AI models with systemic risk.
  • model evaluation: Article 55 requires providers to perform model evaluation using standardized protocols and tools.
  • systemic risk assessment and mitigation: Article 55 requires providers to assess and mitigate possible systemic risks at Union level.
  • incident reporting: Article 55 requires providers to track, document, and report serious incidents to the AI Office.
  • cybersecurity protection: Article 55 requires providers to ensure adequate cybersecurity protection for models and infrastructure.
  • Article 53: Article 55 references Article 53 as containing additional applicable obligations.
  • Article 54: Article 55 references Article 54 as containing additional applicable obligations.
  • general-purpose AI models with systemic risk: Article 55 establishes obligations that apply to providers of general-purpose AI models with systemic risk.
  • codes of practice: Codes of practice can be used to demonstrate compliance with Article 55 obligations until harmonised standards are published.
  • European harmonised standards: Compliance with European harmonised standards grants presumption of conformity with Article 55 obligations.
  • Article 78: Article 55 references Article 78 regarding confidentiality obligations for information and documentation.
  • Article 56: Article 55 references Article 56 regarding codes of practice as means to demonstrate compliance.
  • Codes of practice: Codes of practice must cover obligations provided in Article 55.
  • codes of practice: Codes of practice address obligations provided for in Article 55.

Article 56 legal_article

Establishes provisions for codes of practice that providers can use to demonstrate compliance with AI model obligations.
  • codes of practice: Codes of practice are defined and referenced in Article 56 of the regulation.
  • Article 55: Article 55 references Article 56 regarding codes of practice as means to demonstrate compliance.
  • Codes of practice: Article 56 establishes the framework for codes of practice at Union level.
  • Article 101: Article 101 references Article 56 regarding codes of practice.

Article 56 (6) legal_article

A legal article that establishes the procedure for the Commission to adopt implementing acts approving codes of practice.
  • Commission: The Commission adopts implementing acts in accordance with the procedure laid down in Article 56 (6).

Article 57 legal_article

Article establishing conditions for testing high-risk AI systems in real-world conditions and requiring consideration of Chapter III, Section 2 requirements when adopting implementing acts concerning AI safety components.
  • high-risk AI systems: Article 57 applies to high-risk AI systems insofar as requirements are integrated into Union harmonisation legislation.
  • Regulation 2024/1689: Article 57 is contained within the regulation.
  • this Regulation: The Regulation contains Article 57 on testing in real world conditions.
  • Chapter III, Section 2: Article 57 requires that Chapter III, Section 2 requirements be taken into account when adopting implementing acts.

Article 58 legal_article

Establishes detailed arrangements for the establishment, operation, and supervision of AI regulatory sandboxes to enable supervised testing of AI systems.
  • AI regulatory sandboxes: Article 58 establishes detailed arrangements for and functioning of AI regulatory sandboxes.
  • AI regulatory sandboxes: Article 58 establishes the legal framework governing the detailed arrangements and functioning of AI regulatory sandboxes.
  • AI regulatory sandbox: Article 58 establishes the AI regulatory sandbox framework.
  • Regulation 2024/1689: Regulation 2024/1689 contains Article 58 on AI regulatory sandboxes.
  • Chapter III, Section 2: Article 58 requires that Chapter III, Section 2 requirements be taken into account when adopting delegated acts.

Article 59 legal_article

Article addressing further processing of personal data for developing certain AI systems in the public interest within the AI regulatory sandbox, with exceptions to data protection regulations.
  • Regulation /2024/1689/oj: The regulation contains Article 59 which provides exceptions to the non-applicability of data protection regulations.
  • Regulation: Article 59 is contained within the Regulation governing AI systems.
  • personal data: Article 59 governs the further processing of personal data for AI system development in the sandbox.
  • public safety and public health: Article 59 applies to AI systems developed for public safety and public health purposes.
  • this Regulation: The Regulation contains Article 59 on testing in real world conditions.

Article 6 legal_article

Legal article that defines high-risk AI systems and establishes classification criteria and rules for determining whether an AI system qualifies as high-risk.
  • CHAPTER III: Article 6 is contained within Chapter III on high-risk AI systems.
  • high-risk AI systems: Article 6 defines the classification rules and conditions for determining whether an AI system is high-risk.
  • Annex I: Article 6 references Annex I which lists the Union harmonisation legislation applicable to high-risk AI systems.
  • Annex III: Article 6 references Annex III which lists additional AI systems considered high-risk, subject to derogation provisions.
  • high-risk AI system: High-risk AI systems are classified according to criteria in Article 6.
  • Regulation (EU) No 1025/2012: Article 6 is contained within Regulation (EU) No 1025/2012.
  • High-risk AI systems: Article 6 defines and classifies high-risk AI systems.

Article 6 TEU legal_article

A provision of the Treaty on European Union that references the Charter of Fundamental Rights as a basis for the EU legal framework governing AI development.
  • AI: AI development should comply with fundamental rights and freedoms pursuant to Article 6 TEU and the Charter.
  • Charter: Article 6 TEU references the Charter as a legal basis.

Article 6(1) legal_article

Establishes the classification criteria for high-risk AI systems.

Article 6(2) legal_article

Legal article establishing criteria for classifying high-risk AI systems and requiring fundamental rights impact assessments.
  • Article 27: Article 27 references Article 6(2) to define the scope of high-risk AI systems requiring impact assessments.
  • ANNEX III: ANNEX III provides the detailed list of high-risk AI systems referred to in Article 6(2).

Article 6(3) legal_article

Establishes criteria for determining whether an AI system is not high-risk.
  • high-risk AI system: Article 6(3) establishes criteria for determining whether an AI system is high-risk or not.
  • Article 80: Article 80 references Article 6(3) for the conditions of AI system classification.
  • high-risk AI systems: High-risk AI systems may be reclassified as not-high-risk based on conditions in Article 6(3).
  • Regulation (EU) 2024/1689: Article 6(3) is contained within Regulation (EU) 2024/1689.

Article 6(4) legal_article

Article from Regulation (EU) 2016/679 specifying conditions for lawful processing of personal data.
  • personal data: Article 6(4) specifies conditions for reusing personal data collected for other purposes in the AI regulatory sandbox.

Article 6(6) or (7) legal_article

Legal article authorizing delegated acts subject to objection procedure.

Article 60 legal_article

Legal article establishing conditions and requirements for testing high-risk AI systems in real-world conditions outside regulatory sandboxes.
  • Regulation 2024/1689: Article 60 is contained within the regulation.
  • High-risk AI systems: Testing procedures for high-risk AI systems may include testing in real-world conditions in accordance with Article 60.
  • high-risk AI systems: Article 60 establishes rules for testing high-risk AI systems in real world conditions.
  • real-world testing plan: Article 60 requires providers to follow a real-world testing plan for high-risk AI systems.
  • Article 5: Article 60 references prohibitions established in Article 5.
  • Annex I: Article 60 references Union harmonisation legislation listed in Annex I.
  • Article 61: Article 61 references Article 60 as the basis for testing in real world conditions requirements.
  • this Regulation: The Regulation contains Article 60 on testing in real world conditions.
  • EU database: Article 60 contains provisions regarding restricted access to certain information in the EU database.
  • testing in real world conditions: Article 60 defines conditions and requirements for testing AI systems in real world conditions.
  • market surveillance authorities: Market surveillance authorities must verify compliance with Article 60 as part of their supervisory role.
  • Regulation 2024/1689: Regulation 2024/1689 contains Article 60 on testing conditions.
  • Regulation (EU) 2024/1689: Regulation (EU) 2024/1689 contains Article 60 governing registration of high-risk AI systems.
  • testing in real world conditions: Article 60 requires registration of high-risk AI systems undergoing testing in real world conditions.

Article 60(4) point (c) legal_article

Specific provision requiring a Union-wide unique single identification number for testing in real world conditions.

Article 61 legal_article

Establishes requirements for informed consent and additional conditions for testing AI systems in real-world conditions outside regulatory sandboxes.
  • Informed consent: Informed consent requirements are established in accordance with Article 61.
  • testing in real world conditions: Article 61 establishes informed consent requirements for participation in testing in real world conditions.
  • informed consent: Article 61 establishes the legal obligation for informed consent in real world conditions testing.
  • Article 60: Article 61 references Article 60 as the basis for testing in real world conditions requirements.
  • provider: Article 61 requires providers to obtain informed consent and provide contact details to testing subjects.
  • AI system: Article 61 governs the treatment of AI system predictions and decisions in real world conditions testing through consent and reversal arrangements.
  • testing in real world conditions: Article 61 sets additional conditions for testing AI systems in real world conditions.
  • Regulation 2024/1689: Regulation 2024/1689 contains Article 61 on additional testing conditions.

Article 62 legal_article

Article establishing measures for providers and deployers, particularly SMEs and start-ups, including access to AI regulatory sandboxes and support mechanisms.
  • AI regulatory sandboxes: Article 62 establishes priority access measures for SMEs and start-ups to AI regulatory sandboxes.
  • conformity assessment: Article 62 references Article 43 regarding conformity assessment fees for SMEs.
  • informed consent documentation: Article 62 requires that informed consent be dated, documented, and a copy provided to testing subjects.

Article 62(1), point (c) legal_article

A legal article that establishes provisions for non-binding guidance on conformity of innovative products and services embedding AI technologies.
  • AI regulatory sandboxes: Article 62(1), point (c) provides for non-binding guidance on conformity of innovative AI products and services within sandboxes.

Article 63 legal_article

Article providing derogations for specific operators, particularly microenterprises, regarding simplified compliance with quality management system requirements.
  • Regulation 2024/1689: The Regulation contains Article 63 providing derogations for specific operators.
  • Recommendation 2003/361/EC: Article 63 references Recommendation 2003/361/EC for the definition of microenterprises.
  • quality management system: Article 63 allows simplified compliance with quality management system requirements for microenterprises.

Article 64 legal_article

Article establishing the AI Office and tasking the Commission with developing Union expertise and capabilities in AI.

Article 65 legal_article

Establishes the European Artificial Intelligence Board and defines its structure and composition.

Article 66 legal_article

Defines the tasks and responsibilities of the European Artificial Intelligence Board in advising and assisting the Commission and Member States.

Article 67 legal_article

Establishes an advisory forum whose representatives may be invited to Board sub-groups and defines its membership composition and appointment procedures.
  • advisory forum: Article 67 establishes the advisory forum as an institution to provide technical expertise and advice.

Article 68 legal_article

Establishes criteria for independent experts and the composition of the scientific panel.
  • Regulation 2024/1689: Article 68 is contained within Regulation 2024/1689.
  • Article 98(2): Article 68 references Article 98(2) regarding the examination procedure for implementing acts.
  • Article 92: Article 92 references Article 68 regarding criteria for independent experts and the scientific panel.

Article 68(1) legal_article

Legal article that establishes the implementing act for setting fees and recoverable costs structure.
  • Regulation: Article 68(1) is contained within the Regulation and establishes implementing acts for fees.

Article 68(2) legal_article

A legal article that defines the tasks of the scientific panel regarding general-purpose AI models.

Article 7 legal_article

Article 7 establishes the Commission's power to adopt delegated acts to amend Annex III by adding or modifying use-cases of high-risk AI systems.
  • delegated acts: Article 7 establishes the Commission's power to adopt delegated acts.
  • Article 97: Article 7 references Article 97 as the procedural basis for adopting delegated acts.
  • fundamental rights: Article 7 requires that amendments ensure consistency with safety and fundamental rights protections.
  • Regulation (EU) No 1025/2012: Article 7 is contained within Regulation (EU) No 1025/2012.
  • Annex III: Article 7 governs the amendment procedure for Annex III.

Article 7(1) legal_article

A legal article whose delegated acts must be considered for consistency when amending conditions in paragraph 3.

Article 7(1) or (3) legal_article

Legal article authorizing delegated acts subject to objection procedure.

Article 70 legal_article

Legal article establishing the designation of national competent authorities and single points of contact.
  • Regulation: Article 70 is contained within the Regulation and establishes national competent authorities.

Article 71 legal_article

Legal article establishing the EU database for registration of high-risk AI systems and AI systems assessed as non-high-risk.
  • EU database: The EU database is referred to in Article 71.
  • high-risk AI system: High-risk AI systems must be registered in the EU database referred to in Article 71.
  • this Regulation: The Regulation contains Article 71 establishing the EU database.
  • EU database for high-risk AI systems: Article 71 establishes the EU database for high-risk AI systems.
  • Article 83: Article 83 references Article 71 regarding EU database registration requirements.

Article 71(4) legal_article

A legal article that specifies requirements for registering real-world testing of high-risk AI systems with a Union-wide unique single identification number.
  • high-risk AI systems: High-risk AI systems must be registered according to the requirements specified in Article 71(4).

Article 72 legal_article

Article 72 establishes post-market monitoring requirements for high-risk AI systems, including monitoring plans and performance evaluation obligations.

Article 73 legal_article

Establishes requirements for reporting serious incidents involving high-risk AI systems to national market surveillance authorities.
  • serious incident reporting: Serious incident reporting procedures are established in Article 73.
  • deployer: Article 73 applies mutatis mutandis when a deployer cannot reach the provider.
  • serious incident: Serious incidents identified during testing must be reported in accordance with Article 73.
  • this Regulation: The Regulation contains Article 73 on serious incident reports.
  • high-risk AI systems: Article 73 establishes reporting requirements that apply to high-risk AI systems placed on the Union market.
  • serious incident: Article 73 requires providers to report serious incidents to market surveillance authorities.

Article 74 legal_article

Establishes provisions for market surveillance authorities' access to technical documentation and control of AI systems in the Union market.

Article 74(10) legal_article

Legal article that references national authorities or bodies involved in AI system oversight.

Article 74(11) legal_article

A legal article that refers to cross-border market surveillance activities.

Article 74(8) legal_article

Identifies national authorities with access to restricted sections of the EU database.
  • EU database: Article 74(8) defines which national authorities have access to restricted sections of the EU database.

Article 74(9) legal_article

Legal article referencing market surveillance authority responsibilities.

Article 75 legal_article

Grants Member States authority to confer powers on market surveillance authorities regarding AI system testing oversight and control.

Article 75(3) legal_article

Provision referenced in relation to market surveillance authorities' powers.

Article 77 legal_article

Establishes powers of national public authorities and bodies to protect fundamental rights in relation to high-risk AI systems, including access to documentation and testing authority.
  • high-risk AI systems: Article 77 establishes regulatory powers and obligations regarding high-risk AI systems.
  • market surveillance authorities: Article 77 requires market surveillance authorities to organize testing and maintain communication with public authorities.
  • documentation: Article 77 requires access to documentation created or maintained under the Regulation.
  • Union law protecting fundamental rights: Article 77 is based on Union law protecting fundamental rights including non-discrimination.

Article 77(1) legal_article

Specifies that market surveillance authorities must inform national public authorities or bodies of serious incidents involving AI systems.

Article 78 legal_article

Establishes confidentiality obligations for information and documentation obtained by competent authorities, notified bodies, and other entities involved in the application of the Regulation.
  • competent authority: Information obtained by competent authorities must be treated in accordance with confidentiality obligations in Article 78.
  • Notifying authorities: Notifying authorities must safeguard confidentiality in accordance with Article 78.
  • Confidentiality obligation: The confidentiality obligation references Article 78 for specific requirements.
  • Notified bodies: Notified bodies must maintain confidentiality in accordance with Article 78.
  • Commission: The Commission must treat sensitive information confidentially in accordance with Article 78.
  • notified body: Notified bodies must safeguard confidentiality of information in accordance with Article 78.
  • confidentiality obligations: Confidentiality obligations are set out in Article 78.
  • Article 55: Article 55 references Article 78 regarding confidentiality obligations for information and documentation.
  • Commission: Confidentiality obligations set out in Article 78 apply to information obtained in Commission assessments.
  • Exit report: Exit reports are subject to confidentiality provisions in Article 78.
  • Regulation 2024/1689: The regulation contains Article 78 which establishes confidentiality obligations.
  • national competent authorities: National competent authorities must act in accordance with confidentiality obligations set out in Article 78.
  • Market surveillance authorities: Information obtained by market surveillance authorities must be treated in accordance with confidentiality obligations in Article 78.
  • market surveillance authority: Market surveillance authorities must safeguard confidentiality of information in accordance with Article 78.
  • documentation: Article 78 establishes confidentiality obligations for documentation obtained by public authorities.
  • Confidentiality: Article 78 establishes the confidentiality obligation for entities involved in applying the Regulation.
  • Directive (EU) 2016/943: Article 78 references Directive (EU) 2016/943 regarding exceptions to confidentiality for intellectual property protection.
  • Regulation (EU) 2019/1020: Article 78 references Regulation (EU) 2019/1020 in relation to the exercise of powers by authorities.

Article 79 legal_article

Article 79 defines risk criteria for high-risk AI systems and establishes procedures for dealing with systems presenting risks to health, safety, or fundamental rights.
  • Article 12: Article 12 references Article 79 for the definition of risk in high-risk AI systems.
  • high-risk AI system: Article 79 defines risk criteria for high-risk AI systems.
  • Regulation (EU) 2019/1020: Article 79 references the definition of 'product presenting a risk' from Regulation (EU) 2019/1020.
  • vulnerable groups: Article 79 requires particular attention to be given to AI systems presenting a risk to vulnerable groups.
  • market surveillance authority: Market surveillance authority must perform evaluation under Article 79.

Article 79(1) legal_article

Defines risk criteria for high-risk AI systems that trigger immediate investigation and reporting obligations to authorities.
  • Article 20: Article 20 references Article 79(1) to define risk criteria triggering investigation and reporting.
  • high-risk AI system: The definition of risk for high-risk AI systems is provided in Article 79(1).

Article 79(5) legal_article

Establishes procedures for corrective action and notification of national measures in market surveillance.
  • Article 81: Article 81 references the notification procedure in Article 79(5).

Article 8 legal_article

Article establishing compliance requirements for high-risk AI systems and provider responsibilities.
  • high-risk AI systems: High-risk AI systems shall comply with requirements laid down in Article 8 and Section 2.
  • risk management system: Article 8 requires that the risk management system referred to in Article 9 be taken into account when ensuring compliance.
  • Union harmonisation legislation: Article 8 references Union harmonisation legislation listed in Annex I Section A that applies to products containing AI systems.

Article 8 of Directive (EU) 2016/680 legal_article

Article within Directive (EU) 2016/680 that provides legal basis for processing personal data.

Article 80 legal_article

Establishes procedures for market surveillance authorities to evaluate and deal with AI systems classified by providers as non-high-risk in application of Annex III.
  • Annex III: Article 80 references Annex III for classification criteria of non-high-risk AI systems.
  • Article 6(3): Article 80 references Article 6(3) for the conditions of AI system classification.
  • provider: Provider is subject to requirements and obligations established in Article 80.

Article 81 legal_article

Establishes the Union safeguard procedure for resolving disputes between market surveillance authorities and the Commission.
  • Article 79(5): Article 81 references the notification procedure in Article 79(5).
  • Article 5: Article 81 references the prohibition of AI practices in Article 5.

Article 82 legal_article

Addresses compliant AI systems which present a risk, establishing procedures for market surveillance authorities to require corrective measures.

Article 83 legal_article

Addresses formal non-compliance procedures for market surveillance authorities regarding CE marking, EU declarations of conformity, and technical documentation requirements for AI systems.
  • CE marking: Article 83 references CE marking requirements and violations thereof.
  • Article 48: Article 83 references Article 48 regarding CE marking affixing violations.
  • Article 47: Article 83 references Article 47 regarding EU declaration of conformity requirements.
  • Article 71: Article 83 references Article 71 regarding EU database registration requirements.
  • high-risk AI system: Article 83 establishes formal non-compliance procedures applicable to high-risk AI systems.

Article 84 legal_article

Establishes the designation of Union AI testing support structures to perform specific tasks in the area of artificial intelligence.
  • Regulation: Article 84 is contained within the Regulation and establishes Union AI testing support.
  • Commission: Article 84 requires the Commission to designate Union AI testing support structures.
  • Union AI testing support structures: Article 84 establishes the framework for designating Union AI testing support structures.
  • Regulation (EU) 2019/1020: Article 84 references Regulation (EU) 2019/1020 for defining tasks of testing support structures.

Article 85 legal_article

Legal article establishing the right to lodge complaints with market surveillance authorities for infringements of AI regulation provisions.

Article 86 legal_article

Legal article establishing the right to explanation of individual decision-making for persons affected by high-risk AI systems.
  • Regulation (EU) 2024/1689: Regulation (EU) 2024/1689 contains Article 86 on the right to explanation of individual decision-making.
  • high-risk AI system: Article 86 applies to decisions taken on the basis of output from high-risk AI systems listed in Annex III.

Article 87 legal_article

Establishes requirements for reporting of infringements and protection of reporting persons under the Regulation.
  • Directive (EU) 2019/1937: Article 87 references Directive (EU) 2019/1937 for reporting infringements and protection of reporting persons.

Article 88 legal_article

Defines enforcement powers for obligations of providers of general-purpose AI models, assigning exclusive supervisory authority to the Commission.
  • Chapter V: Article 88 establishes enforcement mechanisms for obligations contained in Chapter V.
  • Article 94: Article 88 references Article 94 for procedural guarantees in enforcement actions.
  • general-purpose AI models: Article 88 establishes enforcement obligations applicable to providers of general-purpose AI models.

Article 89 legal_article

Article establishing monitoring actions by the AI Office to ensure compliance with the Regulation by providers of general-purpose AI models.

Article 9 legal_article

Legal article establishing risk management system requirements for high-risk AI systems and addressing serious risk AI systems across multiple Member States.

Article 9 of Regulation (EU) 2016/679 legal_article

Article addressing the processing of special categories of personal data including biometric data.

Article 9(1) legal_article

Article of Regulation (EU) 2016/679 that prohibits the processing of biometric data subject to limited exceptions.

Article 9(1) of Regulation (EU) 2016/679 legal_article

Legal article protecting sensitive personal attributes and characteristics in biometric data processing.

Article 9(2) legal_article

Article 9(2) defines risks to health, safety, and fundamental rights that must be considered in the context of high-risk AI systems.
  • Instructions for use: Instructions for use must address risks to health, safety and fundamental rights as defined in Article 9(2).

Article 9(2), point (g) legal_article

Article from Regulation (EU) 2016/679 addressing processing of special categories of personal data.
  • personal data: This article establishes conditions for processing special categories of personal data in the sandbox.

Article 9(2), point (g) of Regulation (EU) 2016/679 legal_article

Legal provision allowing processing of special categories of personal data for substantial public interest purposes.

Article 90 legal_article

Establishes procedures enabling the scientific panel to provide qualified alerts to the AI Office regarding systemic risks posed by general-purpose AI models at Union level.
  • Regulation 2024/1689: Article 90 is contained within Regulation 2024/1689.
  • Article 51: Article 90 references conditions established in Article 51 for triggering alert procedures.

Article 90(1), point (a) legal_article

A legal article that establishes the procedure for qualified alerts from the scientific panel regarding systemic risks.
  • scientific panel: Article 90(1) point (a) establishes the procedure for qualified alerts from the scientific panel.

Article 91 legal_article

Establishes procedures and powers for the AI Office to request documentation and information from providers of general-purpose AI models.
  • Commission: Article 91 establishes the power of the Commission to request documentation and information from providers.
  • Article 92: Article 92 references Article 91 regarding information gathering procedures.
  • Article 101: Article 101 references Article 91 regarding requests for documents or information.

Article 92 legal_article

Legal article establishing procedures for the Commission to evaluate general-purpose AI models for compliance assessment and systemic risk investigation at Union level.
  • AI Office: Article 92 establishes the power of the AI Office to conduct evaluations of general-purpose AI models.
  • Article 91: Article 92 references Article 91 regarding information gathering procedures.
  • Article 101: Article 92 references Article 101 regarding fines for non-compliance.
  • Article 68: Article 92 references Article 68 regarding criteria for independent experts and the scientific panel.
  • systemic risk: Article 92 establishes evaluation procedures that assess systemic risks in general-purpose AI models.
  • systemic risk at Union level: Systemic risk assessment is based on evaluation carried out in accordance with Article 92.
  • Article 101: Article 101 references Article 92 regarding Commission evaluation of AI models.

Article 93 legal_article

Legal article establishing provisions for measures and commitments that AI model providers must comply with.
  • Article 101: Article 101 references Article 93 regarding measures and commitments from AI model providers.

Article 94 legal_article

Contains procedural guarantees and rights applicable to enforcement actions and economic operators of general-purpose AI models.
  • Article 88: Article 88 references Article 94 for procedural guarantees in enforcement actions.
  • Regulation (EU) 2019/1020: Article 94 references and applies Article 18 of Regulation (EU) 2019/1020 to general-purpose AI model providers.
  • Article 18: Article 94 applies Article 18 mutatis mutandis to providers of general-purpose AI models.

Article 95 legal_article

Establishes provisions for voluntary codes of conduct applicable to AI providers for specific AI system requirements.
  • codes of conduct: Article 95 references voluntary codes of conduct applicable to AI providers.
  • Chapter III, Section 2: Article 95 establishes codes of conduct for voluntary application of requirements from Chapter III, Section 2.
  • high-risk AI systems: Article 95 applies to AI systems other than high-risk AI systems, establishing voluntary requirements.

Article 96 legal_article

Article establishing the Commission's obligation to develop guidelines for practical implementation of AI system classification and assessment.
  • Commission: Commission guidelines must be provided in line with Article 96.
  • AI systems: Article 96 provides guidelines for practical implementation of AI system classification.
  • AI Regulation: Article 96 is contained within the AI Regulation.
  • Articles 8 to 15: Article 96 references Articles 8 to 15 regarding requirements and obligations.
  • Article 25: Article 96 references Article 25 regarding requirements and obligations.
  • Article 5: Article 96 references Article 5 regarding prohibited practices.
  • Article 50: Article 96 references Article 50 regarding transparency obligations.
  • Article 3, point (1): Article 96 references Article 3, point (1) regarding the definition of an AI system.
  • Article 99: Article 99 references guidelines issued by the Commission pursuant to Article 96.

Article 97 legal_article

Article 97 establishes the procedure for the Commission to adopt delegated acts to amend regulatory lists, thresholds, benchmarks, and other provisions under the Regulation.
  • The Commission: The Commission is empowered to adopt delegated acts in accordance with Article 97.
  • Annex III: Article 97 provides the procedure for amending conditions related to AI systems falling under Annex III.
  • Article 7: Article 7 references Article 97 as the procedural basis for adopting delegated acts.
  • Commission: The Commission is empowered to adopt delegated acts in accordance with Article 97.
  • Annex III: Article 97 empowers the Commission to amend the list in Annex III by removing high-risk AI systems.
  • Commission: The Commission is empowered to adopt delegated acts in accordance with Article 97.
  • The Commission: The Commission's power to adopt delegated acts is based on Article 97.
  • The Commission: The Commission is empowered to adopt delegated acts in accordance with Article 97.
  • high-risk AI systems: Article 97 empowers the Commission to amend provisions regarding high-risk AI systems referred to in Annex III.
  • Commission: Article 97 empowers the Commission to adopt delegated acts.
  • Commission: Commission adoption of delegated acts is subject to the procedure established in Article 97.
  • Commission: Article 97 governs the procedure for the Commission to adopt delegated acts.
  • Delegation of Power: Article 97 governs the exercise of delegation of power to the Commission.

Article 97(2) legal_article

Provision that empowers the Commission to adopt delegated acts for amending regulatory annexes.
  • Commission: Commission's power to adopt delegated acts is based on Article 97(2).

Article 98 legal_article

Establishes the committee procedure for assisting the Commission in adopting implementing acts, referencing Regulation (EU) No 182/2011.

Article 98(2) legal_article

Establishes the examination procedure for adopting implementing acts by the Commission.
  • Commission: The Commission's implementing acts regarding suspension or withdrawal must follow the examination procedure in Article 98(2).
  • implementing acts: Implementing acts are adopted in accordance with the examination procedure referred to in Article 98(2).
  • Article 68: Article 68 references Article 98(2) regarding the examination procedure for implementing acts.
  • scientific panel: Article 98(2) establishes the examination procedure for adopting the implementing act establishing the scientific panel.
  • Commission: The implementing act shall be adopted in accordance with the examination procedure referred to in Article 98(2).
  • post-market monitoring plan: The post-market monitoring plan must be adopted in accordance with the examination procedure in Article 98(2).
  • Commission: The Commission must adopt implementing acts in accordance with the examination procedure in Article 98(2).

Article 99 legal_article

Establishes penalties and enforcement measures for infringements of the Regulation, including administrative fines up to EUR 35 million or 7% of annual turnover.
  • provider: Provider is subject to fines under Article 99 for non-compliance.
  • AI system: Non-compliant AI system providers are subject to fines under Article 99.
  • Article 5: Article 99 establishes penalties for non-compliance with the prohibition in Article 5.
  • Member States: Article 99 requires Member States to lay down penalty rules and enforcement measures.
  • Article 96: Article 99 references guidelines issued by the Commission pursuant to Article 96.

Article 99(1) legal_article

A legal article that establishes administrative fines as penalties for infringements of the Regulation.
  • Regulation: The Regulation contains Article 99(1) which establishes administrative fines.

Article R23 of Annex I to Decision No 768/2008/EC legal_article

Legal article establishing the electronic notification tool for notifying conformity assessment bodies.
  • notified bodies: Notifications of notified bodies are sent through the electronic notification tool established by this legal article.
  • Commission: The Commission develops and manages the electronic notification tool referenced in this legal article.

Articles 102 to 109 legal_article

Contains requirements applicable to high-risk AI systems covered by Union harmonisation legislation.

Articles 40 and 41 legal_article

Articles referencing harmonised standards and common specifications for AI.
  • Commission: The Commission shall take account of harmonised standards and common specifications referred to in Articles 40 and 41.

Articles 5 and 6 of Regulation (EU) No 1025/2012 legal_article

Articles establishing requirements for balanced representation of interests in standardisation development.

Articles 53 and 55 legal_article

Establish documentation and obligation requirements for providers of general-purpose AI models that codes of practice must address.

Articles 79 to 83 legal_article

Procedural articles in the regulation that do not apply to AI systems related to products covered by Union harmonisation legislation with equivalent protection procedures.
  • market surveillance authority: Procedural articles do not apply to AI systems where equivalent protection procedures already exist in sectoral legislation.

Articles 8 to 15 legal_article

Articles containing requirements and obligations applicable to AI systems.
  • Article 96: Article 96 references Articles 8 to 15 regarding requirements and obligations.

Articles 91 to 94 legal_article

A series of legal articles governing measures related to general-purpose AI models and compliance assessment.

artificial intelligence systems ai_system

Systems subject to harmonised rules under the AI Regulation when they function as safety components or are otherwise regulated, requiring compliance with technical specifications and approval procedures.
  • REGULATION (EU) 2024/1689: Regulation (EU) 2024/1689 establishes harmonised rules governing the development, placing on market, putting into service and use of AI systems.
  • Regulation (EU) 2024/1689: Regulation (EU) 2024/1689 establishes harmonised rules governing the development, placing on market, putting into service and use of AI systems.
  • safety components: Artificial intelligence systems can be classified as safety components under the Artificial Intelligence Act.
  • Chapter III, Section 2: Artificial intelligence systems that are safety components must comply with requirements set out in Chapter III, Section 2 of Regulation (EU) 2024/1689.
  • Regulation (EU) 2024/1689: Regulation (EU) 2024/1689 governs artificial intelligence systems that function as safety components.

Artificial Intelligence systems which are safety components ai_system

AI systems that function as safety components as defined in Regulation (EU) 2024/1689.
  • Chapter III, Section 2: Chapter III, Section 2 establishes requirements that govern AI systems classified as safety components.

asylum authorities market_actor

Authorities responsible for asylum matters and conducting identity checks.
  • information systems: Information systems are used by asylum authorities for identity identification.

attestation of competence documentation

Documentation that attests to the competence of a conformity assessment body for notification purposes.

audit report documentation

Report provided by notified body documenting audit findings and additional tests conducted on AI systems.

authorisation requirement legal_obligation

Requirement under the Regulation that use of real-time remote biometric identification systems for law enforcement purposes must be authorized.
  • this Regulation: The Regulation establishes that use of real-time remote biometric identification systems for law enforcement must be subject to authorization.

authorised representative market_actor

A natural or legal person established in the Union appointed by third-country providers to perform compliance obligations, procedures, and serve as a contact point for regulatory matters.
  • providers established in third countries: Third-country providers must appoint an authorised representative established in the Union.
  • high-risk AI systems: The authorised representative ensures compliance of high-risk AI systems placed on the market or put into service in the Union.
  • Article 3: Article 3 defines the role and responsibilities of an authorised representative.
  • provider: Providers established in third countries must appoint an authorised representative in the Union before making high-risk AI systems available on the market.
  • high-risk AI system: Authorised representatives perform tasks specified in their mandate related to high-risk AI systems.
  • Article 22: Article 22 establishes requirements for authorised representatives of providers of high-risk AI systems.
  • EU declaration of conformity: The authorised representative must verify that the EU declaration of conformity has been drawn up.
  • technical documentation: Authorised representatives must verify that technical documentation specified in Annex XI has been drawn up.
  • conformity assessment procedure: The authorised representative must verify that an appropriate conformity assessment procedure has been carried out.
  • competent authorities: The authorised representative must provide information and documentation to competent authorities upon reasoned request.
  • provider: Authorised representatives are appointed by and act on behalf of providers in regulatory matters.
  • market surveillance authority: The authorised representative must inform the market surveillance authority of mandate termination.
  • notified body: The authorised representative must inform the notified body of mandate termination where applicable.
  • Article 54: Article 54 defines the role and responsibilities of authorised representatives.
  • Article 53: Authorised representatives must verify that obligations referred to in Article 53 have been fulfilled.
  • AI Office: Authorised representatives must provide documentation and information to the AI Office upon request and can be addressed on compliance issues.
  • competent authorities: The authorised representative can be addressed by competent authorities on compliance issues.
  • provider: Authorised representatives are appointed by and act on behalf of providers in regulatory matters.

Authorised representative obligations legal_obligation

Obligations imposed on authorised representatives pursuant to Article 22, subject to administrative fines for non-compliance.
  • Article 22: Authorised representative obligations are established in Article 22.
  • SMEs: SMEs are subject to reduced administrative fines for non-compliance with authorised representative obligations.

authorised representatives market_actor

Representatives of providers not established in the Union.

authorization legal_obligation

A formal permission issued by market surveillance authorities allowing high-risk AI systems to be put into service.
  • high-risk AI system: High-risk AI systems require authorization before being put into service.
  • market surveillance authority: Market surveillance authorities issue authorizations for high-risk AI systems.
  • Section 2: Authorizations are issued only if the high-risk AI system complies with Section 2 requirements.
  • Union law: Authorizations must comply with Union law as assessed by the Commission.
  • Commission: The Commission evaluates whether authorizations are justified and comply with Union law.
  • Member State: Member States can raise objections to authorizations issued by other Member States' market surveillance authorities.

authorization requirement for real-time biometric identification legal_obligation

Obligation for law enforcement to obtain authorization before using real-time biometric identification systems.

automatic recording of events technical_requirement

Technical requirement for high-risk AI systems to automatically record logs of events throughout their operational lifetime.
  • high-risk AI systems: High-risk AI systems must technically allow for automatic recording of events through logs over their lifetime.
  • Article 12: Article 12 requires high-risk AI systems to technically allow for automatic recording of events over their lifetime.

automatically generated logs data_category

Logs automatically generated by high-risk AI systems that must be maintained by providers and may be requested by competent authorities.
  • Article 19: Article 19 requires providers to keep automatically generated logs for appropriate periods of at least six months.
  • Article 12(1): Article 12(1) establishes the requirement for automatically generated logs by high-risk AI systems.
  • provider: Providers must provide access to automatically generated logs upon request by competent authorities.
  • Article 12(1): Article 12(1) references automatically generated logs of high-risk AI systems.

Automation bias evaluation_criterion

The tendency of natural persons to automatically rely or over-rely on outputs produced by high-risk AI systems, particularly for information or decision-support systems.
  • Human oversight: Natural persons assigned human oversight must remain aware of the tendency to over-rely on AI system outputs.

autonomous robots ai_system

Increasingly autonomous systems used in manufacturing and personal assistance and care contexts that must safely operate in complex environments.

benchmarks and evaluations of capabilities evaluation_criterion

Criterion assessing model capabilities including task adaptability, autonomy level, scalability, and tool access.

benchmarks and indicators for model capability evaluation_criterion

Tools and metrics used to assess high-impact capabilities of general-purpose AI models as strong predictors of generality, capabilities, and associated systemic risk.
  • AI Office: The AI Office engages with stakeholders to establish thresholds, tools and benchmarks for assessing high-impact capabilities.

Benchmarks and measurement methodologies technical_requirement

Standards and methods for measuring appropriate levels of accuracy and robustness in high-risk AI systems, to be developed by the Commission with stakeholders.
  • Commission: The Commission shall encourage the development of benchmarks and measurement methodologies for high-risk AI systems.

Bias and discriminatory effects evaluation_criterion

Assessment criteria for AI systems, particularly regarding discrimination based on age, ethnicity, race, sex, or disabilities.

bias detection and correction legal_obligation

Mandatory requirement for providers of high-risk AI systems to detect and correct biases, including through processing of special categories of personal data under strict conditions.

bias detection and mitigation technical_requirement

Measures required to detect, prevent and mitigate possible biases that could affect health, safety, fundamental rights or lead to discrimination.
  • Union law: Bias mitigation measures ensure compliance with Union law prohibitions on discrimination.

bias mitigation technical_requirement

Requirement to address and reduce biases in data sets that could lead to discrimination or negative impacts on vulnerable groups.
  • data sets: Data sets used in high-risk AI systems are subject to bias mitigation requirements.
  • feedback loops: Feedback loops in AI systems require bias mitigation to prevent amplification of existing discrimination.

biometric AI systems ai_system

AI systems intended for biometric use that require third-party conformity assessment as an exception.
  • Regulation: The regulation requires third-party conformity assessment for biometric AI systems as an exception.

biometric categorisation data_category

The assignment of natural persons to specific categories based on their biometric data, including characteristics such as sex, age, hair colour, eye colour, tattoos, and behavioral or personality traits.
  • Digital Services Act: The regulation defines the notion of biometric categorisation and its scope of application.
  • biometric data: Biometric categorisation is based on biometric data of natural persons.

biometric categorisation for law enforcement ai_system

An AI system used by law enforcement for categorizing individuals based on biometric characteristics.
  • Regulation 2024/1689: The regulation contains specific rules restricting the use of AI systems for biometric categorisation in law enforcement.

biometric categorisation system ai_system

An AI system that categorizes individuals based on biometric data such as facial features or fingerprints to deduce characteristics like race, political opinions, or sexual orientation.

Biometric categorisation systems ai_system

AI systems that use biometric data to deduce or infer individuals' sensitive personal characteristics such as political opinions, religious beliefs, race, or sexual orientation.
  • Regulation: The regulation prohibits biometric categorisation systems that deduce or infer protected personal characteristics.
  • Biometric data: Biometric categorisation systems operate on and process biometric data.
  • high-risk classification: Biometric categorisation systems based on sensitive attributes are classified as high-risk.
  • Article 9(1) of Regulation (EU) 2016/679: Biometric categorisation systems are subject to the protections of Article 9(1) regarding sensitive attributes.
  • ANNEX III: Biometric categorisation systems are classified as high-risk AI systems in ANNEX III.

biometric data data_category

Personal data relating to the physical, physiological, or behavioral characteristics of natural persons, such as facial images, fingerprints, or eye color, used for identification, authentication, categorization, and emotion recognition purposes.

biometric data processing technical_requirement

Processing of biometric data by AI systems to identify, infer emotions or intentions, or assign persons to specific categories.

biometric identification legal_obligation

Automated recognition of physical, physiological and behavioural human features for establishing individual identity by comparing to reference database data.
  • biometric data: Biometric identification applies to biometric data for establishing individual identity.

biometric identification ai_system

AI systems used for identifying individuals through biometric data processing.
  • Article 10 of Directive (EU) 2016/680: Biometric identification systems must comply with Article 10 requirements for biometric data processing in law enforcement.
  • law enforcement: Biometric identification systems are used by law enforcement authorities for identification purposes.

biometric identification technical_requirement

The automated recognition of physical, physiological, behavioural, or psychological human features for establishing the identity of a natural person by comparing biometric data to stored biometric data.
  • biometric data: Biometric data is used in biometric identification processes.

Biometric identification systems ai_system

High-risk AI systems used for identifying natural persons through biometric data, requiring enhanced human oversight with verification by at least two natural persons.
  • Human oversight measures: Biometric identification systems require enhanced human oversight with verification by at least two natural persons.

biometric verification ai_system

AI systems intended for authentication to confirm a specific natural person's identity for service access, device unlocking or security access.
  • biometric data: Biometric verification AI systems use biometric data for authentication purposes.

biometric verification technical_requirement

The automated, one-to-one verification including authentication of the identity of natural persons by comparing their biometric data to previously provided biometric data.
  • biometric data: Biometric data is used in biometric verification processes.

biometric verification system ai_system

An AI system intended for biometric verification and authentication to confirm that a specific natural person is who they claim to be or to provide access to services or devices.
  • Regulation: Biometric verification systems are excluded from certain rules of the Regulation due to their minor impact on fundamental rights.

Biometric verification systems ai_system

AI systems intended for biometric verification and authentication to confirm identity for service access, device unlocking, or secure premises access, explicitly excluded from high-risk classification.
  • ANNEX III: Biometric verification systems are explicitly excluded from the high-risk classification in ANNEX III.

Board institution

Governing body composed of representatives of Member States established to oversee the application of the AI Regulation, provide advice to the Commission and Member States, and coordinate with national competent authorities and the AI Office.
  • Commission: The Commission shall consult the Board when preparing standardisation requests and communicate information on fines to it.
  • 2017/746: The regulation establishes a Board composed of Member State representatives.
  • Regulation: The Board is responsible for advisory tasks, coordinates market surveillance authorities, and oversees the application and implementation of the Regulation.
  • Member States: The Board is composed of representatives of Member States.
  • Commission: The Board provides advice and recommendations to the Commission on matters related to AI implementation.
  • standing subgroup for market surveillance: The Board establishes standing sub-groups including one for market surveillance.
  • standing subgroup for notified bodies: The Board establishes a standing sub-group for notified bodies.
  • advisory forum: The advisory forum provides advice and technical expertise to the Board.
  • Commission: The Commission shall consult the Board when preparing standardisation requests and communicate information on fines to it.
  • Codes of practice: The Board works with the AI Office to ensure codes of practice cover specified obligations.
  • codes of practice: The Board works with the AI Office to ensure codes of practice contain clear objectives and commitments.
  • codes of practice: The Board regularly monitors and evaluates the achievement of objectives of codes of practice.
  • National competent authorities: National competent authorities coordinate their activities and cooperate within the framework of the Board.
  • National competent authorities: National competent authorities must submit annual reports to the Board on AI regulatory sandbox progress.
  • Member States: Member States' designated representatives adopt the Board's rules of procedure by two-thirds majority.
  • AI Office: The AI Office must inform the Board of monitoring measures and alerts, and consult it before conducting evaluations.
  • Commission: The Board advises and assists the Commission in facilitating consistent and effective application of the Regulation.
  • Member States: The Board advises and assists Member States in applying the Regulation.
  • market surveillance authorities: The Board establishes a standing sub-group providing a platform for cooperation among market surveillance authorities.
  • notifying authorities: The Board establishes a standing sub-group providing a platform for cooperation among notifying authorities.
  • Regulation (EU) 2019/1020: The Board's market surveillance sub-group acts as the ADCO within the meaning of Article 30 of Regulation (EU) 2019/1020.
  • Article 66: Article 66 defines the tasks of the Board to advise and assist the Commission and Member States.
  • advisory forum: The advisory forum is required to provide technical expertise and advice to the Board.
  • scientific panel: The Commission consults with the Board in determining the number of experts on the scientific panel.
  • Commission: The Commission must consult the Board regarding the specifications of the EU database and communicate information on imposed fines to the Board as appropriate.
  • Commission: The Commission must consult the Board regarding the specifications of the EU database and communicate information on imposed fines to the Board as appropriate.
  • AI Office: The AI Office must inform the Board of monitoring measures and alerts, and consult it before conducting evaluations.
  • AI Office: The AI Office must inform the Board of monitoring measures and alerts, and consult it before conducting evaluations.
  • European Commission: The Commission requests information from the Board without undue delay.

Board legislative_body

Regulatory board authorized to access exit reports and participate in regulatory oversight.
  • Exit report: Board is authorized to access and consider exit reports in regulatory oversight.

border control authorities market_actor

Authorities responsible for controlling borders and conducting identity checks.
  • information systems: Information systems are used by border control authorities for identity identification.

CE marking technical_requirement

A conformity marking required on high-risk AI systems to indicate compliance with regulatory requirements, which can be applied visibly, digitally, or on packaging depending on the system's nature.
  • Regulation: The Regulation requires high-risk AI systems to bear CE marking to indicate conformity.
  • high-risk AI systems: High-risk AI systems that comply with Regulation requirements must bear the CE marking.
  • Chapter III, Section 2: CE marking indicates conformity with requirements set out in Chapter III, Section 2.
  • importer: Importers must ensure the high-risk AI system bears the required CE marking.
  • distributor: Distributors must verify that high-risk AI systems bear the required CE marking.
  • Article 30: CE marking is subject to the general principles set out in Article 30 of Regulation (EC) No 765/2008.
  • high-risk AI system: High-risk AI systems require CE marking to be affixed visibly, legibly and indelibly on the system or its packaging/documentation to indicate compliance.
  • Article 83: Article 83 references CE marking requirements and violations thereof.

CE marking documentation

A conformity marking that providers must affix to high-risk AI systems or their packaging to indicate compliance with the Regulation.
  • high-risk AI systems: Providers must affix CE marking to high-risk AI systems.
  • high-risk AI system: High-risk AI systems require CE marking to be affixed visibly, legibly and indelibly on the system or its packaging/documentation to indicate compliance.
  • Article 48: Article 48 governs the requirements for affixing CE marking.

centres of excellence institution

Specialized institutions that support research, development, and testing of AI systems.

certificate documentation

Document issued by a notified body certifying compliance of an AI system.
  • notified body: Certificates are issued by notified bodies to verify AI system compliance.

certificate issuance and management legal_obligation

The process by which notified bodies issue certificates for high-risk AI systems and the requirement to suspend or withdraw unduly issued certificates.

certificate suspension or restriction legal_obligation

A regulatory action where certificates may be suspended or restricted, with specific conditions for their continued validity.
  • notifying authority: The notifying authority confirms risks and outlines timelines for remedying suspensions or restrictions.
  • national competent authorities: National competent authorities must receive information about suspended or withdrawn certificates and take appropriate measures.
  • notified body: Notified bodies must monitor and remain responsible for certificates during suspension or restriction periods.

Chapter 2 of Title V TEU legal_article

A section of the Treaty on European Union covering the common Union defence policy and Member States' defence policies.
  • Regulation: The Regulation considers the specificities of Member States' and Union defence policy covered by Chapter 2 of Title V TEU.

CHAPTER II legal_article

Chapter of legislation titled 'PROHIBITED AI PRACTICES' that contains regulations on banned AI system practices.
  • Article 5: CHAPTER II contains Article 5 on prohibited AI practices.

Chapter II of Regulation (EU) 2022/2065 legal_article

Specific chapter addressing liability of intermediary service providers.

CHAPTER III regulation

Chapter of legislation addressing high-risk AI systems and their classification.
  • Article 6: Article 6 is contained within Chapter III on high-risk AI systems.

Chapter III legal_article

A chapter containing requirements and obligations that are not affected by transparency obligations regarding AI-generated or manipulated content.
  • transparency obligations: Transparency obligations shall not affect the requirements and obligations set out in Chapter III.

Chapter III, Section 2 legal_article

Section of the Regulation containing technical requirements and standards that high-risk AI systems must comply with throughout their lifetime, including accuracy, robustness, and non-discrimination metrics.
  • conformity assessment: Conformity assessment process demonstrates compliance with requirements set out in Chapter III, Section 2.
  • CE marking: CE marking indicates conformity with requirements set out in Chapter III, Section 2.
  • substantial modification: Substantial modifications affect compliance with Chapter III, Section 2 requirements.
  • personal data: Chapter III, Section 2 establishes requirements that personal data processing must comply with in sandbox contexts.
  • anonymised data: Anonymised data is referenced as an alternative to personal data for fulfilling Chapter III, Section 2 requirements.
  • synthetic data: Synthetic data is referenced as an alternative to personal data for fulfilling Chapter III, Section 2 requirements.
  • Regulation 2024/1689: Regulation 2024/1689 contains Chapter III, Section 2 which sets out essential requirements and technical specifications for AI systems.
  • high-risk AI systems: High-risk AI systems must maintain continuous compliance with requirements set out in Chapter III, Section 2.
  • high-risk AI system: High-risk AI systems must comply with requirements set out in Chapter III, Section 2.
  • AI system: AI systems must comply with requirements established in Chapter III, Section 2.
  • Article 95: Article 95 establishes codes of conduct for voluntary application of requirements from Chapter III, Section 2.
  • codes of conduct: Codes of conduct are based on requirements set out in Chapter III, Section 2.
  • Regulation (EC) No 300/2008: Regulation (EC) No 300/2008 must take into account requirements from Chapter III, Section 2 of Regulation (EU) 2024/1689.
  • Regulation (EU) 2024/1689: Regulation (EU) 2024/1689 contains Chapter III, Section 2 with specific requirements for AI safety components.
  • Article 17(5): Article 17(5) requires that the requirements set out in Chapter III, Section 2 of Regulation (EU) 2024/1689 be taken into account.
  • Article 22(5): Article 22(5) requires that requirements from Chapter III, Section 2 of Regulation (EU) 2024/1689 be taken into account.
  • Commission: The Commission shall take into account the requirements set out in Chapter III, Section 2 when carrying out its activities.
  • Article 17: Article 17 references the requirements set out in Chapter III, Section 2 of Regulation (EU) 2024/1689.
  • Article 19: Article 19 references the requirements set out in Chapter III, Section 2 of Regulation (EU) 2024/1689.
  • Article 43: Article 43 requires that Chapter III, Section 2 requirements be taken into account when adopting implementing acts.
  • Article 47: Article 47 requires that Chapter III, Section 2 requirements be taken into account when adopting delegated acts.
  • Article 57: Article 57 requires that Chapter III, Section 2 requirements be taken into account when adopting implementing acts.
  • Article 58: Article 58 requires that Chapter III, Section 2 requirements be taken into account when adopting delegated acts.
  • Artificial Intelligence systems which are safety components: Chapter III, Section 2 establishes requirements that govern AI systems classified as safety components.
  • Article 11: Article 11 of Regulation (EU) 2019/2144 references the requirements set out in Chapter III, Section 2 of Regulation (EU) 2024/1689.
  • artificial intelligence systems: Artificial intelligence systems that are safety components must comply with requirements set out in Chapter III, Section 2 of Regulation (EU) 2024/1689.
  • voluntary codes of conduct: Voluntary codes of conduct foster application of requirements set out in Chapter III, Section 2.
  • AI system: AI systems must comply with requirements established in Chapter III, Section 2.
  • validation and testing procedures: Validation and testing procedures must comply with requirements set out in Chapter III, Section 2.
  • performance metrics: Performance metrics are subject to requirements established in Chapter III, Section 2.
  • harmonised standards: When harmonised standards are not applied, solutions must meet requirements in Chapter III, Section 2.
  • conformity assessment procedure based on internal control: The internal control procedure requires assessment of AI system compliance with essential requirements in Chapter III, Section 2.
  • Union technical documentation assessment certificate: The certificate is issued when the AI system meets the requirements in Chapter III, Section 2.

Chapter III, Section 2 of Regulation (EU) 2024/1689 legal_article

Section of the Artificial Intelligence Act containing requirements that must be taken into account for AI systems used as safety components.
  • Article 5 of Directive (EU) 2016/797: The amended Article 5 requires that requirements set out in Chapter III, Section 2 of Regulation (EU) 2024/1689 shall be taken into account.

CHAPTER IV regulation

Chapter of legislation governing transparency obligations for providers and deployers of certain AI systems.
  • Article 50: Article 50 is contained within Chapter IV on transparency obligations.

Chapter V legal_article

Chapter containing obligations that apply to high-risk AI systems and general-purpose AI models.
  • common specifications: Common specifications cover obligations in Sections 2 and 3 of Chapter V.
  • Article 88: Article 88 establishes enforcement mechanisms for obligations contained in Chapter V.
  • The Commission: The Commission has exclusive powers to supervise and enforce Chapter V.

Chapter V regulation

Regulatory chapter governing general-purpose AI models and their classification rules.
  • Article 51: Article 51 is contained within Chapter V governing general-purpose AI models.

Chapter V, Sections 2 and 3 legal_article

Sections of the Regulation containing obligations applicable to standardisation requests.

Chapter VI of Regulation (EU) 2019/1020 legal_article

Section of Regulation (EU) 2019/1020 establishing procedures for mutual assistance in cross-border cases and information access requests.

Charter treaty

The Charter of Fundamental Rights of the European Union referenced in Article 6 TEU that protects fundamental rights including human dignity, privacy, personal data protection, freedom of expression, and non-discrimination relevant to AI regulation.
  • Article 6 TEU: Article 6 TEU references the Charter as a legal basis.
  • Union legal framework on AI: Rules in the Union legal framework should be consistent with the Charter.
  • guidelines for trustworthy AI: The guidelines contribute to AI design in line with the Charter and Union values.
  • high risk AI system: Classification of AI systems as high risk is based on the extent of adverse impact on fundamental rights protected by the Charter.
  • mandatory requirements: Mandatory requirements are based on applicable requirements resulting from the Charter.
  • transparency obligation: Compliance with transparency obligations should not impede rights guaranteed in the Charter.

Charter regulation

Fundamental rights document establishing the right to a high level of environmental protection and other fundamental rights.

Charter fundamental rights legal_obligation

Fundamental rights guaranteed in the Charter that must be protected when AI systems are used by law enforcement.

Charter of Fundamental Rights of the European Union treaty

A foundational EU legal document that enshrines fundamental rights including human dignity, privacy, personal data protection, freedom of expression, and non-discrimination that the AI Regulation aims to protect.
  • REGULATION (EU) 2024/1689: The regulation references the Charter to ensure protection of fundamental rights including democracy and rule of law.
  • AI Regulation: The AI Regulation is applied in accordance with the values enshrined in the Charter.

civil protection authorities institution

Authorities that may deploy or put into service specific high-risk AI systems without prior authorization in duly justified situations of urgent public security threat.
  • high-risk AI systems: Civil protection authorities may deploy high-risk AI systems without authorization in duly justified situations.
  • high-risk AI systems: Civil protection authorities may put specific high-risk AI systems into service without prior authorization in urgent public security situations.

civilian or law enforcement purposes legal_obligation

Non-excluded purposes that fall within the scope of the Regulation and require provider compliance.

codes of conduct documentation

Voluntary guidelines and standards developed by the Commission and Member States in cooperation with stakeholders to translate ethical principles into practical AI development and deployment practices for responsible and trustworthy AI.
  • Commission: The Commission facilitates the drawing up of voluntary codes of conduct to advance AI literacy.
  • high-risk AI systems: Mandatory requirements applicable to high-risk AI systems are referenced as models for voluntary codes of conduct for non-high-risk systems.
  • AI regulatory sandboxes: AI regulatory sandboxes facilitate the voluntary application of codes of conduct referenced in Article 95.
  • Article 95: Article 95 references voluntary codes of conduct applicable to AI providers.
  • the Board: The Board issues opinions on the development and application of codes of conduct.

codes of conduct regulation

Voluntary frameworks for the application of specific requirements to AI systems, developed by providers, deployers, or their representative organizations.
  • AI Office: The AI Office facilitates the drawing up of codes of conduct for AI systems.
  • Member States: Member States work with the AI Office to facilitate the development of codes of conduct.
  • Chapter III, Section 2: Codes of conduct are based on requirements set out in Chapter III, Section 2.
  • high-risk AI systems: Codes of conduct apply to AI systems, including high-risk AI systems.
  • Union ethical guidelines for trustworthy AI: Codes of conduct incorporate applicable elements from Union ethical guidelines for trustworthy AI.
  • environmental sustainability: Codes of conduct require assessing and minimizing the impact of AI systems on environmental sustainability.
  • AI literacy: Codes of conduct require promoting AI literacy among persons involved in AI development and use.
  • vulnerable persons or groups: Codes of conduct require assessing and preventing negative impacts of AI systems on vulnerable persons or groups.

codes of practice documentation

Voluntary compliance frameworks developed with stakeholder input that providers of general-purpose AI models may adhere to in order to demonstrate conformity with systemic risk obligations and regulatory requirements.
  • AI Office: The AI Office encourages, facilitates, and regularly monitors the development and implementation of codes of practice at Union level.
  • general-purpose AI models: Codes of practice cover obligations for providers of general-purpose AI models.
  • AI models: Providers of general-purpose AI models can demonstrate compliance using codes of practice as alternative adequate means.
  • Commission: The Commission may adopt implementing acts to approve codes of practice and give them general validity within the Union.
  • detection and labelling of artificially generated or manipulated content: Codes of practice facilitate effective implementation of obligations regarding detection and labelling of artificially generated or manipulated content.
  • Article 56: Codes of practice are defined and referenced in Article 56 of the regulation.
  • Article 55: Codes of practice can be used to demonstrate compliance with Article 55 obligations until harmonised standards are published.
  • Board: The Board works with the AI Office to ensure codes of practice contain clear objectives and commitments.
  • Articles 53 and 55: Codes of practice must cover the obligations provided for in Articles 53 and 55 of the Regulation.
  • Regulation: Codes of practice contribute to the proper application of the Regulation.
  • key performance indicators: Codes of practice contain key performance indicators to measure achievement of objectives.
  • Board: The Board regularly monitors and evaluates the achievement of objectives of codes of practice.
  • AI Office: The AI Office encourages, facilitates, and regularly monitors the development and implementation of codes of practice at Union level.
  • Article 53: Codes of practice include obligations provided for in Article 53.
  • Article 55: Codes of practice address obligations provided for in Article 55.
  • the Board: The Board issues opinions on the development and application of codes of practice.

Codes of practice regulation

Union-level guidelines developed to contribute to proper application of AI regulations and address systemic risks.
  • Article 56: Article 56 establishes the framework for codes of practice at Union level.
  • AI Office: The AI Office encourages, facilitates, and regularly monitors the development and implementation of codes of practice at Union level.
  • Board: The Board works with the AI Office to ensure codes of practice cover specified obligations.
  • Article 53: Codes of practice must cover obligations provided in Article 53.
  • Article 55: Codes of practice must cover obligations provided in Article 55.
  • Systemic risks at Union level: Codes of practice address identification, assessment and management of systemic risks at Union level.
  • General-purpose AI models: Providers of general-purpose AI models are invited to participate in drawing-up codes of practice.

codes of practice at Union level documentation

Voluntary codes of practice to facilitate effective implementation of obligations regarding detection and labelling of artificially generated or manipulated content.
  • Commission: The Commission may encourage and facilitate the drawing up of codes of practice at Union level.

Commission legislative_body

The European Commission responsible for adopting delegated and implementing acts, designating systemic risk models, approving codes of practice, maintaining the EU database, establishing the AI Office, and overseeing compliance with the Regulation.
  • European Artificial Intelligence Board: The Board supports the Commission in promoting AI literacy tools and public awareness.
  • codes of conduct: The Commission facilitates the drawing up of voluntary codes of conduct to advance AI literacy.
  • AI HLEG: The AI HLEG was appointed by the Commission.
  • AI HLEG: The Commission appointed the independent AI HLEG.
  • market surveillance authority: Market surveillance authority notifies the Commission of evaluation results, required actions, and non-compliance not restricted to national territory.
  • national data protection authority: National data protection authorities must submit annual reports to the Commission on the use of real-time biometric identification systems.
  • Member State: Member States must notify the Commission of their national rules and common specifications within the required timeframe.
  • delegated acts: The Commission is empowered to adopt delegated acts to amend the list of high-risk AI systems.
  • high-risk AI system: Commission should provide guidelines specifying practical implementation of conditions for high-risk and non-high-risk AI systems.
  • this Regulation: The Commission adopts standardisation activities and guidance related to the Regulation.
  • Regulation 2024/1689: The Commission is empowered to amend annexes to the Regulation through delegated acts.
  • Delegated acts: The Commission is empowered to amend annexes through delegated acts.
  • general-purpose AI model with systemic risks: The Commission has authority to take individual decisions designating general-purpose AI models as having systemic risk.
  • general-purpose AI model with systemic risk: The Commission is empowered to designate general-purpose AI models as having systemic risk.
  • Codes of practice: The Commission may adopt implementing acts to approve codes of practice and give them general validity within the Union.
  • EU database: The Commission serves as the controller of the EU database and provides technical and administrative support, with access to restricted sections for sensitive areas.
  • Regulation (EU) 2018/1725: The Commission's role as data controller complies with Regulation (EU) 2018/1725.
  • testing and experimentation facilities: The Commission establishes testing and experimentation facilities at Union level.
  • quality management system: The Commission should develop guidelines specifying elements of the quality management system.
  • advisory forum: The advisory forum provides advice and technical expertise to the Commission.
  • general-purpose AI model: The Commission may request access to general-purpose AI models through APIs and technical means, and may make binding commitments from providers to implement mitigation measures.
  • scientific panel: The scientific panel can request the Commission to require documentation or information from providers.
  • Regulation: The Commission has authority to request compliance measures under the Regulation.
  • Court of Justice of the European Union: The Court of Justice reviews all Commission decisions under the Regulation and has unlimited jurisdiction to review and modify decisions fixing fines.
  • European Artificial Intelligence Board: The Commission must consult the European Artificial Intelligence Board before providing guidelines.
  • Article 96: Commission guidelines must be provided in line with Article 96.
  • notified body: The Commission reviews notifications and decides on authorization of notified bodies.
  • Article 35: Article 35 requires the Commission to assign identification numbers and maintain public lists of notified bodies.
  • notifying authority: Notifying authorities must notify the Commission of conformity assessment bodies and provide relevant information upon request.
  • Annex V: The Commission is empowered to adopt delegated acts to amend Annex V based on technical progress.
  • Article 97: Article 97 empowers the Commission to adopt delegated acts.
  • codes of practice: The Commission may adopt implementing acts to approve codes of practice and give them general validity within the Union.
  • Article 56 (6): The Commission adopts implementing acts in accordance with the procedure laid down in Article 56 (6).
  • general-purpose AI model with systemic risk: The Commission can classify general-purpose AI models as having systemic risk based on qualified alerts or ex officio decisions.
  • scientific panel: The scientific panel can issue qualified alerts to the Commission regarding AI model capabilities.
  • Article 97: Commission adoption of delegated acts is subject to the procedure established in Article 97.
  • Regulation 2024/1689: The regulation establishes competences and powers for the Commission to exercise oversight.
  • Annex XI: The Commission is empowered to adopt delegated acts to detail measurement and calculation methodologies for Annex XI.
  • Annex XII: The Commission is empowered to amend Annex XII in light of evolving technological developments.
  • Article 97: Article 97 governs the procedure for the Commission to adopt delegated acts.
  • Article 78: Confidentiality obligations set out in Article 78 apply to information obtained in Commission assessments.
  • implementing acts: The Commission adopts implementing acts to provide common rules when codes of practice are inadequate and to specify detailed arrangements for AI regulatory sandboxes.
  • Exit report: Commission is authorized to access and take into account exit reports in exercising regulatory tasks.
  • AI Office: The Commission establishes and develops the AI Office for Union expertise in AI.
  • Board: The Board advises and assists the Commission in facilitating consistent and effective application of the Regulation.
  • scientific panel of independent experts: The Commission establishes the scientific panel through implementing acts.
  • scientific panel: The Commission selects experts and determines the composition of the scientific panel.
  • Member States: Member States must immediately inform the Commission of findings regarding AI system risks and may request guideline updates.
  • Board: The Commission must consult the Board regarding the specifications of the EU database and communicate information on imposed fines to the Board as appropriate.
  • Board: The Commission must consult the Board regarding the specifications of the EU database and communicate information on imposed fines to the Board as appropriate.
  • post-market monitoring plan: The Commission shall adopt an implementing act laying down detailed provisions establishing a template for the post-market monitoring plan.
  • Article 98(2): The implementing act shall be adopted in accordance with the examination procedure referred to in Article 98(2).
  • serious incident: The Commission shall develop guidance to facilitate compliance with serious incident notification obligations.
  • confidential information exchange: Commission is subject to confidential information exchange obligations with national competent authorities.
  • market surveillance authority: Market surveillance authority notifies the Commission of evaluation results, required actions, and non-compliance not restricted to national territory.
  • market surveillance authority: The Commission evaluates national measures and enters into consultation with market surveillance authorities.
  • Member States: Member States must immediately inform the Commission of findings regarding AI system risks and may request guideline updates.
  • Member States: Commission shall evaluate national measures taken by Member States.
  • Article 84: Article 84 requires the Commission to designate Union AI testing support structures.
  • AI Office: The Commission exercises powers through the AI Office for assessing systemic risks.
  • Article 91: Article 91 establishes the power of the Commission to request documentation and information from providers.
  • provider of the general-purpose AI model: The Commission may request providers to provide documentation and additional information for compliance assessment.
  • documentation: The Commission may request documentation drawn up by providers in accordance with Articles 53 and 55.
  • general-purpose AI model: The Commission may request access to general-purpose AI models through APIs and technical means, and may make binding commitments from providers to implement mitigation measures.
  • Article 98(2): The Commission must adopt implementing acts in accordance with the examination procedure in Article 98(2).
  • Article 53: The Commission may request providers to take measures to comply with obligations in Article 53.
  • Article 54: The Commission may request providers to take measures to comply with obligations in Article 54.
  • independent experts: The Commission's implementing acts set out detailed arrangements for involving independent experts in evaluations.
  • AI Regulation: The Commission is required to develop guidelines on the practical implementation of the AI Regulation.
  • Articles 40 and 41: The Commission shall take account of harmonised standards and common specifications referred to in Articles 40 and 41.
  • Annex I: The Commission shall provide detailed information on the relationship with Union harmonisation legislation listed in Annex I.
  • AI Office: The AI Office can request the Commission to update guidelines.
  • European Data Protection Supervisor: The European Data Protection Supervisor notifies the Commission annually of imposed administrative fines.
  • examination procedure: Commission implementing acts must be adopted in accordance with the examination procedure referred to in Article 98(2).
  • AI Office: The Commission evaluates the functioning, powers, competences, and resources of the AI Office.
  • European Parliament: The Commission submits reports to the European Parliament on AI Office evaluation and standardisation progress.
  • Council: The Commission submits reports to the Council on AI Office evaluation and standardisation progress.

Commission institution

The European Commission, the executive body responsible for developing benchmarks and measurement methodologies, issuing standardisation requests, establishing common specifications, managing the EU database for high-risk AI systems, receiving notifications and reports on serious incidents, and exercising implementing and delegated powers under the Regulation.
  • Technical robustness: The Commission should ensure development of benchmarks and measurement methodologies for AI systems including technical robustness.
  • voluntary model contractual terms: The Commission could develop and recommend voluntary model contractual terms between providers and third parties.
  • general-purpose AI models: Providers of general-purpose AI models must report serious incidents to the Commission.
  • serious incident: Serious incidents must be reported to the Commission without undue delay.
  • harmonised standards: The Commission issues standardisation requests for harmonised standards development.
  • common specifications: The Commission establishes common specifications for AI systems via implementing acts as a fallback solution.
  • advisory forum: The Commission consults the advisory forum referred to in Article 67 for relevant expertise when drafting common specifications.
  • Board: The Commission shall consult the Board when preparing standardisation requests and communicate information on fines to it.
  • harmonised standard: The Commission must consider delays in harmonised standard adoption before establishing common specifications.
  • Article R23 of Annex I to Decision No 768/2008/EC: The Commission develops and manages the electronic notification tool referenced in this legal article.
  • mutual recognition agreements: The Commission is tasked with pursuing the conclusion of mutual recognition agreements with third countries.
  • Regulation: The Commission should explore international instruments in line with the Regulation's requirements.
  • EU database: The Commission establishes and manages the EU database for high-risk AI system registration.
  • EU database: The Commission develops functional specifications and acts as data controller for the EU database.
  • codes of practice at Union level: The Commission may encourage and facilitate the drawing up of codes of practice at Union level.
  • this Regulation: The Commission must provide standardised templates and information platforms for compliance, and evaluate and review the regulation by 2 August 2029 and every four years thereafter.
  • the Board: The Board can request that the Commission provide standardised templates for areas covered by the regulation.
  • Board: The Board provides advice and recommendations to the Commission on matters related to AI implementation.
  • standing subgroup for market surveillance: The Commission supports the activities of the standing subgroup for market surveillance through market evaluations and studies.
  • European Parliament: The Commission must submit findings and reports to the European Parliament on regulation evaluation and amendments.
  • Council: Commission must report to the Council on regulation evaluation and amendments.
  • Regulation (EU) No 182/2011: The Commission's implementing powers are exercised in accordance with Regulation (EU) No 182/2011.
  • This Regulation: This Regulation confers implementing powers on the Commission to establish uniform conditions for its implementation.
  • Member States: Member States have control mechanisms over the Commission's exercise of implementing powers under Regulation (EU) No 182/2011.
  • Member State: Member States must notify the Commission of their national rules and common specifications within the required timeframe.
  • Member States: Member States must immediately inform the Commission of findings regarding AI system risks and may request guideline updates.
  • National market surveillance authorities: National market surveillance authorities must submit annual reports to the Commission on remote biometric identification system use.
  • national data protection authorities: National data protection authorities must submit annual reports to the Commission on remote biometric identification system use.
  • annual reports: The Commission publishes annual reports on the use of real-time remote biometric identification systems based on aggregated Member State data.
  • Article 97: The Commission is empowered to adopt delegated acts in accordance with Article 97.
  • simplified technical documentation form: The Commission establishes the simplified technical documentation form for SMEs.
  • Article 97: The Commission is empowered to adopt delegated acts in accordance with Article 97.
  • Benchmarks and measurement methodologies: The Commission shall encourage the development of benchmarks and measurement methodologies for high-risk AI systems.
  • notifying authority: Notifying authorities must notify the Commission of conformity assessment bodies and provide relevant information upon request.
  • Article 37: Article 37 establishes the Commission's authority to investigate notified body competence.
  • notified body: The Commission investigates and evaluates the competence of notified bodies.
  • Article 78: The Commission must treat sensitive information confidentially in accordance with Article 78.
  • notified body: The Commission oversees notified bodies and ensures they meet requirements for notification.
  • Article 98(2): The Commission's implementing acts regarding suspension or withdrawal must follow the examination procedure in Article 98(2).
  • notifying authority: The Commission requires notifying authorities to provide relevant information and ensure notified bodies participate in coordination groups.
  • European standardisation organisations: The Commission issues standardisation requests to European standardisation organisations.
  • Board: The Commission shall consult the Board when preparing standardisation requests and communicate information on fines to it.
  • committee: The Commission shall inform the committee before preparing draft implementing acts.
  • implementing acts: The Commission adopts implementing acts to establish common specifications.
  • common specifications: The Commission establishes common specifications for AI systems via implementing acts as a fallback solution.
  • authorization: The Commission evaluates whether authorizations are justified and comply with Union law.
  • high-risk AI system: The Commission decides whether authorizations for high-risk AI systems are justified.
  • notification requirement: The notification requirement requires providers to inform the Commission of systemic risk conditions.
  • general-purpose AI model: The Commission has authority to designate general-purpose AI models as models with systemic risk.
  • general-purpose AI model with systemic risk: The Commission designates general-purpose AI models as presenting systemic risks based on criteria in Annex XIII.
  • scientific panel: The scientific panel issues qualified alerts to the Commission regarding systemic risks in AI models.
  • Annex XIII: The Commission is empowered to amend Annex XIII by specifying and updating systemic risk criteria.
  • provider: Providers may request reassessment of systemic risk designations from the Commission.
  • Article 97(2): Commission's power to adopt delegated acts is based on Article 97(2).
  • Annex XI: Commission is empowered to amend Annex XI in light of technological developments.
  • Annex XII: Commission is empowered to amend Annex XII in light of technological developments.
  • AI regulatory sandboxes: The Commission develops dedicated interfaces and coordinates with national competent authorities on AI regulatory sandboxes.
  • Regulation 2024/1689: The Commission is responsible for promoting AI literacy and implementing aspects of the regulation.
  • advisory forum: The advisory forum is required to provide technical expertise and advice to the Commission.
  • advisory forum: The Commission consults the advisory forum referred to in Article 67 for relevant expertise when drafting common specifications.
  • Member States: The Commission requires Member States to communicate the identity of competent authorities.
  • EU database for high-risk AI systems: The Commission shall set up and maintain the EU database.
  • National competent authorities: National competent authorities must immediately notify the Commission of serious incidents.
  • Market surveillance authorities: Market surveillance authorities must report annually to the Commission on market surveillance activities.
  • market surveillance authority: Market surveillance authority notifies the Commission of evaluation results, required actions, and non-compliance not restricted to national territory.
  • market surveillance authorities: Market surveillance authorities and the Commission can propose joint activities and investigations.
  • Confidentiality: The confidentiality obligation applies to the Commission when carrying out its tasks under the Regulation.
  • market surveillance authority: Market surveillance authority notifies the Commission of evaluation results, required actions, and non-compliance not restricted to national territory.
  • AI system: The Commission evaluates AI systems for compliance with Union law and decides on appropriate measures.
  • market surveillance authority: The Commission enters into consultation with market surveillance authorities regarding national measures.
  • Regulation (EU) No 1025/2012: The Commission applies the procedure provided in Regulation (EU) No 1025/2012 when addressing shortcomings in harmonised standards.
  • Delegation of Power: The Commission's power to adopt delegated acts is subject to the conditions laid down in Article 97.
  • Member States: Member States can request the Commission to update guidelines.
  • AI Office: The AI Office can request the Commission to update guidelines.
  • Union harmonisation law: The Commission updates guidelines pursuant to Union harmonisation law.
  • delegated act: Commission adopts delegated acts pursuant to authorized articles.
  • Interinstitutional Agreement of 13 April 2016 on Better Law-Making: Commission consults experts designated by Member States in accordance with principles in the agreement.
  • European Parliament: European Parliament can initiate extension of periods in Commission procedures.
  • Council: Council can initiate extension of periods in Commission procedures.
  • Member State: Member States must notify the Commission of their national rules and common specifications within the required timeframe.
  • general-purpose AI models: The Commission's authority to impose fines applies to providers of general-purpose AI models.
  • Article 101: The Commission enforces the provisions established in Article 101.
  • Chapter III, Section 2: The Commission shall take into account the requirements set out in Chapter III, Section 2 when carrying out its activities.
  • Annex III: Commission shall assess the need for amendment of Annex III annually.
  • Article 5: Commission shall assess the list of prohibited AI practices in Article 5.
  • Council of 25 November 2020: Commission shall submit findings and reports to the Council.
  • general-purpose AI models with systemic risk: The Commission determines whether a general-purpose AI model has systemic risk capabilities.

Commission Decision (24.1.2024) legislative_procedure

A Commission Decision of 24 January 2024 establishing the European Artificial Intelligence Office (C(2024) 390).
  • AI Office: The Commission Decision established the AI Office.

Commission Decision 2010/227/EU regulation

A Commission decision that was repealed by Regulation (EU) 2017/746.

Commission Decision 2010/227/EU directive

A Commission decision that was repealed by Regulation (EU) 2017/746.

Commission Decision 2010/261/EU directive

A commission decision that was repealed by Regulation (EU) 2018/1862.

Commission Decision of 24 January 2024 regulation

A Commission decision that established the AI Office and its responsibilities for AI governance and supervision.
  • AI Office: The AI Office was established by the Commission Decision of 24 January 2024.

Commission guidelines documentation

Guidelines issued by the Commission to support evaluation of AI system classifications.

Commission notice 'The Blue Guide' on the implementation of EU product rules 2022 documentation

Official guidance clarifying the application of the New Legislative Framework to products subject to Union harmonisation legislation.

Commission Recommendation 2003/361/EC directive

A prior Commission recommendation defining the classification of small and medium-sized enterprises.
  • SMEs: The Commission Recommendation defines the classification of small and medium-sized enterprises.

Commission Regulation (EU) No 1230/2012 regulation

A Commission regulation that was repealed by subsequent legislative action.

Commission Work Programme 2021 documentation

Document referenced regarding platform work-related contractual relationships and employee involvement.

committee institution

A committee referred to in Article 22 of Regulation (EU) No 1025/2012 that must be informed by the Commission.
  • Commission: The Commission shall inform the committee before preparing draft implementing acts.

Committee of the Regions institution

Institution that issued an opinion on the AI Regulation.

common specification technical_requirement

A set of technical specifications providing means to comply with certain requirements established under this Regulation.

common specifications technical_requirement

Technical specifications established by the Commission as an exceptional fallback solution when harmonised standards are unavailable, insufficient, or delayed, allowing high-risk AI systems and general-purpose AI models to demonstrate presumed conformity.
  • Commission: The Commission establishes common specifications for AI systems via implementing acts as a fallback solution.
  • provider's obligation to comply: Common specifications facilitate provider compliance with regulatory requirements.
  • Section 2: Common specifications cover requirements set out in Section 2.
  • Chapter V: Common specifications cover obligations in Sections 2 and 3 of Chapter V.
  • high-risk AI systems: High-risk AI systems must conform to common specifications to be presumed compliant.
  • general-purpose AI models: General-purpose AI models must conform to common specifications to be presumed compliant.
  • Section 2 of this Chapter: Common specifications are based on requirements set out in Section 2.
  • Commission: The Commission establishes common specifications for AI systems via implementing acts as a fallback solution.
  • Article 41: Article 41 references common specifications that confer presumption of conformity for AI systems.

competent authorities market_actor

Authorities responsible for law enforcement that may use real-time remote biometric identification systems under the regulatory framework.

competent authorities institution

National and Union authorities responsible for overseeing compliance of high-risk AI systems, conducting market surveillance, enforcing regulations, and taking action to mitigate associated risks.
  • provider: Providers must closely cooperate with competent authorities established under the Regulation.
  • authorised representative: The authorised representative must provide information and documentation to competent authorities upon reasoned request.
  • importer: Importers must cooperate with competent authorities regarding high-risk AI systems placed on the market.
  • distributor: Distributors must inform competent authorities when high-risk AI systems present risks as defined in Article 79.
  • deployers: Deployers must cooperate with competent authorities in implementing the Regulation.
  • authorised representative: The authorised representative can be addressed by competent authorities on compliance issues.
  • AI regulatory sandbox: Competent authorities establish and operate AI regulatory sandboxes.
  • sandbox plan: Competent authorities agree sandbox plans with AI providers specifying conditions for testing.
  • Regulation 2024/1689: Competent authorities are responsible for implementing and enforcing the regulation.
  • provider: Provider must cooperate with competent authorities during investigations.

competent authorities legislative_body

National regulatory bodies responsible for overseeing AI regulatory sandboxes, supervising system development and testing, and ensuring market surveillance and compliance.
  • real-world testing plan: Prospective providers must submit real-world testing plans to competent market surveillance authorities.

competent authority institution

An authority responsible for overseeing and enforcing compliance with high-risk AI system requirements and requesting information to demonstrate conformity.
  • Providers of high-risk AI systems: Competent authorities may request information and documentation from providers to demonstrate conformity.
  • high-risk AI system: Competent authorities oversee and request access to high-risk AI systems and their logs.
  • Article 78: Information obtained by competent authorities must be treated in accordance with confidentiality obligations in Article 78.
  • distributor: Distributors must inform competent authorities when a high-risk AI system presents a risk and provide information upon request.

competent judicial authorities institution

Courts or judicial bodies authorized to grant or deny requests for authorization of remote biometric identification system use.

Competent personnel requirement technical_requirement

The requirement that notifying authorities maintain adequate competent personnel with expertise in information technologies, AI, law, and fundamental rights supervision.
  • Notifying authorities: Notifying authorities are required to have adequate competent personnel with expertise in information technologies, AI, law, and fundamental rights.

competent public authorities institution

National or Union public authorities competent in migration, asylum and border control management.

competition law regulation

Union law governing fair competition practices.
  • Union law: Union law includes competition law as a component.

complaint handling and redress procedures legal_obligation

Mechanisms that deployers should establish to address complaints and provide remedies for harms caused by high-risk AI systems, serving as risk mitigation measures.
  • deployer: Deployers should establish complaint handling and redress procedures as risk mitigation measures.

Complaint lodging right legal_obligation

Right granted to downstream providers to lodge complaints alleging infringement of the Regulation by general-purpose AI model providers.

complaint mechanisms legal_obligation

Internal governance arrangements required to handle complaints related to high-risk AI system deployment.

compliance assessment legal_obligation

An obligation to assess whether providers comply with requirements under the Regulation.

compliance with all requirements and obligations legal_obligation

Obligation for AI systems to comply with all requirements and obligations laid down in the Regulation.
  • AI systems presenting a risk: AI systems presenting a risk must be evaluated for compliance with all requirements and obligations in the Regulation.

Compliance with Regulation legal_obligation

The requirement that parties involved must comply with the Regulation and respect confidentiality of information and data obtained in carrying out their tasks.

compliance with requirements and obligations legal_obligation

Obligation for AI systems to meet requirements and obligations laid down in the Regulation.
  • AI system: AI systems must comply with requirements and obligations laid down in the Regulation.

computation used for training evaluation_criterion

Criterion measured in floating point operations or through variables such as training cost, time, or energy consumption.

computational resources technical_requirement

Resources used to train the model, including floating point operations, training time, and energy consumption.
  • technical documentation: Technical documentation must document computational resources used to train the model.

confidential information exchange legal_obligation

Obligation to exchange information on confidential basis between national competent authorities and Commission with restrictions on disclosure.
  • national competent authorities: National competent authorities are subject to confidential information exchange obligations.
  • Commission: Commission is subject to confidential information exchange obligations with national competent authorities.

Confidentiality legal_obligation

The obligation to respect confidentiality of information and data obtained in carrying out tasks and activities under the Regulation, protecting intellectual property, business information, security interests, and other sensitive matters.
  • Article 78: Article 78 establishes the confidentiality obligation for entities involved in applying the Regulation.
  • Commission: The confidentiality obligation applies to the Commission when carrying out its tasks under the Regulation.
  • market surveillance authorities: The confidentiality obligation applies to market surveillance authorities in their regulatory activities.
  • notified bodies: The confidentiality obligation applies to notified bodies involved in the application of the Regulation.
  • intellectual property rights: The confidentiality obligation protects intellectual property rights and trade secrets.

Confidentiality obligation legal_obligation

Requirement that notified bodies and their personnel maintain confidentiality of information obtained during conformity assessment activities.
  • Notified bodies: Notified bodies must maintain confidentiality of information obtained during conformity assessment.
  • Article 78: The confidentiality obligation references Article 78 for specific requirements.

confidentiality obligations legal_obligation

Obligations requiring that information and documentation obtained by competent authorities, including trade secrets, be treated confidentially.
  • Article 78: Confidentiality obligations are set out in Article 78.

Confidentiality of information and data legal_obligation

Requirement for all parties involved in application of the Regulation to respect confidentiality in accordance with Union or national law.

confidentiality rules legal_obligation

Rules requiring members of national competent authorities to maintain confidentiality in handling information under the regulation.

Confidentiality safeguarding legal_obligation

The requirement for notifying authorities to protect the confidentiality of information obtained during their activities.

conformity evaluation_criterion

Compliance status of high-risk AI systems with requirements set out in the Regulation.

conformity assessment legal_obligation

A mandatory evaluation process to verify that high-risk AI systems comply with regulatory requirements before being placed on the market, which must be repeated when substantial modifications occur.
  • high-risk AI systems: High-risk AI systems must undergo conformity assessment prior to market placement or service deployment.
  • Regulation: The Regulation requires high-risk AI systems to undergo conformity assessment before market placement or putting into service.
  • substantial modification: When substantial modification occurs, a new conformity assessment must be conducted.
  • Chapter III, Section 2: Conformity assessment process demonstrates compliance with requirements set out in Chapter III, Section 2.
  • notified body: Notified bodies are required to conduct conformity assessment activities for high-risk AI systems.

conformity assessment technical_requirement

A mandatory evaluation procedure that high-risk AI systems must undergo prior to market placement to demonstrate compliance with regulatory requirements.
  • high-risk AI systems: High-risk AI systems must undergo conformity assessment prior to market placement or service deployment.
  • notified bodies: Conformity assessment procedures for high-risk AI systems involve notified bodies as third-party assessors.
  • provider: Providers must fulfill conformity assessment requirements for high-risk AI systems.
  • Article 62: Article 62 references Article 43 regarding conformity assessment fees for SMEs.

conformity assessment evaluation_criterion

The process through which providers demonstrate compliance with regulatory requirements for high-risk AI systems at initial deployment and throughout their lifecycle.
  • Notified bodies: Notified bodies conduct conformity assessments of AI systems.
  • high-risk AI system: High-risk AI systems undergo conformity assessment by providers at the moment of initial deployment.

Conformity assessment bodies institution

Bodies that perform conformity assessment activities for AI systems and must apply for notification to notifying authorities.
  • Article 31: Conformity assessment bodies must fulfill the requirements laid down in Article 31.
  • Notifying authorities: Conformity assessment bodies must submit applications for notification to notifying authorities.

conformity assessment body institution

An organization that performs third-party conformity assessment activities including testing, certification, and inspection, and must meet specified requirements for notification and designation.
  • notified body: A notified body is a type of conformity assessment body that has been formally notified.
  • notifying authority: Notifying authorities are responsible for assessment, designation, monitoring, and verification of compliance of conformity assessment bodies.
  • Article 31: Article 31 establishes requirements that conformity assessment bodies must satisfy.
  • accreditation certificate: An accreditation certificate attests that a conformity assessment body fulfils Article 31 requirements.
  • Article 31: Conformity assessment bodies must comply with requirements laid down in Article 31.
  • Union harmonisation legislation: Conformity assessment bodies may be designated under other Union harmonisation legislation.

conformity assessment obligations legal_obligation

Mandatory requirements that AI providers must fulfill to demonstrate compliance with regulatory standards.
  • AI regulatory sandboxes: AI regulatory sandboxes facilitate providers in complying with conformity assessment obligations under the Regulation.
  • Regulation: The Regulation establishes conformity assessment obligations for AI providers.

conformity assessment procedure technical_requirement

Mandatory procedure involving third-party conformity assessment bodies to verify that high-risk AI systems comply with applicable Union harmonisation legislation before placement on the market.
  • high-risk AI systems: High-risk AI systems are subject to conformity assessment procedures, which may be derogated under exceptional circumstances.
  • This Regulation: This Regulation requires products to undergo conformity assessment procedures with third-party bodies.
  • Union harmonisation legislation: Union harmonisation legislation establishes conformity assessment procedures for various product categories.
  • substantial modification: Substantial modifications to AI systems require a new conformity assessment procedure.
  • European Commission: Commission has power to amend conformity assessment procedures.
  • Article 43: Article 43 specifies the conformity assessment procedure that must be followed.
  • authorised representative: The authorised representative must verify that an appropriate conformity assessment procedure has been carried out.
  • high-risk AI system: High-risk AI systems are subject to conformity assessment procedures as specified in the regulation.
  • Annex VII: The conformity assessment procedure is documented and specified in Annex VII.
  • notified body: Notified bodies conduct conformity assessments for high-risk AI systems.
  • Annex VI: Annex VI describes the conformity assessment procedure based on internal control.

conformity assessment procedure legislative_procedure

A procedure for evaluating and demonstrating compliance with essential cybersecurity and AI system requirements.

conformity assessment procedure legal_obligation

A mandatory procedure that providers of high-risk AI systems must accomplish to demonstrate compliance with applicable regulatory requirements.
  • high-risk AI system: Providers must accomplish the required conformity assessment procedure for high-risk AI systems.
  • High-risk AI systems: High-risk AI systems are subject to conformity assessment procedures, which may be derogated under exceptional circumstances.
  • Article 31: Conformity assessment procedures are based on requirements laid down in Article 31.

conformity assessment procedure based on internal control evaluation_criterion

Procedure for assessing AI system compliance through provider's internal verification and quality management.
  • Article 17: The internal control procedure requires verification of compliance with Article 17 quality management requirements.
  • Chapter III, Section 2: The internal control procedure requires assessment of AI system compliance with essential requirements in Chapter III, Section 2.
  • Article 72: The internal control procedure requires verification that post-market monitoring is consistent with technical documentation.

conformity assessment procedures technical_requirement

Procedures that notified bodies must follow to verify compliance of high-risk AI systems with regulatory requirements.
  • notified bodies: Notified bodies must verify conformity in accordance with conformity assessment procedures.

conformity assessment procedures legal_obligation

Procedures conducted by notified bodies to assess compliance of high-risk AI systems with regulatory requirements.
  • notified body: Notified bodies are required to conduct conformity assessment procedures for high-risk AI systems.

conformity assessment provisions legal_obligation

Requirements for assessing conformity with regulations, including internal control procedures.

conformity assessment system technical_requirement

System for evaluating compliance with regulatory requirements, operational before August 2026.

conformity with requirements legal_obligation

The obligation for high-risk AI systems to comply with the requirements set out in Section 2.
  • distributor: Distributors must ensure or take corrective actions to achieve conformity with Section 2 requirements.

consumer protection legal_obligation

An existing Union law area that this Regulation is complementary to and does not prejudice.
  • Regulation (EU) 2019/1020: The regulation is complementary to and without prejudice to existing Union law on consumer protection.

consumer protection law regulation

Union law protecting consumer rights.
  • Union law: Union law includes consumer protection law as a component.

Content origin detection technical_requirement

Technical requirement to detect whether output has been generated or manipulated by an AI system rather than by a human.
  • Regulation 2024/1689: The regulation requires detection capabilities to identify AI-generated or manipulated content.
  • Watermarks: Watermarks are cited as one appropriate technique for detecting AI-generated content.
  • Metadata identifications: Metadata identifications are cited as one appropriate technique for detecting AI-generated content.
  • Cryptographic methods: Cryptographic methods are cited as techniques for proving provenance and authenticity.

Convention implementing the Schengen Agreement treaty

A convention that was amended by Regulation (EU) 2018/1861.

corrective action technical_requirement

Required action to be taken by the provider following a serious incident.
  • provider: Provider must perform corrective action following a serious incident.

corrective action legal_obligation

Actions required to be taken by operators to address non-compliance and ensure AI systems no longer present identified risks.
  • market surveillance authority: Market surveillance authority requires operators to take corrective action.
  • provider: Provider must ensure corrective action is taken on all concerned AI systems.

corrective actions legal_obligation

Actions required to bring a non-compliant high-risk AI system into conformity or to withdraw or recall it.

corrective actions technical_requirement

Actions required to bring non-compliant AI systems into compliance with regulatory requirements.
  • market surveillance authority: Market surveillance authority requires operators to take corrective actions to bring AI systems into compliance.

Council legislative_body

The Council of the European Union, a co-legislator that enacts directives and regulations alongside the European Parliament, receives reports on AI regulation evaluation, and can revoke or oppose delegations of power to the Commission.
  • Directive (EU) 2019/1937: Directive was enacted by the Council.
  • Commission: Commission must report to the Council on regulation evaluation and amendments.
  • Delegation of Power: The Council can oppose the extension or revoke the delegation of power to the Commission.
  • delegated act: Council can object to delegated acts within three months of notification.
  • Commission: Council can initiate extension of periods in Commission procedures.
  • Commission: The Commission submits reports to the Council on AI Office evaluation and standardisation progress.
  • European Commission: The Commission submits reports and evaluations to the Council.
  • European Commission: The Commission shall report on enforcement assessment to the Council.
  • Regulation 2024/1689: Regulation 2024/1689 was enacted by the Council.

Council Decision 2004/512/EC regulation

Council decision referenced in the legislative framework.

Council Decision 2007/533/JHA directive

A council decision that was amended and repealed by Regulation (EU) 2018/1862.

Council Decision 2008/633/JHA regulation

Council decision referenced in the legislative framework.

Council Decision 93/465/EEC directive

A prior decision that was repealed by Decision No 768/2008/EC.

Council Decision 93/465/EEC regulation

A previous regulation on a common framework for the marketing of products, repealed by Regulation (EU) 2019/1020.

Council Directive 2001/55/EC directive

An EU directive referenced in Regulation (EU) 2024/1358 for purposes of applying Eurodac.

Council Directive 85/374/EEC directive

EU directive of 25 July 1985 on the approximation of laws concerning liability for defective products across Member States, with rights and remedies remaining unaffected by the AI regulation.
  • Regulation (EU) 2019/1020: Rights and remedies provided by Council Directive 85/374/EEC remain unaffected and fully applicable.

Council Directive 87/102/EEC directive

Previous directive on credit agreements for consumers that was repealed by Directive 2008/48/EC.

Council Directive 87/357/EEC directive

A council directive that was repealed by Regulation (EU) 2023/988.

Council Directive 89/686/EEC directive

EU directive on personal protective equipment that was repealed by Regulation (EU) 2016/425.

Council Directive 90/385/EEC directive

EU directive on medical devices that was repealed by Regulation (EU) 2017/745.

Council Directive 93/42/EEC directive

EU directive on medical devices that was repealed by Regulation (EU) 2017/745.

Council Directive 96/98/EC directive

Earlier European directive on marine equipment that was repealed by Directive 2014/90/EU.

Council Framework Decision 2002/584/JHA legislative_procedure

Framework decision listing 32 criminal offences that form the basis for the annex to the current regulation.
  • criminal offences list: The annex listing criminal offences is based on the 32 offences in the Council Framework Decision.

Council Framework Decision 2002/584/JHA directive

Framework decision of 13 June 2002 on the European arrest warrant and surrender procedures between Member States.

Council Framework Decision 2008/977/JHA regulation

A framework decision that was repealed by Directive (EU) 2016/680.

Council of 25 November 2020 legislative_body

Council that enacted directive on representative actions for consumer protection.

Council of the European Union legislative_body

One of the two co-legislators of the European Union that enacted Regulation (EU) 2024/1689 alongside the European Parliament.

Council Regulation (EEC) No 3922/91 regulation

EU regulation repealed by Regulation 2024/1689.

Council Regulation (EU) No 1024/2013 regulation

Council regulation of 15 October 2013 establishing the Single Supervisory Mechanism and conferring prudential supervision tasks on the European Central Bank.
  • European Central Bank: Council Regulation (EU) No 1024/2013 establishes the Single Supervisory Mechanism and confers specific tasks on the European Central Bank.

Court of Justice of the European Union institution

EU judicial institution with unlimited jurisdiction to review Commission decisions under the Regulation, including the authority to cancel, reduce, or increase imposed fines.
  • Commission: The Court of Justice reviews all Commission decisions under the Regulation and has unlimited jurisdiction to review and modify decisions fixing fines.
  • Article 261 TFEU: Article 261 TFEU grants the Court of Justice unlimited jurisdiction regarding penalties.
  • market surveillance authorities: Market surveillance activities do not apply to the Court of Justice when acting in its judicial capacity.

criminal investigation and prosecution evaluation_criterion

Legitimate objective for using real-time biometric identification systems to locate or identify suspects of serious criminal offences.

Criminal offence identification legal_obligation

Permitted use case for remote biometric identification systems for localization or identification of perpetrators or suspects of listed criminal offences.
  • Regulation 2024/1689: The regulation permits remote biometric identification for localization or identification of perpetrators or suspects of listed criminal offences.

criminal offences legal_article

Specific criminal offences for which competent authorities may be authorized to use real-time remote biometric identification systems.

criminal offences list documentation

Annex to the regulation containing criminal offences for which real-time remote biometric identification may be used.

criminal risk assessment ai_system

AI systems assessing the risk of a natural person offending or reoffending in criminal matters.
  • Regulation: Criminal risk assessment systems are subject to prohibitions under the Regulation.

critical digital infrastructure institution

Digital systems that are part of critical infrastructure and whose failure or malfunctioning may lead to risks to health and safety of persons and property.

critical infrastructure data_category

Infrastructure defined in Directive (EU) 2022/2557 whose disruption or destruction could result in imminent threat to life or physical safety.
  • Directive (EU) 2022/2557: Critical infrastructure is defined in Article 2, point (4) of Directive (EU) 2022/2557.
  • EU database: High-risk AI systems in critical infrastructure are only registered at national level, not in the EU database.
  • Directive (EU) 2022/2557: Critical infrastructure is defined by reference to Directive (EU) 2022/2557.

critical infrastructure institution

Systems including supply of water, gas, heating and electricity whose failure may put at risk the life and health of persons at large scale and lead to appreciable disruptions in social and economic activities.
  • safety components: Safety components are used to directly protect the physical integrity of critical infrastructure.

Critical infrastructure AI systems ai_system

High-risk AI systems intended as safety components in management and operation of critical digital infrastructure, road traffic, or utility supply.
  • ANNEX III: Critical infrastructure AI systems are classified as high-risk AI systems in ANNEX III.

Critical infrastructure management AI systems ai_system

AI systems intended as safety components in the management and operation of critical digital infrastructure, road traffic, and utility supply systems.
  • high-risk classification: AI systems used as safety components in critical infrastructure management are classified as high-risk.
  • Directive (EU) 2022/2557: Critical infrastructure AI systems reference the list of critical digital infrastructure in Directive (EU) 2022/2557.

Cryptographic methods technical_requirement

Cryptographic techniques for proving provenance and authenticity of content generated by AI systems.

cybersecurity evaluation_criterion

Performance metric that high-risk AI systems should meet in accordance with their intended purpose and state of the art.
  • High-risk AI systems: High-risk AI systems are required to meet an appropriate level of cybersecurity.
  • AI regulatory sandbox: AI systems in the sandbox are assessed on cybersecurity as a relevant dimension.

cybersecurity technical_requirement

Technical requirement ensuring high-risk AI systems are resilient against unauthorized alterations and attacks.
  • high-risk AI systems: High-risk AI systems must be resilient against unauthorized alterations through cybersecurity measures.
  • national competent authorities: National competent authorities must take appropriate measures to ensure an adequate level of cybersecurity.

Cybersecurity and personal data protection biometric systems ai_system

Biometric systems used solely for enabling cybersecurity measures and personal data protection.

cybersecurity certificate documentation

A certificate issued under a cybersecurity scheme that can serve as presumption of conformity with cybersecurity requirements.
  • Regulation (EU) 2019/881: Regulation (EU) 2019/881 establishes cybersecurity schemes under which certificates are issued.

cybersecurity components technical_requirement

Components intended to be used solely for cybersecurity purposes that should not qualify as safety components.
  • safety components: Components intended solely for cybersecurity purposes should not qualify as safety components.

cybersecurity measures technical_requirement

Adequate and effective security measures that must be implemented and documented to protect the security and confidentiality of information and data obtained.

cybersecurity of AI systems technical_requirement

Essential cybersecurity requirements applicable to AI systems under regulatory frameworks.

cybersecurity protection technical_requirement

Security measures that providers of general-purpose AI models with systemic risks must ensure at an adequate level throughout the model lifecycle and physical infrastructure.
  • provider: Providers must ensure adequate cybersecurity protection for general-purpose AI models with systemic risks.
  • general-purpose AI models: Providers must ensure adequate cybersecurity protection for models and their physical infrastructure.

cybersecurity protection legal_obligation

Obligation to ensure adequate cybersecurity protection for general-purpose AI models with systemic risk and their physical infrastructure.
  • Article 55: Article 55 requires providers to ensure adequate cybersecurity protection for models and infrastructure.

cybersecurity requirement legal_obligation

Requirement for high-risk AI systems to meet cybersecurity standards as specified in the regulation.

cybersecurity requirements technical_requirement

Technical and security standards that products with digital elements, including AI systems, must meet to protect against cyberattacks, malicious exploitation, and unauthorized alterations.

cybersecurity requirements legal_obligation

Security requirements that AI systems must meet, covered by cybersecurity certificates or statements of conformity.
  • high-risk AI systems: High-risk AI systems must comply with cybersecurity requirements set out in Article 15.

Cybersecurity resilience technical_requirement

Requirement for high-risk AI systems to be resilient against unauthorized attempts to alter their use, outputs, or performance through exploitation of vulnerabilities.
  • High-risk AI systems: High-risk AI systems shall be resilient against unauthorized attempts to alter their use, outputs, or performance.

cybersecurity risks technical_requirement

Security considerations that the Commission must account for when performing its tasks as data controller.
  • EU database: The Commission must consider cybersecurity risks when managing the EU database.

data access infrastructure technical_requirement

Technical systems and frameworks that enable access to data for AI development purposes across borders.
  • European Commission: The Commission may develop initiatives to facilitate the lowering of technical barriers and improve data access infrastructure for AI development.

data deletion obligation legal_obligation

Requirement to discard and delete all data related to rejected authorization use within immediate effect.

data governance legal_obligation

Requirements for high-risk AI system providers to manage and verify the integrity of data sets used in training, validation, and testing in accordance with the Regulation.
  • high-risk AI systems: High-risk AI systems must comply with data governance requirements.
  • high-risk AI system: High-risk AI systems must comply with data governance requirements set out in the Regulation.

data governance and management practices technical_requirement

Practices required for managing training, validation, and testing data sets in AI systems, including transparency about data collection purposes and appropriate handling of personal data.

data governance requirement legal_obligation

Regulatory requirement for AI systems to comply with data governance measures set out in the regulation.
  • high-risk AI systems: High-risk AI systems should comply with data governance requirements when using relevant geographical and contextual data.

data management technical_requirement

Systems and procedures covering data acquisition, collection, analysis, labelling, storage, filtration, mining, aggregation, and retention for high-risk AI systems.
  • high-risk AI system: High-risk AI systems require systems and procedures for comprehensive data management before placing on the market or putting into service.

data minimisation technical_requirement

A principle from Union data protection law requiring that only necessary personal data be processed in AI systems.
  • Regulation: The Regulation requires data minimisation principles when processing personal data in AI systems.
  • Union data protection law: Data minimisation principle is set out in Union data protection law.

data poisoning technical_requirement

An AI-specific vulnerability where unauthorized parties attempt to compromise training data to alter system behavior, which providers must implement measures to prevent, detect, and respond to.
  • high-risk AI system: Data poisoning represents an AI-specific vulnerability that threatens high-risk AI systems.
  • high-risk AI systems: Technical solutions for high-risk AI systems must include measures to prevent and detect data poisoning attacks.

data protection legal_obligation

An existing Union law area that this Regulation is complementary to and does not prejudice.
  • Regulation (EU) 2019/1020: The regulation is complementary to and without prejudice to existing Union law on data protection.

data protection by design and by default technical_requirement

A principle from Union data protection law requiring privacy protection measures to be integrated throughout the AI system lifecycle from development through operation.
  • Regulation: The Regulation requires data protection by design and by default principles throughout the AI system lifecycle.
  • Union data protection law: Data protection by design and by default principles are set out in Union data protection law.

data protection impact assessment documentation

An assessment required to evaluate the data protection implications of high-risk AI systems.
  • Article 49(3): Article 49(3) requires deployers to submit summaries of data protection impact assessments.

data protection law regulation

Union law governing the protection of personal data.
  • Union law: Union law includes data protection law as a component.

data protection obligation legal_obligation

Requirement to protect security and confidentiality of information and data, and to delete data when no longer needed.
  • high-risk AI systems: High-risk AI systems are subject to data protection and confidentiality obligations.

data quality criteria evaluation_criterion

Standards that training, validation and testing data sets must meet, including relevance, representativeness, accuracy and completeness.

data set characteristics technical_requirement

Specific features and elements of data sets that must account for geographical, contextual, behavioral, or functional settings relevant to high-risk AI system deployment.
  • high-risk AI system: High-risk AI systems require data sets with specific characteristics tailored to their intended geographical, contextual, and functional settings.

data sets data_category

Collections of data used for training, validation, and testing of AI systems that must be high-quality, representative, free of errors and biases, and possess appropriate statistical properties.
  • high-risk AI system: High-risk AI systems require high-quality data sets for training, validation, and testing.
  • bias mitigation: Data sets used in high-risk AI systems are subject to bias mitigation requirements.
  • privacy-preserving techniques: Data sets should comply with privacy-preserving techniques during AI system development and testing.

data sheets documentation

A widely adopted documentation practice for AI systems that facilitates information sharing along the AI value chain.
  • AI value chain: Data sheets are encouraged as a documentation practice to accelerate information sharing along the AI value chain.

data transfer safeguards legal_obligation

Requirement that data collected during testing may only be transferred to third countries with appropriate safeguards under Union law.

data used to train the AI system data_category

Training data for an AI system that must meet quality requirements for conformity assessment.
  • AI system: The quality of training data is a requirement for AI system conformity assessment.

database documentation

A registration system for real-time remote biometric identification systems as set out in the Regulation.

Decision No 1247/2002/EC regulation

Previous EU decision repealed by later data protection regulations, specifically Regulation (EU) 2018/1725.

Decision No 768/2008/EC directive

EU decision enacted on 9 July 2008 by the European Parliament and Council establishing the New Legislative Framework for product harmonization and marketing, repealing Council Decision 93/465/EEC.

decision-making pattern detection technical_requirement

A technical capability of AI systems intended to detect patterns or deviations in prior decision-making without replacing human assessment.

declaration of interests documentation

A document that each expert on the scientific panel must prepare and make publicly available.
  • scientific panel: Each expert on the scientific panel shall draw up a declaration of interests, which shall be made publicly available.

deep fake data_category

AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and falsely appears to be authentic or truthful.

deep fake system ai_system

An AI system that generates or manipulates image, audio, or video content to create synthetic or altered media that appears authentic.

deep fakes data_category

Image, audio or video content that appreciably resembles existing persons, objects, places, entities or events and falsely appears authentic or truthful.
  • transparency obligation: Deep fakes are subject to transparency obligations requiring disclosure of artificial creation.

delegated act legal_obligation

Acts adopted by the Commission pursuant to delegation of power, subject to objection by Parliament and Council.
  • European Parliament: European Parliament can object to delegated acts within three months of notification.
  • Council: Council can object to delegated acts within three months of notification.
  • Commission: Commission adopts delegated acts pursuant to authorized articles.
  • Official Journal of the European Union: Decisions of revocation are published in the Official Journal to take effect.
  • Article 52(4): Article authorizes delegated acts subject to objection procedure.
  • Article 53(5) or (6): Article authorizes delegated acts subject to objection procedure.

delegated acts legislative_procedure

Acts adopted by the Commission to amend the list of high-risk AI systems and annexes to the Regulation in light of rapid technological development.
  • Commission: The Commission is empowered to adopt delegated acts to amend the list of high-risk AI systems.
  • high-risk AI systems: Delegated acts amend the list of high-risk AI systems to account for technological development.

delegated acts regulation

Legal acts adopted by the Commission to amend Annex III regarding high-risk AI systems.
  • Article 7: Article 7 establishes the Commission's power to adopt delegated acts.
  • Annex III: Delegated acts adopted by the Commission amend Annex III by adding or modifying high-risk AI system use-cases.
  • The Commission: The Commission is empowered to adopt delegated acts to amend Annex III.
  • high-risk AI systems: Delegated acts apply to high-risk AI systems by amending their classification and use-cases.

Delegation of Power legal_obligation

The power conferred on the Commission to adopt delegated acts for a period of five years from 1 August 2024, subject to conditions and extension procedures.
  • Commission: The Commission's power to adopt delegated acts is subject to the conditions laid down in Article 97.
  • Article 97: Article 97 governs the exercise of delegation of power to the Commission.
  • European Parliament: The European Parliament can oppose the extension or revoke the delegation of power to the Commission.
  • Council: The Council can oppose the extension or revoke the delegation of power to the Commission.

Denmark market_actor

A Member State with specific exemptions from certain provisions of Regulation 2024/1689 as outlined in Protocol No 22.
  • Regulation 2024/1689: Denmark is not bound by specific provisions of Regulation 2024/1689 as outlined in Protocol No 22.

deployer market_actor

Any natural or legal person, including public authorities, using an AI system under its authority in a professional context and responsible for compliance with applicable obligations, impact assessments, risk mitigation, and incident reporting.
  • AI system: Deployers use AI systems under their authority and may be affected by the system's outputs.
  • instructions for use: Instructions for use guide deployers in understanding risks and using high-risk AI systems appropriately.
  • Regulation: The Regulation applies to deployers who may assume provider obligations under certain conditions.
  • provider: A deployer can be considered a provider of high-risk AI systems under Article 25 circumstances, particularly when modifying AI systems.
  • impact assessment: Deployers are required to conduct impact assessments for high-risk AI systems before deployment.
  • market surveillance authority: Deployers must notify market surveillance authorities of fundamental rights impact assessment results, risks, and serious incidents without undue delay.
  • human oversight: Deployers should implement human oversight arrangements as a governance measure to mitigate risks to fundamental rights.
  • complaint handling and redress procedures: Deployers should establish complaint handling and redress procedures as risk mitigation measures.
  • instructions for use: Deployers should take into account information from instructions for use when performing impact assessments and implementing human oversight.
  • Article 3: Article 3 defines the role and responsibilities of a deployer.
  • risk management measures: Risk management measures require provision of information and training to deployers.
  • high-risk AI system: Deployers must monitor the operation of high-risk AI systems and verify their registration before use.
  • input data: Deployers that exercise control over input data must ensure it is relevant and sufficiently representative for the intended purpose.
  • provider: A deployer can be considered a provider of high-risk AI systems under Article 25 circumstances, particularly when modifying AI systems.
  • human oversight measures: Deployers must implement and document human oversight measures according to provider instructions for use.
  • Article 73: Article 73 applies mutatis mutandis when a deployer cannot reach the provider.
  • Article 13: Deployers must use information provided under Article 13 for compliance purposes.
  • post-remote biometric identification: Deployers of post-remote biometric identification systems must obtain prior authorization from judicial or administrative authorities.
  • Article 46(1): Deployers may be exempt from notification obligations under conditions specified in Article 46(1).
  • EU database: Deployers who are or act on behalf of public authorities must enter data listed in Section C of Annex VIII into the EU database.
  • serious incident: Deployers may become aware of serious incidents and trigger reporting obligations.
  • serious incident: Deployer must report serious incidents when applicable within specified timeframes.

Deployer obligations legal_obligation

Obligations imposed on deployers pursuant to Article 26, subject to administrative fines for non-compliance.
  • Article 26: Deployer obligations are established in Article 26.
  • SMEs: SMEs are subject to reduced administrative fines for non-compliance with deployer obligations.

Deployer or prospective deployer market_actor

An entity responsible for implementing AI systems and ensuring compliance with testing provisions through qualified personnel.

deployers market_actor

Entities responsible for deploying and using high-risk AI systems, ensuring compliance with information, fundamental rights protection, monitoring, and human oversight obligations.
  • Regulation 2024/1689: The regulation sets specific responsibilities and obligations that deployers must fulfill when using high-risk AI systems.
  • monitoring of AI system performance: Deployers are required to monitor the functioning of high-risk AI systems and maintain records as appropriate.
  • AI literacy training: Deployers must ensure that persons implementing instructions for use have adequate AI literacy, training, and authority.
  • information requirement: Deployers are required to provide information to natural persons about high-risk AI systems and their rights.
  • detection and disclosure of artificially generated outputs: Deployers are placed with obligations to enable detection and disclosure of artificially generated outputs.
  • this Regulation: The regulation establishes obligations and requirements that apply to AI system deployers.
  • Article 2: Article 2 applies to deployers of AI systems within the Union.
  • Providers of high-risk AI systems: Providers must inform deployers of corrective actions and collaborate on risk investigations.
  • logs: Deployers are required to keep logs automatically generated by high-risk AI systems for at least six months.
  • Article 49: Public authority and Union institution deployers must comply with registration obligations in Article 49.
  • workers' representatives: Deployers who are employers must inform workers' representatives before putting high-risk AI systems into service.
  • fundamental rights impact assessment: Deployers are required to perform fundamental rights impact assessments for high-risk AI systems prior to deployment.
  • natural persons: Deployers must inform natural persons that they are subject to the use of high-risk AI systems.
  • competent authorities: Deployers must cooperate with competent authorities in implementing the Regulation.
  • post-market monitoring system: Deployers may provide relevant data to providers for the post-market monitoring system.

deployers of high-risk AI systems market_actor

Public authorities and private entities, including banking and insurance organizations, responsible for deploying high-risk AI systems and registering in the EU database before deployment.
  • fundamental rights impact assessment: Deployers must carry out fundamental rights impact assessments prior to deployment.
  • EU database: Public authority deployers of high-risk AI systems must register themselves in the EU database before using listed systems.
  • human oversight: Deployers are required to assign human oversight to competent natural persons.
  • instructions for use: Deployers must use high-risk AI systems in accordance with accompanying instructions for use.

Design specifications technical_requirement

Documentation of the AI system's general logic, algorithms, design methodologies, key design choices, and rationale for system design.

designation suspension, restriction, or withdrawal legal_obligation

Enforcement actions that a notifying authority may take against a notified body that fails to meet requirements or fulfill obligations.
  • certificate issuance and management: When a notified body's designation is suspended, restricted, or withdrawn, the authority must assess and manage the certificates it issued.

Detailed description of AI system elements and development process technical_requirement

A required component of technical documentation detailing development methods, design specifications, algorithms, and design choices.
  • Technical documentation: Technical documentation must contain a detailed description of AI system elements and development process.
  • Design specifications: The detailed description includes design specifications of the system.

detection and disclosure of artificially generated outputs legal_obligation

Obligation placed on providers and deployers to enable detection and disclosure that AI system outputs are artificially generated or manipulated.
  • providers: Providers are placed with obligations to enable detection and disclosure of artificially generated outputs.
  • deployers: Deployers are placed with obligations to enable detection and disclosure of artificially generated outputs.
  • Regulation (EU) 2022/2065: These obligations facilitate effective implementation of Regulation (EU) 2022/2065.
  • very large online platforms: Very large online platforms have obligations to identify and mitigate systemic risks from artificially generated content.
  • very large online search engines: Very large online search engines have obligations to identify and mitigate systemic risks from artificially generated content.

detection and labelling of artificially generated or manipulated content technical_requirement

Technical requirements for identifying and marking content that has been artificially generated or manipulated by AI systems.
  • codes of practice: Codes of practice facilitate effective implementation of obligations regarding detection and labelling of artificially generated or manipulated content.

diagnostics systems ai_system

Sophisticated AI systems in the health sector that support human decisions and must be reliable and accurate due to high stakes for life and health.

Digital Europe Programme legislative_procedure

A Union funding programme that should contribute to achieving the objectives of the AI regulation.
  • this Regulation: The Digital Europe Programme should contribute to achieving the objectives of the regulation.

Digital Europe Programme directive

A Union programme that establishes AI testing and experimentation facilities to support AI innovation and compliance.
  • 2017/746: The regulation references AI testing and experimentation facilities under the Digital Europe Programme.

Digital Services Act regulation

A regulation enacted on 19 October 2022 establishing a single market for digital services and amending Directive 2000/31/EC.
  • Directive 2000/31/EC: The Digital Services Act amends Directive 2000/31/EC.
  • biometric categorisation: The regulation defines the notion of biometric categorisation and its scope of application.
  • ancillary feature: The regulation defines the concept of ancillary features and their exemption from applicability rules.

Directive (EU) 2016/2102 directive

EU directive establishing accessibility requirements for websites and mobile applications of public sector bodies, referenced for high-risk AI systems.

Directive (EU) 2016/680 directive

EU directive on the protection of personal data by competent authorities for law enforcement purposes, establishing requirements for biometric data processing and data protection standards in law enforcement contexts applicable to high-risk AI systems.

Directive (EU) 2016/797 directive

European directive on the interoperability of the rail system within the European Union, enacted on 11 May 2016, amended by the Artificial Intelligence Act to include requirements for AI systems used as safety components.
  • high-risk AI systems: High-risk AI systems that are safety components fall within the scope of this directive.
  • European Parliament and of the Council: Directive (EU) 2016/797 was enacted by the European Parliament and of the Council.
  • Regulation (EU) 2024/1689: Regulation (EU) 2024/1689 amends Directive (EU) 2016/797 among other regulations and directives.
  • Article 106: Article 106 amends Directive (EU) 2016/797 by adding a new paragraph to Article 5.

Directive (EU) 2016/943 directive

European Union directive on the protection of undisclosed know-how and business information (trade secrets) against unlawful acquisition, use and disclosure, enacted on 8 June 2016.
  • Article 78: Article 78 references Directive (EU) 2016/943 regarding exceptions to confidentiality for intellectual property protection.
  • European Parliament and Council: Directive was enacted by the European Parliament and Council on 8 June 2016.

Directive (EU) 2016/97 directive

EU directive of 20 January 2016 on insurance distribution, establishing competent authorities and governance requirements for financial institutions.
  • quality management system: Directive (EU) 2016/97 applies quality management system requirements to insurance intermediaries.

Directive (EU) 2019/1937 directive

EU directive enacted on 23 October 2019 concerning the reporting of infringements of Union law, including AI regulations, and the protection of persons reporting such infringements.
  • Regulation (EU) 2024/1689: Regulation (EU) 2024/1689 references Directive (EU) 2019/1937 for whistleblower protection.
  • European Parliament: Directive was enacted by the European Parliament.
  • Council: Directive was enacted by the Council.
  • Article 87: Article 87 references Directive (EU) 2019/1937 for reporting infringements and protection of reporting persons.

Directive (EU) 2019/790 directive

EU directive on copyright and related rights in the Digital Single Market enacted on 17 April 2019, which introduces exceptions and limitations for text and data mining and governs rightholder reservations of rights.

Directive (EU) 2019/882 directive

EU directive establishing accessibility requirements for products and services that high-risk AI systems must comply with.

Directive (EU) 2020/1828 directive

A directive on representative actions for consumer protection enacted on 25 November 2020, amended by the Data Act and the Artificial Intelligence Act.

Directive (EU) 2022/2557 directive

EU directive of 14 December 2022 defining critical infrastructure and establishing resilience requirements for entities whose disruption could threaten life or physical safety.

Directive 2000/31/EC directive

A directive on electronic commerce that was amended by Regulation (EU) 2022/2065 (Digital Services Act).

Directive 2000/9/EC directive

EU directive on cableway installations that was repealed by Regulation (EU) 2016/424.

Directive 2001/83/EC directive

EU directive amended by Regulation (EU) 2017/745 on medical devices.

Directive 2001/95/EC directive

A directive of the European Parliament and Council that was repealed by Regulation (EU) 2023/988.

Directive 2002/14/EC directive

European Parliament and Council directive of 11 March 2002 establishing a general framework for informing and consulting employees in the European Community, applicable alongside the AI regulation.

Directive 2002/58/EC directive

EU directive protecting private life and confidentiality of communications, establishing conditions for storing and accessing personal and non-personal data on terminal equipment in the electronic communications sector.
  • Regulation: The main Regulation has regard to Directive 2002/58/EC which protects private life and confidentiality of communications.
  • Regulation /2024/1689/oj: The AI regulation does not affect and operates alongside this directive on privacy and confidentiality.

Directive 2002/87/EC directive

Directive amended by Directive 2013/36/EU.

Directive 2004/42/EC directive

Directive amended by Regulation (EU) 2019/1020.

Directive 2005/29/EC directive

European Parliament and Council directive prohibiting unfair business-to-consumer commercial practices that may cause economic or financial harm.
  • Regulation: The AI regulation's prohibitions on manipulative practices are complementary to and work alongside Directive 2005/29/EC provisions.
  • European Parliament and of the Council: The directive was enacted by the European Parliament and the Council.

Directive 2006/42/EC directive

A European directive on machinery enacted on 17 May 2006 by the European Parliament and Council, listed as Union harmonisation legislation.

Directive 2006/48/EC directive

Directive repealed by Directive 2013/36/EU.

Directive 2006/49/EC directive

Directive repealed by Directive 2013/36/EU.

Directive 2007/46/EC directive

Earlier directive repealed by Regulation (EU) 2018/858.

Directive 2008/48/EC directive

EU directive of 23 April 2008 on credit agreements for consumers, repealing Council Directive 87/102/EEC and amended by Directive 2014/17/EU.

Directive 2009/138/EC directive

EU directive of 25 November 2009 on the taking-up and pursuit of insurance and reinsurance business (Solvency II), governing insurance and reinsurance undertakings.
  • quality management system: Directive 2009/138/EC applies the same quality management system requirements as Directive 2013/36/EU to insurance undertakings.

Directive 2009/142/EC directive

EU directive on appliances burning gaseous fuels that was repealed by Regulation (EU) 2016/426.

Directive 2009/22/EC directive

Previous directive on representative actions for consumer protection that was repealed.

Directive 2009/48/EC directive

European directive on the safety of toys enacted by the European Parliament and Council on 18 June 2009, listed as Union harmonisation legislation.
  • New Legislative Framework: Directive 2009/48/EC is listed as Union harmonisation legislation based on the New Legislative Framework.

Directive 2013/32/EU directive

Directive of the European Parliament and Council establishing common procedures for granting and withdrawing international protection in migration and asylum matters.

Directive 2013/36/EU directive

EU directive of 26 June 2013 establishing rules for credit institutions, including requirements for risk management, quality management systems, and market surveillance.
  • AI systems: Directive 2013/36/EU establishes framework for supervising AI systems used by financial institutions.
  • market surveillance: Directive 2013/36/EU requires competent authorities to conduct market surveillance of AI systems in financial institutions.
  • risk management: Directive 2013/36/EU contains obligations regarding risk management for credit institutions.
  • post-marketing monitoring: Directive 2013/36/EU integrates post-marketing monitoring obligations for providers.
  • quality management system: Directive 2013/36/EU establishes quality management system requirements with limited derogations for credit institutions.
  • Directive 2002/87/EC: Directive 2013/36/EU amends Directive 2002/87/EC.
  • Directive 2006/48/EC: Directive 2013/36/EU repeals Directive 2006/48/EC.
  • Directive 2006/49/EC: Directive 2013/36/EU repeals Directive 2006/49/EC.
  • Directive 2014/17/EU: Directive 2014/17/EU amends Directive 2013/36/EU.
  • credit institutions: Directive regulates credit institutions that may use high-risk AI systems.

Directive 2014/17/EU directive

European Parliament and Council directive of 4 February 2014 on credit agreements for consumers relating to residential immovable property, amending Directives 2008/48/EC and 2013/36/EU.

Directive 2014/30/EU directive

EU directive on the harmonisation of the laws of the Member States relating to electromagnetic compatibility.

Directive 2014/31/EU directive

Union law on legal metrology enacted by the European Parliament and Council on 26 February 2014 to harmonise Member States' laws relating to non-automatic weighing instruments.
  • European Commission: Union law on legal metrology that guides the Commission's work on measurement accuracy and commercial transparency.
  • European Parliament and Council: Directive 2014/31/EU was enacted by the European Parliament and Council.

Directive 2014/32/EU directive

Union law on legal metrology enacted by the European Parliament and Council on 26 February 2014 to harmonise Member States' laws relating to measuring instruments.
  • European Commission: Union law on legal metrology that guides the Commission's work on measurement accuracy and commercial transparency.
  • European Parliament and Council: Directive 2014/32/EU was enacted by the European Parliament and Council.

Directive 2014/53/EU directive

EU directive on the harmonisation of the laws of the Member States relating to the making available on the market of radio equipment.

Directive 2014/90/EU directive

European directive on marine equipment enacted on 23 July 2014, repealing Council Directive 96/98/EC and amended by the Artificial Intelligence Act to include requirements for AI systems as safety components.

Directive 95/16/EC directive

European directive amended by Directive 2006/42/EC on machinery.

Directive 95/46/EC directive

Previous directive on data protection, repealed by Regulation (EU) 2016/679.

Directive 98/79/EC directive

A directive that was repealed by Regulation (EU) 2017/746.

disclosure of AI-generated or manipulated text legal_obligation

Obligation to disclose AI-generated or manipulated text published for informing the public on matters of public interest, unless it has undergone human review or editorial control.

disclosure of artificial origin technical_requirement

A requirement to label AI output and clearly indicate that content has been artificially created or manipulated.
  • transparency obligation: The transparency obligation requires disclosure of the artificial origin of AI-generated content.

disclosure requirement for artificially generated content legal_obligation

A legal obligation requiring deployers to inform that content has been artificially generated or manipulated.
  • deep fake system: Deployers of deep fake systems are subject to the obligation to disclose that content has been artificially generated or manipulated.

discrimination evaluation_criterion

Risk of discriminatory impacts based on racial or ethnic origins, gender, disabilities, age, or sexual orientation.
  • high-risk AI systems: High-risk AI systems must be evaluated for potential discriminatory impacts and perpetuation of historical discrimination patterns.

discrimination legal_obligation

A prohibited practice under Union law that high-risk AI systems must not perpetuate through biased or unfair outcomes.
  • high-risk AI system: High-risk AI systems must not perpetuate discrimination prohibited by Union law.

distributor market_actor

A natural or legal person in the supply chain, other than the provider or importer, that makes high-risk AI systems available on the Union market and ensures their compliance with regulatory requirements.
  • Regulation: The Regulation applies to distributors who may assume provider obligations under certain conditions.
  • provider: A distributor can be considered a provider under Article 25 circumstances, such as putting its name or trademark on a high-risk AI system.
  • making available on the market: Distributors make AI systems available on the Union market.
  • operator: Operator is defined as including distributors among other market actors.
  • Article 24: Article 24 establishes obligations that apply to distributors of high-risk AI systems.
  • CE marking: Distributors must verify that high-risk AI systems bear the required CE marking.
  • EU declaration of conformity: Distributors must verify that high-risk AI systems are accompanied by a copy of the EU declaration of conformity.
  • instructions for use: Distributors must ensure high-risk AI systems are accompanied by instructions for use.
  • Section 2: Distributors must ensure high-risk AI systems comply with requirements set out in Section 2.
  • competent authorities: Distributors must inform competent authorities when high-risk AI systems present risks as defined in Article 79.
  • high-risk AI system: Distributors are required to ensure that storage and transport conditions do not jeopardise compliance of high-risk AI systems.
  • conformity with requirements: Distributors must ensure or take corrective actions to achieve conformity with Section 2 requirements.
  • competent authority: Distributors must inform competent authorities when a high-risk AI system presents a risk and provide information upon request.
  • information and documentation: Distributors must provide all information and documentation regarding their actions to demonstrate conformity.
  • Article 25: Article 25 establishes responsibilities for distributors along the AI value chain.

Distributor obligations legal_obligation

Obligations imposed on distributors pursuant to Article 24, subject to administrative fines for non-compliance.
  • Article 24: Distributor obligations are established in Article 24.
  • SMEs: SMEs are subject to reduced administrative fines for non-compliance with distributor obligations.

distributors market_actor

Market actors that distribute high-risk AI systems and must be informed of corrective actions.

diversity, non-discrimination and fairness evaluation_criterion

An ethical principle ensuring AI systems are developed inclusively to promote equal access and gender equality while avoiding discriminatory impacts and unfair biases.

Documentation documentation

Required records and technical specifications that providers must create and maintain in accordance with the Regulation, accessible to national public authorities and data protection authorities.
  • Delegated acts: Delegated acts amend the annexes containing documentation requirements.
  • Regulation: The Regulation requires that minimal documentation elements be set out in specific annexes.

documentation data_category

Records created or maintained under the Regulation that must be accessible to authorities protecting fundamental rights.
  • Article 77: Article 77 requires access to documentation created or maintained under the Regulation.
  • Article 78: Article 78 establishes confidentiality obligations for documentation obtained by public authorities.

documentation and information documentation

Materials that providers must submit to the AI Office and Commission for monitoring and compliance purposes.
  • AI Office: The AI Office can request documentation and information from providers of general-purpose AI models.

documentation and procedures documentation

Records and operational processes required under Union harmonised legislation and Regulation 2024/900.
  • provider: Providers must maintain documentation and procedures to demonstrate compliance with applicable requirements.

documentation in police file documentation

Required documentation of each use of post-remote biometric identification systems in relevant police files.
  • Regulation (EU) 2024/1689: The regulation requires that each use of post-remote biometric identification systems be documented in relevant police files.

documentation of assessment documentation

Required documentation prepared by providers assessing whether an AI system is high-risk before market placement.
  • high-risk AI system: Providers must prepare documentation of assessment for high-risk AI systems before market placement.
  • traceability and transparency: Documentation of assessment ensures traceability and transparency of AI system risk classification.

downstream provider market_actor

A provider of an AI system, including general-purpose AI systems, which integrates an AI model provided either by themselves or by another entity based on contractual relations.
  • general-purpose AI system: Downstream providers integrate general-purpose AI systems into their operations.
  • AI literacy: AI literacy requirements apply to providers and deployers of AI systems.

Downstream providers market_actor

Organizations that integrate general-purpose AI models into their own AI systems and can lodge complaints about infringements by model providers.

dual verification requirement legal_obligation

Requirement that no action or decision based on identification from high-risk AI systems shall be taken unless verified and confirmed by at least two natural persons with necessary competence, training and authority.
  • high-risk AI systems: The dual verification requirement mandates that high-risk AI systems used for identification must have their output verified by at least two natural persons.
  • law enforcement, migration, border control or asylum: The dual verification requirement does not apply to high-risk AI systems used in law enforcement, migration, border control or asylum where Union or national law considers it disproportionate.

Educational access determination systems ai_system

High-risk AI systems intended to determine access, admission, or assignment to educational and vocational training institutions.
  • ANNEX III: Educational access determination systems are classified as high-risk AI systems in ANNEX III.

Educational level assessment systems ai_system

High-risk AI systems intended to assess the appropriate education level an individual will receive or access.
  • ANNEX III: Educational level assessment systems are classified as high-risk AI systems in ANNEX III.

Electronic instructions for use documentation

Required electronic documentation providing usage instructions for AI systems, with exceptions for certain high-risk categories.
  • high-risk AI systems: Electronic instructions for use are not required for high-risk AI systems in law enforcement, migration, asylum and border control.

electronic notification tool technical_requirement

A technical system referenced in Article 30(2) for notified bodies to report changes.
  • Article 30(2): Article 30(2) establishes the electronic notification tool for reporting changes.

eligibility and selection criteria evaluation_criterion

Transparent and fair criteria for participation in AI regulatory sandboxes that must be applied by national competent authorities.
  • implementing acts: Implementing acts require transparent and fair eligibility and selection criteria for participation in AI regulatory sandboxes.

Emergency healthcare patient triage systems ai_system

AI systems used for triaging emergency healthcare patients to determine priority of care.

Emotion identification and inference systems ai_system

AI systems aiming to identify or infer emotions, which have serious concerns about their scientific basis due to cultural and individual variations in emotion expression.

emotion recognition system ai_system

An AI system designed to identify or infer emotions or intentions of natural persons based on their biometric data, such as facial expressions, voice, or behavioral patterns.

Emotion recognition systems ai_system

High-risk AI systems designed to recognize and classify human emotions from biometric data.
  • high-risk classification: Emotion recognition systems are classified as high-risk.
  • ANNEX III: Emotion recognition systems are classified as high-risk AI systems in ANNEX III.

end of 2030 legal_obligation

Deadline for operators of AI systems in large-scale IT systems to comply with Regulation requirements.

energy efficiency technical_requirement

A requirement for reducing energy consumption and resource usage in high-risk AI systems and general-purpose AI models.
  • high-risk AI system: High-risk AI systems must meet energy efficiency requirements during their lifecycle.
  • general-purpose AI models: General-purpose AI models must be developed with energy-efficient approaches.

energy-efficient development of general-purpose AI models technical_requirement

A standardisation objective focused on developing energy-efficient approaches for general-purpose AI models.

enforcement powers technical_requirement

Powers granted to market surveillance authorities to take measures against AI systems presenting risks.

ENISA institution

The European Union Agency for Cybersecurity, responsible for cybersecurity policy and certification tasks, and designated as a permanent member of the advisory forum.
  • Regulation (EU) 2019/881: ENISA has knowledge, expertise, and tasks assigned under Regulation (EU) 2019/881.
  • European Commission: The Commission should cooperate with ENISA on issues related to cybersecurity of AI systems.
  • advisory forum: ENISA is part of the advisory forum composition.
  • advisory forum: ENISA is designated as a permanent member of the advisory forum.

Entry/Exit System ai_system

System established to register entry and exit data and refusal of entry data of third-country nationals crossing external borders.

environmental sustainability evaluation_criterion

A criterion for assessing and minimizing the impact of AI systems on the environment, including energy efficiency.
  • codes of conduct: Codes of conduct require assessing and minimizing the impact of AI systems on environmental sustainability.

essential public assistance benefits and services data_category

Healthcare services, social security benefits, social services, and social and housing assistance provided by public authorities.

ethical principles legal_obligation

Fundamental principles that the regulation ensures protection of, as requested by the European Parliament.

ethical review legal_obligation

A requirement under Union or national law that must be conducted independently of real-world testing of high-risk AI systems.
  • high-risk AI systems: Testing of high-risk AI systems is subject to ethical review required by Union or national law.

Ethics guidelines for trustworthy AI documentation

Guidelines developed by the High-Level Expert Group on Artificial Intelligence for ensuring trustworthy AI development.

EU database institution

Centralized Union-wide database established by the Commission for registration of high-risk AI systems with both public and secure non-public sections, containing information from providers, deployers, and public authorities.
  • AI system: AI systems must be registered in the EU database established under the Regulation.
  • high-risk AI systems: High-risk AI systems must be registered in the EU database at national level.
  • Regulation: The EU database is established by the Regulation.
  • Commission: The Commission serves as the controller of the EU database and provides technical and administrative support, with access to restricted sections for sensitive areas.
  • market surveillance authorities: Market surveillance authorities have restricted access to the secure non-public section of the EU database.
  • law enforcement, migration, asylum and border control management: High-risk AI systems in these areas must be registered in the secure non-public section of the EU database.
  • critical infrastructure: High-risk AI systems in critical infrastructure are only registered at national level, not in the EU database.
  • cybersecurity risks: The Commission must consider cybersecurity risks when managing the EU database.
  • functional specifications: The EU database requires development of functional specifications by the Commission.
  • independent audit report: An independent audit report is required to ensure full functionality of the EU database.
  • Directive (EU) 2019/882: The EU database and its information must comply with accessibility requirements under the directive.
  • Commission: The Commission develops functional specifications and acts as data controller for the EU database.
  • real-world testing: Real-world testing must be registered in dedicated sections of the EU database with limited exceptions.
  • real-time remote biometric identification system: The biometric identification system must be registered in the EU database according to Article 49.
  • Article 49: Article 49 establishes registration obligations for high-risk AI systems in the EU database referenced in Article 71.
  • Article 71: The EU database is referred to in Article 71.
  • Annex VIII: The EU database registration requirements reference information points specified in Annex VIII.
  • Annex IX: The EU database registration requirements reference information points specified in Annex IX.
  • Article 74(8): Article 74(8) defines which national authorities have access to restricted sections of the EU database.
  • high-risk AI systems: High-risk AI systems must be registered in the EU database at national level.
  • real-world conditions testing: Real-world testing must be registered in the EU database.
  • provider: Providers are required to enter data listed in Sections A and B of Annex VIII into the EU database.
  • deployer: Deployers who are or act on behalf of public authorities must enter data listed in Section C of Annex VIII into the EU database.
  • Annex VIII: Annex VIII specifies the data that must be entered into the EU database, organized in sections A, B, and C.
  • Article 49: Article 49 establishes registration obligations for high-risk AI systems in the EU database referenced in Article 71.
  • Article 60: Article 60 contains provisions regarding restricted access to certain information in the EU database.
  • personal data: The EU database contains personal data only as necessary, including names and contact details of responsible natural persons.
  • accessibility requirements: The EU database must comply with applicable accessibility requirements.

EU database documentation

A centralized database established and managed by the Commission for registration of high-risk AI systems, provider information, and AI-related incidents.
  • Commission: The Commission establishes and manages the EU database for high-risk AI system registration.
  • providers of high-risk AI systems: Providers of high-risk AI systems must register themselves and information about their systems in the EU database.
  • deployers of high-risk AI systems: Public authority deployers of high-risk AI systems must register themselves in the EU database before using listed systems.
  • the Board: The Board evaluates and reviews the functioning of the EU database.
  • market surveillance authority: Market surveillance authorities may access information stored in the EU database referred to in Article 71.

EU database for high-risk AI systems institution

A database set up and maintained by the Commission in collaboration with Member States containing information on high-risk AI systems.
  • Article 71: Article 71 establishes the EU database for high-risk AI systems.
  • Commission: The Commission shall set up and maintain the EU database.
  • Member States: Member States collaborate with the Commission in setting up and maintaining the EU database.
  • High-risk AI systems: High-risk AI systems must be registered in the EU database.
  • Provider: Providers must enter data into the EU database.
  • Authorised representative: Authorised representatives may enter data into the EU database on behalf of providers.
  • Annex VIII: Annex VIII specifies the data to be entered into the EU database.

EU declaration of conformity documentation

A written, machine-readable formal declaration that providers must draw up for each high-risk AI system, identifying the system and stating compliance with applicable Union law requirements.
  • cybersecurity requirements: The EU declaration of conformity demonstrates achievement of cybersecurity requirements.
  • high-risk AI systems: Providers must draw up an EU declaration of conformity for high-risk AI systems.
  • high-risk AI system: Providers of high-risk AI systems must draw up an EU declaration of conformity.
  • Article 47: Article 47 establishes the requirements and procedures for the EU declaration of conformity.
  • Article 18: Article 18 requires providers to keep the EU declaration of conformity on file.
  • Article 47: Article 47 establishes the requirements and procedures for the EU declaration of conformity.
  • authorised representative: The authorised representative must verify that the EU declaration of conformity has been drawn up.
  • Article 47: The EU declaration of conformity is required and established by Article 47.
  • importer: Importers must ensure the system is accompanied by the EU declaration of conformity and retain copies for 10 years after market placement.
  • Article 47: Article 47 establishes the requirements and procedures for the EU declaration of conformity.
  • distributor: Distributors must verify that high-risk AI systems are accompanied by a copy of the EU declaration of conformity.
  • provider: The provider draws up and maintains the EU declaration of conformity for high-risk AI systems.
  • high-risk AI system: The EU declaration of conformity identifies and documents compliance of the high-risk AI system.
  • Section 2: The EU declaration of conformity states that the high-risk AI system meets requirements in Section 2.
  • Annex V: The EU declaration of conformity must contain information set out in Annex V.
  • national competent authorities: National competent authorities receive and review EU declarations of conformity upon request.
  • provider: The provider draws up and maintains the EU declaration of conformity.
  • Regulation 2024/1689: The EU declaration of conformity certifies conformity with Regulation 2024/1689.
  • Regulation 2024/1689: EU declaration of conformity references compliance with Regulation 2024/1689.

EU Regulation 2024/1689 regulation

European Union regulation establishing requirements for high-risk AI systems, published in Official Journal L on 12.7.2024.
  • ANNEX III: EU Regulation 2024/1689 contains ANNEX III which lists high-risk AI systems.

Eurodac institution

An EU system for the comparison of biometric data to effectively apply migration-related regulations and identify illegally staying third-country nationals and stateless persons.
  • Regulation (EU) 2024/1358: Regulation (EU) 2024/1358 establishes Eurodac as an EU system for biometric data comparison.
  • biometric data: Eurodac is designed for the comparison of biometric data.
  • Europol: Europol is authorized to request comparison with Eurodac data for law enforcement purposes.
  • Member States' law enforcement authorities: Member States' law enforcement authorities are authorized to request comparison with Eurodac data.

Eurodac ai_system

System providing data on third-country nationals and requests for comparison by Member States' law enforcement authorities and Europol.

EuroHPC Joint Undertaking institution

A Union-level high-performance computing initiative that develops synergies with AI enforcement structures to build Union expertise and capabilities in AI.
  • 2017/746: The regulation references synergies with the EuroHPC Joint Undertaking for building Union expertise.

European Artificial Intelligence Board institution

A governance body established under the Regulation to coordinate AI regulation implementation across Member States, composed of representatives from each Member State and the European Data Protection Supervisor as observer, supporting the Commission in promoting AI literacy and risk assessment guidelines.
  • Commission: The Board supports the Commission in promoting AI literacy tools and public awareness.
  • Commission: The Commission must consult the European Artificial Intelligence Board before providing guidelines.
  • Regulation 2024/1689: The Regulation establishes the European Artificial Intelligence Board through Article 65.
  • Article 65: Article 65 establishes and defines the structure of the European Artificial Intelligence Board.
  • AI Office: The AI Office attends the Board's meetings and participates in its operations without voting rights.
  • Member States: Member States designate representatives to compose the European Artificial Intelligence Board.
  • European Data Protection Supervisor: The European Data Protection Supervisor participates as an observer in the Board's meetings.
  • Article 66: The Board's tasks are defined in Article 66.

European Artificial Intelligence Office institution

The AI Office is responsible for developing templates and questionnaires to facilitate compliance with AI regulations and reduce administrative burden for deployers.
  • impact assessments: The AI Office should develop templates and questionnaires to facilitate compliance with impact assessment requirements.

European Central Bank institution

An EU institution responsible for prudential supervision of credit institutions under the Single Supervisory Mechanism that issued an opinion on the AI Regulation and receives information from market surveillance activities related to AI systems.

European Commission institution

The EU executive institution that submitted the AI Regulation proposal, is delegated power to adopt acts and amend regulatory provisions, and is responsible for encouraging development of AI benchmarks, providing technical support for regulatory sandboxes, and evaluating voluntary codes of conduct.

European Commission legislative_body

The executive body of the European Union responsible for cooperation on cybersecurity matters, developing AI initiatives, issuing standardisation requests, adopting implementing acts, and conducting enforcement assessments.
  • ENISA: The Commission should cooperate with ENISA on issues related to cybersecurity of AI systems.
  • data access infrastructure: The Commission may develop initiatives to facilitate the lowering of technical barriers and improve data access infrastructure for AI development.
  • semantic and technical interoperability: The Commission develops initiatives to promote semantic and technical interoperability of different types of data for cross-border AI development.
  • Harmonised standards: The European Commission shall issue standardisation requests covering requirements for harmonised standards.
  • Article 10(1): The Commission acts pursuant to Article 10(1) when requesting harmonised standards.
  • European standardisation organisations: The Commission requests European standardisation organisations to draft harmonised standards.
  • Regulation: The Commission shall submit appropriate proposals to amend the Regulation.
  • European Parliament: The Commission shall report on enforcement assessment to the European Parliament.
  • Council: The Commission shall report on enforcement assessment to the Council.
  • European Economic and Social Committee: The Commission shall report on enforcement assessment to the European Economic and Social Committee.

European Committee for Electrotechnical Standardization (CENELEC) institution

European standardization body for electrotechnical standards, designated as a permanent member of the advisory forum for technical expertise.
  • advisory forum: CENELEC is part of the advisory forum composition.
  • advisory forum: CENELEC is designated as a permanent member of the advisory forum.

European Committee for Standardization (CEN) institution

A European standardization body designated as a permanent member of the advisory forum for providing technical expertise in AI regulation.
  • advisory forum: CEN is part of the advisory forum composition.
  • advisory forum: CEN is designated as a permanent member of the advisory forum.

European common data spaces institution

Data spaces established by the Commission to facilitate trustful, accountable and non-discriminatory access to high-quality data for AI system training, validation and testing.
  • high-risk AI systems: European common data spaces provide access to high-quality data for training, validation and testing of high-risk AI systems.

European Council legislative_body

EU institution that provided conclusions on promoting European human-centric approach to AI and being a global leader in secure, trustworthy and ethical AI development.
  • Regulation 2024/1689: The regulation is based on conclusions from the European Council regarding human-centric AI approach.

European Criminal Records Information System on third-country nationals and stateless persons (ECRIS-TCN) ai_system

A centralised system established by Regulation (EU) 2019/816 to identify Member States holding conviction information on third-country nationals and stateless persons.

European Data Protection Board institution

An EU institution responsible for advising on data protection matters that was consulted on the AI Regulation and delivered a joint opinion on 18 June 2021.

European Data Protection Supervisor institution

EU institution responsible for supervising compliance with AI regulations for Union institutions, bodies, offices and agencies, with power to impose administrative fines and establish AI regulatory sandboxes.
  • this Regulation: European Data Protection Supervisor is designated as competent market surveillance authority under this Regulation.
  • Administrative penalties and fines: The European Data Protection Supervisor has the power to impose fines on Union institutions, agencies and bodies.
  • Regulation: The European Data Protection Supervisor has the power to impose fines for violations of the Regulation.
  • Article 42(1) and (2) of Regulation (EU) 2018/1725: The supervisor was consulted in accordance with this article.
  • AI Regulation 2024/1689: The supervisor delivered a joint opinion on the AI regulation on 18 June 2021.
  • national competent authority: The European Data Protection Supervisor functions as a national competent authority for AI systems used by Union institutions.
  • AI regulatory sandbox: The European Data Protection Supervisor may establish an AI regulatory sandbox for Union institutions.
  • AI regulatory sandboxes: The European Data Protection Supervisor may establish AI regulatory sandboxes with effects recognized across the Union.
  • European Artificial Intelligence Board: The European Data Protection Supervisor participates as an observer in the Board's meetings.
  • Regulation: The European Data Protection Supervisor acts as competent authority for Union institutions under the Regulation.
  • Union institutions, bodies, offices or agencies: The European Data Protection Supervisor acts as market surveillance authority for Union institutions, bodies, offices and agencies.
  • Article 100: Article 100 establishes the authority of the European Data Protection Supervisor to impose administrative fines.
  • administrative fines: The European Data Protection Supervisor may impose administrative fines on Union institutions.
  • administrative fine: The European Data Protection Supervisor determines and imposes administrative fines for infringements.
  • Regulation: The European Data Protection Supervisor enforces compliance with the Regulation through administrative proceedings.
  • Commission: The European Data Protection Supervisor notifies the Commission annually of imposed administrative fines.

European Declaration on Digital Rights and Principles for the Digital Decade directive

A European policy document that should be taken into account when establishing rules for AI systems.

European Digital Innovation Hubs institution

Facilities established by the Commission and Member States to support development and assessment of high-risk AI systems, facilitate access to high-quality data sets, and contribute to regulatory compliance implementation.
  • this Regulation: European Digital Innovation Hubs contribute to implementation of the regulation.
  • Regulation: The European Digital Innovation Hubs contribute to the implementation of the Regulation.

European Economic and Social Committee institution

An EU institution that issued an opinion on the AI Regulation and receives enforcement assessment reports from the Commission.
  • REGULATION (EU) 2024/1689: The European Economic and Social Committee issued an opinion on the regulation.
  • European Commission: The Commission shall report on enforcement assessment to the European Economic and Social Committee.

European harmonised standard technical_requirement

Standard that, when published and assessed as suitable by the AI Office, grants providers presumption of conformity.
  • AI Office: AI Office assesses harmonised standards as suitable to cover relevant obligations.
  • General-purpose AI models: Compliance with European harmonised standards grants providers presumption of conformity with obligations.

European harmonised standards technical_requirement

Standardized technical requirements that grant providers presumption of conformity with Article 55 obligations when complied with.
  • general-purpose AI models: Compliance with European harmonised standards grants providers presumption of conformity with obligations.
  • Article 55: Compliance with European harmonised standards grants presumption of conformity with Article 55 obligations.

European health data space institution

A specific common data space facilitating non-discriminatory access to health data for training AI algorithms in a privacy-preserving and trustworthy manner.
  • high-risk AI systems: The European health data space facilitates access to health data for training AI algorithms.

European Parliament legislative_body

One of the two co-legislators of the European Union that enacted Regulation (EU) 2024/1689 and receives reports from the Commission on AI regulation implementation, evaluation, and standardisation progress.
  • REGULATION (EU) 2024/1689: The regulation was enacted by the European Parliament.
  • Regulation 2024/1689: The regulation incorporates ethical principles protection as requested by the European Parliament.
  • Regulation (EC) No 810/2009: The regulation was enacted by the European Parliament and the Council.
  • Directive 2013/32/EU: The directive was enacted by the European Parliament and the Council.
  • Directive 2002/14/EC: Directive 2002/14/EC was enacted by the European Parliament.
  • Directive (EU) 2019/1937: Directive was enacted by the European Parliament.
  • Commission: The Commission must submit findings and reports to the European Parliament on regulation evaluation and amendments.
  • Delegation of Power: The European Parliament can oppose the extension or revoke the delegation of power to the Commission.
  • delegated act: European Parliament can object to delegated acts within three months of notification.
  • Commission: European Parliament can initiate extension of periods in Commission procedures.
  • Commission: The Commission submits reports to the European Parliament on AI Office evaluation and standardisation progress.
  • European Commission: The Commission submits reports and evaluations to the European Parliament.
  • European Commission: The Commission shall report on enforcement assessment to the European Parliament.
  • Regulation 2024/1689: Regulation 2024/1689 was enacted by the European Parliament.

European Parliament and Council legislative_body

The legislative bodies of the European Union responsible for enacting regulations and directives governing AI systems, data protection, and related matters.

European Parliament and of the Council legislative_body

The joint legislative body of the European Union responsible for enacting regulations and directives governing AI systems, products, and related matters across the Union.

European standardisation organisation institution

Organization responsible for adopting harmonised standards and proposing them to the Commission.

European standardisation organisations institution

Organizations responsible for developing, adopting, and delivering harmonised standards in accordance with Commission requests to support compliance with AI regulatory requirements.
  • national competent authorities: National competent authorities may involve European standardisation organisations in sandbox supervision.
  • Notified bodies: Notified bodies must participate in or be aware of relevant standards from European standardisation organisations.
  • Commission: The Commission issues standardisation requests to European standardisation organisations.
  • Regulation (EU) No 1025/2012: European standardisation organisations must provide evidence of best efforts in accordance with Article 24 of Regulation 1025/2012.
  • European Commission: The Commission requests European standardisation organisations to draft harmonised standards.

European Telecommunications Standards Institute (ETSI) institution

European standardization body for telecommunications, designated as a permanent member of the advisory forum for technical expertise.
  • advisory forum: ETSI is part of the advisory forum composition.
  • advisory forum: ETSI is designated as a permanent member of the advisory forum.

European Travel Information and Authorisation System ai_system

System established to process travel information and authorisation for third-country nationals.
  • Regulation (EU) 2018/1240: Regulation (EU) 2018/1240 establishes the European Travel Information and Authorisation System (ETIAS).

European Travel Information and Authorisation System (ETIAS) ai_system

A European information system established through Regulation (EU) 2018/1241 for travel information and authorization purposes.
  • Regulation (EU) 2018/1241: Regulation (EU) 2018/1241 establishes the European Travel Information and Authorisation System (ETIAS).

European Union institution

The supranational organization that establishes regulatory frameworks and coordinates with member states and third countries on law enforcement and judicial cooperation.

European Union Aviation Safety Agency institution

EU agency established by Regulation (EU) 2018/1139 to develop common rules in the field of civil aviation and ensure high levels of safety.

Europol institution

A Union law enforcement agency authorized to establish cooperation frameworks with third countries and international organizations and to request comparison with Eurodac data for law enforcement purposes.
  • law enforcement and judicial cooperation: Europol is involved in establishing cooperation frameworks and agreements for law enforcement and judicial cooperation.
  • Eurodac: Europol is authorized to request comparison with Eurodac data for law enforcement purposes.

evaluation strategies technical_requirement

Detailed descriptions of evaluation methodologies including evaluation results, criteria, metrics, and methodology for identifying limitations of AI models.

evidence reliability evaluation ai_system

AI systems for evaluating the reliability of evidence in criminal investigation or prosecution.

examination procedure legislative_procedure

Procedure referred to in Article 98(2) for adoption of implementing acts by the Commission.
  • Commission: Commission implementing acts must be adopted in accordance with the examination procedure referred to in Article 98(2).

exceptions and limitations for text and data mining legal_obligation

Rules allowing reproductions and extractions of works or other subject matter for the purpose of text and data mining under certain conditions.
  • Directive (EU) 2019/790: Directive (EU) 2019/790 introduced exceptions and limitations allowing reproductions and extractions for text and data mining purposes.
  • Directive (EU) 2019/790: The exceptions and limitations for text and data mining are based on Directive (EU) 2019/790.

Exit report documentation

Written documentation provided by competent authorities detailing sandbox activities, results, and learning outcomes when participants exit or terminate participation.
  • This Regulation: Exit reports document compliance with regulatory requirements and obligations.
  • Conformity assessment: Providers use sandbox documentation to demonstrate compliance through conformity assessment.
  • Market surveillance authorities: Market surveillance authorities take exit reports positively into account for accelerating assessment procedures.
  • Notified bodies: Notified bodies consider exit reports when conducting conformity assessments.
  • Commission: Commission is authorized to access and take into account exit reports in exercising regulatory tasks.
  • Board: Board is authorized to access and consider exit reports in regulatory oversight.
  • Article 78: Exit reports are subject to confidentiality provisions in Article 78.
  • Single information platform: Exit reports may be made publicly available through the single information platform.

facial expressions data_category

Biometric data including basic expressions such as frowns or smiles that can be captured and processed.
  • Regulation: The Regulation classifies facial expressions as a type of biometric data covered under its scope.

Facial recognition scraping systems ai_system

AI systems that create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.

Fail-safe plans technical_requirement

Mechanisms enabling AI systems to safely interrupt their operation in the presence of anomalies or when operating outside predetermined boundaries.
  • High-risk AI systems: High-risk AI systems should implement fail-safe plans as part of technical solutions to ensure robustness.

feedback loops technical_requirement

Mechanisms where AI system outputs influence inputs for future operations, potentially amplifying existing biases and discrimination.
  • bias mitigation: Feedback loops in AI systems require bias mitigation to prevent amplification of existing discrimination.

Feedback loops mitigation technical_requirement

Requirement for high-risk AI systems that continue learning after deployment to eliminate or reduce biased outputs influencing future operations.
  • High-risk AI systems: High-risk AI systems that continue learning after deployment must eliminate or reduce biased outputs and address feedback loops with mitigation measures.

financial institutions market_actor

Institutions subject to Union financial services law that may place high-risk AI systems on the market and are subject to internal governance and monitoring requirements.
  • Union financial services law: Financial institutions are subject to internal governance and process requirements under Union financial services law.

financial intelligence units institution

Units carrying out administrative tasks analyzing information pursuant to Union anti-money laundering law.
  • high-risk AI systems: AI systems used by financial intelligence units for anti-money laundering should not be classified as high-risk law enforcement systems.

floating point operations technical_requirement

A measurement of the cumulative amount of computation used for training a general-purpose AI model, serving as a relevant approximation for model capabilities and systemic risk classification.
  • high-impact capabilities: High-impact capabilities are approximated and evaluated using floating point operations as a measurement of computational training.
  • Regulation: The Regulation requires setting an initial threshold of floating point operations to determine if a model presents systemic risks.

floating point operations threshold technical_requirement

A computational metric used to determine whether a general-purpose AI model presents systemic risks, with initial thresholds adjusted over time to reflect technological changes.

floating-point operation technical_requirement

A mathematical operation or assignment involving floating-point numbers, which are real numbers typically represented on computers by an integer of fixed precision scaled by an integer exponent.

Free and open-source AI components ai_system

Software, data, models, tools, services, or processes of an AI system provided under free and open-source licenses with publicly available parameters, weights, and model architecture information.
  • Regulation: Free and open-source AI components are covered and regulated by the Regulation.

free and open-source licence legal_obligation

A licensing arrangement that allows users to freely access, use, modify, and redistribute AI models with publicly available parameters and architecture information, with certain exemptions from compliance obligations.
  • general-purpose AI models: General-purpose AI models released under free and open-source licenses are exempt from certain compliance requirements while ensuring transparency and openness.
  • general-purpose AI models: Providers of AI models released under free and open-source licences are exempt from certain obligations, except for models with systemic risks.
  • systemic risks: The exemption for free and open-source licences does not apply to general-purpose AI models with systemic risks.
  • provider: Providers of models released under free and open-source licences may be exempt from certain obligations unless systemic risks are present.

free and open-source licences technical_requirement

Licensing model under which AI systems may be exempted from the regulation unless they are high-risk or fall under specific articles.
  • Regulation /2024/1689/oj: The regulation exempts AI systems released under free and open-source licenses unless they are high-risk or fall under specific articles.

free movement of AI-based goods and services legal_obligation

A requirement established by the regulation to ensure cross-border circulation of AI-based goods and services within the Union.
  • AI Regulation: The regulation ensures the free movement and cross-border circulation of AI-based goods and services.

functional specifications documentation

Technical specifications for the EU database to be developed by the Commission.
  • EU database: The EU database requires development of functional specifications by the Commission.

functionally separate, isolated and protected data processing environment technical_requirement

A secure, segregated infrastructure under provider control where sandbox data processing occurs with restricted access.
  • personal data: Personal data in sandbox contexts must be processed in a functionally separate, isolated and protected environment.

fundamental rights legal_obligation

Core rights including democracy, rule of law, human dignity, privacy, personal data protection, freedom of expression, and non-discrimination that must be protected from harm or adverse impact caused by AI systems.
  • AI Regulation: The regulation requires protection of fundamental rights including democracy, rule of law, and environmental protection.
  • Union legal framework on AI: The framework requires protection of fundamental rights including democracy and rule of law.
  • Regulation (EU) 2019/1020: The regulation is complementary to and without prejudice to existing Union law on fundamental rights.
  • AI system: AI systems can have adverse impacts on fundamental rights protected by the Charter.
  • high-risk AI systems: High-risk AI systems must comply with requirements to protect fundamental rights.
  • Article 7: Article 7 requires that amendments ensure consistency with safety and fundamental rights protections.
  • AI system: AI systems can have impact on fundamental rights or give rise to significant concerns regarding such harm.
  • AI regulatory sandbox: The sandbox requires protection of fundamental rights during AI system testing.

fundamental rights evaluation_criterion

Core rights of natural persons that may be impacted by high-risk AI systems and must be protected through impact assessments and risk mitigation measures under Union law.
  • impact assessment: Impact assessments identify specific risks of harm likely to impact the fundamental rights of affected persons.
  • high-risk AI systems: High-risk AI systems are evaluated based on risks they pose to fundamental rights.

Fundamental Rights Agency institution

An EU institution designated as a permanent member of the advisory forum for AI regulation to provide stakeholder representation.
  • advisory forum: The Fundamental Rights Agency is part of the advisory forum composition.
  • advisory forum: The Fundamental Rights Agency is designated as a permanent member of the advisory forum.

fundamental rights and freedoms legal_obligation

Protection standards that must be included in international agreements and frameworks for cooperation under this Regulation.
  • Regulation: The Regulation requires that international agreements include adequate safeguards for the protection of fundamental rights and freedoms.

fundamental rights and freedoms evaluation_criterion

Protection standards that third countries and international organisations must provide regarding individuals.

fundamental rights concerns evaluation_criterion

Criterion for evaluating whether harmonised standards adequately address fundamental rights protection.
  • harmonised standards: Harmonised standards are evaluated against the criterion of adequately addressing fundamental rights concerns.

fundamental rights impact assessment evaluation_criterion

A required assessment evaluating the impact of real-time remote biometric identification systems on fundamental rights before authorization by law enforcement authorities.

fundamental rights impact assessment legal_obligation

Required assessment that deployers of high-risk AI systems must conduct to identify and evaluate risks to fundamental rights and natural persons before deployment.
  • Regulation 2024/1689: The regulation requires deployers of high-risk AI systems to conduct fundamental rights impact assessments.
  • deployers of high-risk AI systems: Deployers must carry out fundamental rights impact assessments prior to deployment.
  • high-risk AI system: High-risk AI systems are subject to mandatory fundamental rights impact assessments before first use.
  • Article 13: The impact assessment process references information provided by the provider according to Article 13.
  • Regulation (EU) 2016/679: Fundamental rights impact assessment complements data protection impact assessment under this regulation.
  • Directive (EU) 2016/680: Fundamental rights impact assessment complements data protection impact assessment under this directive.
  • AI Office: AI Office shall develop templates and questionnaires to facilitate compliance with fundamental rights impact assessment obligations.
  • Article 46(1): Article 46(1) provides exemption from notification obligations for deployers.

fundamental rights impact assessment technical_requirement

A required assessment evaluating the impact of high-risk AI systems on fundamental rights prior to deployment.
  • deployers: Deployers are required to perform fundamental rights impact assessments for high-risk AI systems prior to deployment.

fundamental rights impact assessment documentation

An assessment required for high-risk AI systems to evaluate their impact on fundamental rights.
  • Article 49(3): Article 49(3) requires deployers to submit summaries of fundamental rights impact assessments.
  • Article 27: Article 27 requires the conduct of fundamental rights impact assessments for high-risk AI systems.

Fundamental rights protection legal_obligation

An obligation to ensure respect for fundamental rights including privacy, non-discrimination, free movement, and good administration while obtaining benefits from AI systems and enabling democratic control.
  • Regulation 2024/1689: The regulation requires protection of fundamental rights, health and safety in AI systems.

fundamental rights protection evaluation_criterion

The objective of ensuring that AI systems do not violate fundamental rights of individuals and workers.
  • information requirement: The information requirement is ancillary and necessary to the objective of protecting fundamental rights.

General description of the AI system technical_requirement

A required component of technical documentation that includes intended purpose, provider name, system version, and hardware/software interactions.
  • Technical documentation: Technical documentation must contain a general description of the AI system.
  • Provider: The general description must include the name of the provider.
  • Deployer: The general description must include instructions for use and user-interface information for the deployer.

general-purpose AI model ai_model

An AI model trained with large amounts of data using self-supervision at scale that displays significant generality, can perform a wide range of distinct tasks, and may present systemic risks requiring regulatory oversight.
  • general-purpose AI system: General-purpose AI models are integrated into or form part of general-purpose AI systems.
  • Regulation 2024/1689: The regulation establishes requirements and transparency measures for general-purpose AI models.
  • technical documentation: Providers of general-purpose AI models must prepare and maintain technical documentation.
  • transparency measures: Proportionate transparency measures apply to providers of general-purpose AI models.
  • AI system: AI system is based on a general-purpose AI model.
  • Commission: The Commission may request access to general-purpose AI models through APIs and technical means, and may make binding commitments from providers to implement mitigation measures.
  • AI Office: The AI Office monitors compliance of general-purpose AI models, initiates structured dialogues regarding those with systemic risks, and can enforce access to related information during investigations.
  • systemic risk: Systemic risk is a specific risk associated with high-impact capabilities of general-purpose AI models.
  • high-impact capabilities: High-impact capabilities are a characteristic of general-purpose AI models.
  • general-purpose AI system: A general-purpose AI system is based on a general-purpose AI model.
  • Commission: The Commission has authority to designate general-purpose AI models as models with systemic risk.
  • systemic risk: General-purpose AI models are evaluated based on whether they present systemic risks.
  • Article 54: Article 54 establishes rules for authorised representatives of general-purpose AI model providers.
  • AI systems: General-purpose AI models are designed to be integrated into AI systems placed on the market or put into service.
  • systemic risks: General-purpose AI models presenting systemic risks are subject to additional obligations.
  • AI Office: The AI Office monitors compliance of general-purpose AI models, initiates structured dialogues regarding those with systemic risks, and can enforce access to related information during investigations.
  • Commission: The Commission may request access to general-purpose AI models through APIs and technical means, and may make binding commitments from providers to implement mitigation measures.
  • AI Office: The AI Office monitors compliance of general-purpose AI models, initiates structured dialogues regarding those with systemic risks, and can enforce access to related information during investigations.
  • Article 101: Article 101 specifies fines for failure to provide access to general-purpose AI models.
  • technical documentation: Technical documentation must describe the general-purpose AI model and its characteristics.
  • AI system: The general-purpose AI model is designed to be integrated into AI systems.

general-purpose AI model with systemic risk ai_model

A general-purpose AI model that meets specific thresholds for high-impact capabilities and is presumed to present systemic risks, subject to enhanced regulatory obligations and oversight.
  • AI Office: General-purpose AI models with systemic risks are subject to oversight and notification requirements by the AI Office.
  • high-impact capabilities threshold: Meeting the high-impact capabilities threshold classifies a general-purpose AI model as one with systemic risks.
  • Regulation: General-purpose AI models with systemic risk are subject to enhanced obligations under the Regulation.
  • Commission: The Commission is empowered to designate general-purpose AI models as having systemic risk.
  • AI Office: The AI Office monitors and receives alerts about general-purpose AI models that should be classified as having systemic risk.
  • Commission: The Commission designates general-purpose AI models as presenting systemic risks based on criteria in Annex XIII.

general-purpose AI model with systemic risk evaluation_criterion

A classification criterion for general-purpose AI models that pose systemic risks at Union level.
  • general-purpose AI models: General-purpose AI models can be classified as having systemic risk based on specific criteria.

general-purpose AI model with systemic risk ai_system

A classification of general-purpose AI models that meet conditions regarding high impact capabilities or equivalent capabilities as determined by the Commission.
  • Article 51: Article 51 defines the classification criteria for general-purpose AI models with systemic risk.
  • high impact capabilities: Classification as a model with systemic risk requires evaluation of high impact capabilities.
  • Commission: The Commission can classify general-purpose AI models as having systemic risk based on qualified alerts or ex officio decisions.

general-purpose AI model with systemic risks ai_model

A general-purpose AI model that meets computational thresholds or demonstrates high-impact capabilities equivalent to the most advanced models, thereby presenting systemic risks requiring regulatory oversight.
  • Regulation: The Regulation establishes a methodology for classifying general-purpose AI models as those with systemic risks.
  • high-impact capabilities: General-purpose AI models with systemic risks are evaluated based on whether they possess high-impact capabilities.
  • systemic risks: General-purpose AI models with systemic risks are subject to evaluation criteria based on systemic risk factors.
  • Commission: The Commission has authority to take individual decisions designating general-purpose AI models as having systemic risk.
  • floating point operations threshold: Meeting the floating point operations threshold creates a presumption that a model is a general-purpose AI model with systemic risks.
  • Regulation: The Regulation contains criteria and procedures for classifying and designating general-purpose AI models with systemic risks.
  • training data set quality and size: Training data quality and size is a criterion for assessing systemic risk designation.
  • number of business and end users: The number of business and end users is a criterion for assessing systemic risk.
  • input and output modalities: Input and output modalities are criteria for assessing systemic risk of general-purpose AI models.
  • level of autonomy and scalability: Level of autonomy and scalability are criteria for evaluating systemic risk designation.

general-purpose AI models ai_model

AI models with broad capabilities and generality, typically trained on large amounts of data, that are subject to specific compliance obligations including documentation, copyright compliance, systemic risk assessment, and monitoring under the Regulation.
  • free and open-source licence: General-purpose AI models released under free and open-source licenses are exempt from certain compliance requirements while ensuring transparency and openness.
  • AI systems: General-purpose AI models are distinguished from AI systems as essential components that require additional elements to become functional systems.
  • Regulation: The Regulation establishes rules and value chain obligations applicable to providers of general-purpose AI models placed on the market.
  • Regulation: General-purpose AI models are subject to the provisions and obligations of the Regulation once placed on the market.
  • Large generative AI models: Large generative AI models are classified as a typical example of general-purpose AI models.
  • placing on the market: Obligations for providers of general-purpose AI models apply once the models are placed on the market.
  • transparency-related requirements: General-purpose AI models are subject to transparency-related requirements unless they qualify for exceptions.
  • Union copyright law: General-purpose AI model providers must ensure compliance with Union copyright law in their training and fine-tuning processes.
  • systemic risk: General-purpose AI models are evaluated for systemic risk to determine if transparency exceptions apply.
  • text and data mining: Training of general-purpose AI models requires text and data mining processes.
  • copyright compliance policy: Providers of general-purpose AI models must establish a policy to comply with Union copyright law.
  • training data summary: Providers must make publicly available a summary of content used for training.
  • Regulation: The Regulation establishes rules and value chain obligations applicable to providers of general-purpose AI models placed on the market.
  • Commission: Providers of general-purpose AI models must report serious incidents to the Commission.
  • national competent authorities: Providers must report relevant information and corrective measures to national competent authorities.
  • cybersecurity protection: Providers must ensure adequate cybersecurity protection for models and their physical infrastructure.
  • codes of practice: Codes of practice cover obligations for providers of general-purpose AI models.
  • AI Office: The AI Office conducts monitoring activities regarding general-purpose AI models.
  • high-risk AI systems: High-risk AI systems can be built on general-purpose AI models.
  • AI Office: The AI Office monitors and ensures compliance with rules applicable to general-purpose AI models.
  • general-purpose AI model with systemic risk: General-purpose AI models can be classified as having systemic risk based on specific criteria.
  • AI Office: The AI Office conducts evaluations of general-purpose AI models and can involve independent experts to carry out these evaluations.
  • Article 18: Procedural rights provided for in Article 18 apply mutatis mutandis to providers of general-purpose AI models.
  • Union's Ethics Guidelines for Trustworthy AI: Providers and deployers of all AI systems and models are encouraged to apply elements from the Union's Ethics Guidelines for Trustworthy AI on a voluntary basis.
  • transparency information: Providers of general-purpose AI models must provide transparency information.
  • This Regulation: The regulation addresses standardisation deliverables for energy efficient development of general-purpose AI models.
  • This Regulation: The regulation applies specific obligations to providers of general-purpose AI models from August 2025.
  • AI Regulation 2024/1689: The regulation lays down harmonised rules for placing general-purpose AI models on the market.
  • Article 2: Article 2 applies to general-purpose AI models placed on the market.
  • third party supplier: General-purpose AI models made available under free and open-source licenses are exempt from certain third-party supplier obligations.
  • energy efficiency: General-purpose AI models must be developed with energy-efficient approaches.
  • common specifications: General-purpose AI models must conform to common specifications to be presumed compliant.
  • Sections 2 and 3 of Chapter V: General-purpose AI models must comply with obligations set out in Sections 2 and 3 of Chapter V.
  • Obligations for providers of general-purpose AI models: The obligations apply to providers of general-purpose AI models.
  • Regulation 2024/1689: The regulation establishes obligations that apply to providers of general-purpose AI models.
  • free and open-source licence: Providers of AI models released under free and open-source licences are exempt from certain obligations, except for models with systemic risks.
  • AI Office: The AI Office invites all providers of general-purpose AI models to participate in and adhere to codes of practice.
  • AI Office: The AI Office invites providers of general-purpose AI models to adhere to codes of practice.
  • the Board: The Board provides advice on enforcement of rules for general-purpose AI models.
  • Member States: Member States receive opinions on qualified alerts regarding general-purpose AI models and monitor their enforcement.
  • scientific panel: The scientific panel contributes to development of tools and methodologies for evaluating capabilities of general-purpose AI models.
  • scientific panel: The scientific panel provides advice on the classification of general-purpose AI models with systemic risk.
  • Article 88: Article 88 establishes enforcement obligations applicable to providers of general-purpose AI models.
  • Commission: The Commission's authority to impose fines applies to providers of general-purpose AI models.
  • Regulation (EU) 2024/1689: The Artificial Intelligence Act applies to general-purpose AI models placed on the market.
  • technical documentation for providers of general-purpose AI models: Technical documentation requirements apply to general-purpose AI models.
  • ANNEX XII: ANNEX XII contains transparency information requirements that apply to general-purpose AI models.

general-purpose AI models that pose systemic risks ai_model

A subset of general-purpose AI models that present systemic risks and are subject to specific regulatory rules.
  • Regulation: The Regulation establishes specific rules for general-purpose AI models that pose systemic risks.
  • AI systems: Obligations for systemic risk models apply when these models are integrated into or form part of AI systems.

general-purpose AI models with systemic risk ai_model

General-purpose AI models designated based on specific criteria and thresholds that present significantly negative effects and are subject to enhanced compliance obligations under Article 55.
  • Regulation: General-purpose AI models with systemic risk should always be subject to relevant obligations under the Regulation.
  • European Commission: Commission has power to amend classification rules and designation criteria for systemic risk models.
  • Article 55: Article 55 establishes obligations that apply to providers of general-purpose AI models with systemic risk.
  • Article 51: Article 51 defines criteria for designation of general-purpose AI models with systemic risk.
  • Commission: The Commission determines whether a general-purpose AI model has systemic risk capabilities.
  • number of parameters: Number of parameters is a criterion for evaluating general-purpose AI models with systemic risk.
  • quality or size of data set: Data set quality or size is a criterion for evaluating general-purpose AI models with systemic risk.
  • computation used for training: Computation used for training is a criterion for evaluating general-purpose AI models with systemic risk.
  • input and output modalities: Input and output modalities are criteria for evaluating general-purpose AI models with systemic risk.
  • benchmarks and evaluations of capabilities: Benchmarks and capability evaluations are criteria for assessing general-purpose AI models with systemic risk.
  • high impact on internal market: High impact on internal market is a criterion for designating general-purpose AI models with systemic risk.
  • number of registered end-users: Number of registered end-users is a criterion for designating general-purpose AI models with systemic risk.

general-purpose AI system ai_system

An AI system based on a general-purpose AI model with the capability to serve a variety of purposes, either used directly by deployers or integrated into other AI systems, some of which may be classified as high-risk.
  • high-risk AI system: A general-purpose AI system can become a high-risk AI system if its intended purpose is modified for high-risk applications.
  • Regulation (EU) 2024/1689: The regulation applies to general-purpose AI systems used as high-risk systems or as components thereof.
  • general-purpose AI model: General-purpose AI models are integrated into or form part of general-purpose AI systems.
  • general-purpose AI model: A general-purpose AI system is based on a general-purpose AI model.
  • downstream provider: Downstream providers integrate general-purpose AI systems into their operations.

gestures data_category

Biometric data including movements of hands, arms, or head that can be captured and processed.
  • Regulation: The Regulation classifies gestures as a type of biometric data covered under its scope.

governance and enforcement technical_requirement

Infrastructure and procedures required to implement and enforce the regulation, operational by August 2026.
  • This Regulation: The regulation requires establishment of governance and enforcement infrastructure operational by August 2026.

guidelines for trustworthy AI documentation

Guidelines developed by the AI HLEG that establish seven non-binding ethical principles for AI systems to ensure trustworthiness and ethical soundness.

harmonised standard technical_requirement

Standards adopted by European standardisation organisations in accordance with Regulation (EU) No 1025/2012 that providers must comply with.

harmonised standards technical_requirement

Standards developed by European standardisation organisations and published in the Official Journal of the European Union that reflect the state of the art and grant providers presumption of conformity with regulatory requirements.
  • AI models: Providers of general-purpose AI models can demonstrate compliance using harmonised standards.
  • Regulation (EU) No 1025/2012: Harmonised standards are defined in Regulation (EU) No 1025/2012.
  • providers: Providers can comply with harmonised standards to demonstrate conformity with regulatory requirements.
  • provider's obligation to comply: Harmonised standards are a means for providers to demonstrate conformity with regulatory requirements.
  • Commission: The Commission issues standardisation requests for harmonised standards development.
  • high-risk AI system: High-risk AI systems should apply harmonised standards or use alternative means to ensure compliance with relevant requirements.
  • Article 41: Article 41 establishes requirements for common specifications related to harmonised standards.
  • fundamental rights concerns: Harmonised standards are evaluated against the criterion of adequately addressing fundamental rights concerns.
  • Article 40: Article 40 references harmonised standards that confer presumption of conformity for AI systems.
  • Regulation: The Regulation requires the development of harmonised standards and common specifications.
  • Official Journal of the European Union: References to harmonised standards are published in the Official Journal of the European Union.
  • Chapter III, Section 2: When harmonised standards are not applied, solutions must meet requirements in Chapter III, Section 2.

health and safety evaluation_criterion

Critical evaluation criteria for determining whether high-risk AI systems pose significant risks.
  • high-risk AI systems: High-risk AI systems are evaluated based on risks they pose to health and safety.

health and safety of persons evaluation_criterion

A key criterion for identifying high-risk AI systems and assessing their impact.
  • high-risk AI systems: High-risk AI systems are identified based on their potential harmful impact on health and safety of persons.

health and safety risks evaluation_criterion

Significant risks to health and safety and fundamental rights that must be identified and mitigated during AI system development and testing.
  • AI regulatory sandboxes: Significant risks to health and safety identified during AI system testing must result in adequate mitigation.

health, safety and fundamental rights data_category

Key protected interests that AI systems must not adversely impact, serving as primary evaluation criteria for risk assessment.
  • risk mitigation measures: Risk mitigation measures are designed to protect health, safety, and fundamental rights from AI system risks.

health, safety and fundamental rights evaluation_criterion

Key criteria for assessing the risk level of AI systems and determining regulatory requirements.
  • The Commission: The Commission must ensure amendments maintain the level of protection of health, safety and fundamental rights.

health, safety or fundamental rights evaluation_criterion

Key criteria used to assess potential risks related to certificate suspension or restriction.

high impact capabilities evaluation_criterion

A criterion for classifying general-purpose AI models as having systemic risk, evaluated using technical tools, methodologies, indicators, and benchmarks.

high impact on internal market evaluation_criterion

Criterion based on market reach, presumed when model made available to at least 10,000 registered business users in the Union.

high risk AI system evaluation_criterion

Classification for AI systems that have significant adverse impact on fundamental rights protected by the Charter.
  • AI system: AI systems with significant adverse impact on fundamental rights are classified as high risk.
  • Charter: Classification of AI systems as high risk is based on the extent of adverse impact on fundamental rights protected by the Charter.

high-impact capabilities evaluation_criterion

Capabilities in general-purpose AI models that match or exceed those recorded in the most advanced general-purpose AI models, evaluated using appropriate technical tools and methodologies.

high-impact capabilities technical_requirement

Capabilities in AI models that match or exceed the capabilities of the most advanced general-purpose AI models.

high-impact capabilities threshold evaluation_criterion

The applicable threshold that, when met by a general-purpose AI model, triggers the presumption that the model presents systemic risks.

High-Level Expert Group on Artificial Intelligence institution

Expert group that issued ethics guidelines for trustworthy AI.

high-quality data requirement technical_requirement

Requirement that AI systems used in law enforcement must be trained with high-quality data to avoid discriminatory or incorrect outcomes.

high-risk AI classification evaluation_criterion

Classification status for AI systems used in law enforcement contexts where accuracy, reliability, and transparency are particularly important.
  • AI systems in law enforcement: AI systems used in law enforcement are classified as high-risk due to their critical impact on fundamental rights.

high-risk AI system ai_system

An AI system classified as high-risk based on its intended purpose and potential to cause significant harm to health, safety, or fundamental rights, subject to enhanced regulatory requirements including conformity assessment, technical documentation, data governance, human oversight, and market surveillance.
  • Regulation: High-risk AI systems are subject to the requirements and restrictions established by the Regulation.
  • this Regulation: High-risk AI systems are classified and regulated under this Regulation with specific requirements and restrictions.
  • documentation of assessment: Providers must prepare documentation of assessment for high-risk AI systems before market placement.
  • remote biometric identification system: Remote biometric identification systems are classified as high-risk due to risks of bias and discriminatory effects.
  • biometric data: Biometric data as a special category of personal data is relevant to classification of high-risk AI systems.
  • Commission: Commission should provide guidelines specifying practical implementation of conditions for high-risk and non-high-risk AI systems.
  • Regulation (EU) 2016/679: High-risk AI systems must comply with Union data protection law including GDPR.
  • Regulation 2024/900: Regulation 2024/900 establishes requirements that apply to high-risk AI systems.
  • risk-management system: High-risk AI systems require implementation of a risk-management system throughout their lifecycle.
  • data sets: High-risk AI systems require high-quality data sets for training, validation, and testing.
  • discrimination: High-risk AI systems must not perpetuate discrimination prohibited by Union law.
  • transparency about original purpose of data collection: High-risk AI systems require transparency about the original purpose of data collection in their data governance practices.
  • cybersecurity requirements: High-risk AI systems must comply with cybersecurity requirements set out in the regulation.
  • conformity assessment procedure: The conformity assessment procedure applies to high-risk AI systems classified under the regulation.
  • This Regulation: This regulation classifies certain AI systems as high-risk based on defined criteria.
  • data poisoning: Data poisoning represents an AI-specific vulnerability that threatens high-risk AI systems.
  • adversarial attacks: Adversarial attacks represent an AI-specific vulnerability that threatens high-risk AI systems.
  • Regulation (EU) 2024/1689: The regulation establishes requirements and obligations for high-risk AI systems placed on the market.
  • provider: Providers are responsible for placing high-risk AI systems on the market or putting them into service and ensuring registration compliance.
  • quality management system: Providers of high-risk AI systems must establish a sound quality management system.
  • conformity assessment procedure: Providers must accomplish the required conformity assessment procedure for high-risk AI systems.
  • post-market monitoring system: Providers must establish and maintain a robust post-market monitoring system for high-risk AI systems in accordance with Article 72.
  • Regulation: The Regulation establishes specific requirements and obligations for high-risk AI systems.
  • general-purpose AI system: A general-purpose AI system can become a high-risk AI system if its intended purpose is modified for high-risk applications.
  • Article 16(2): Article 16(2) continues to apply to high-risk AI systems that are medical devices.
  • third parties: Third parties supply tools, services, components, or processes that are integrated into high-risk AI systems.
  • impact assessment: Impact assessments must be performed on high-risk AI systems to identify risks and mitigation measures.
  • data governance: High-risk AI systems must comply with data governance requirements set out in the Regulation.
  • internal market: Compliant high-risk AI systems bearing CE marking can move freely within the internal market.
  • Member States: Member States must not create unjustified obstacles to placing compliant high-risk AI systems on the market.
  • market surveillance authority: Market surveillance authorities are responsible for supervision of high-risk AI systems and enforce compliance by restricting or prohibiting non-compliant systems.
  • Annex III: High-risk AI systems are those referred to and categorized in Annex III that pose significant risk of harm.
  • risk management measures: Risk management measures are designed to address risks identified in high-risk AI systems.
  • residual risk: The residual risk of high-risk AI systems must be judged as acceptable.
  • data governance and management practices: Data governance practices apply to high-risk AI systems for their intended purpose.
  • data quality criteria: High-risk AI systems require training data that meets specified quality criteria.
  • bias detection and correction: High-risk AI systems are required to implement bias detection and correction measures.
  • data set characteristics: High-risk AI systems require data sets with specific characteristics tailored to their intended geographical, contextual, and functional settings.
  • technical documentation: Technical documentation must demonstrate compliance of high-risk AI systems with regulatory requirements.
  • Annex IV: High-risk AI systems must include minimum elements specified in Annex IV in their technical documentation.
  • testing data sets: Requirements for high-risk AI systems not using model training techniques apply to testing data sets.
  • Annex IV: High-risk AI systems are required to provide technical documentation as specified in Annex IV.
  • Union harmonisation legislation: High-risk AI systems related to products covered by Union harmonisation legislation must comply with those legal acts.
  • Article 12: High-risk AI systems must comply with record-keeping and logging obligations specified in Article 12.
  • Article 14: High-risk AI systems are subject to human oversight requirements established in Article 14.
  • training, validation and testing data sets: High-risk AI systems require documentation of specifications for training, validation and testing data sets.
  • conformity assessment: High-risk AI systems undergo conformity assessment by providers at the moment of initial deployment.
  • Article 43: High-risk AI systems are subject to the conformity assessment procedure referenced in Article 43.
  • EU declaration of conformity: Providers of high-risk AI systems must draw up an EU declaration of conformity.
  • CE marking: High-risk AI systems require CE marking to be affixed visibly, legibly and indelibly on the system or its packaging/documentation to indicate compliance.
  • accessibility requirements: High-risk AI systems must comply with accessibility requirements in accordance with EU Directives.
  • quality management system: The quality management system requirement applies to high-risk AI systems.
  • Article 49: Article 49 establishes registration requirements that apply to high-risk AI systems.
  • Article 20: Article 20 requires corrective actions and information provision for high-risk AI systems.
  • Section 2: High-risk AI systems must comply with the requirements set out in Section 2.
  • risk management system: High-risk AI systems are required to implement a risk management system as referenced in Article 9.
  • data management: High-risk AI systems require systems and procedures for comprehensive data management before placing on the market or putting into service.
  • harmonised standards: High-risk AI systems should apply harmonised standards or use alternative means to ensure compliance with relevant requirements.
  • serious incident reporting: High-risk AI systems must have procedures for reporting serious incidents in accordance with Article 73.
  • record-keeping: High-risk AI systems must maintain systems and procedures for record-keeping of all relevant documentation and information.
  • accountability framework: High-risk AI systems must establish an accountability framework setting out responsibilities of management and staff.
  • competent authority: Competent authorities oversee and request access to high-risk AI systems and their logs.
  • authorised representative: Authorised representatives perform tasks specified in their mandate related to high-risk AI systems.
  • Article 23: Article 23 governs the placement and conformity requirements for high-risk AI systems.
  • Regulation 2024/1689: High-risk AI systems are subject to the conformity and market placement requirements of the regulation.
  • importer: Importers are responsible for placing high-risk AI systems on the market in conformity with regulations.
  • notified body: Notified bodies evaluate the conformity of high-risk AI systems and issue certificates of conformity.
  • Article 24: Article 24 governs the handling and distribution of high-risk AI systems.
  • Article 79: Article 79 defines risk criteria for high-risk AI systems.
  • distributor: Distributors are required to ensure that storage and transport conditions do not jeopardise compliance of high-risk AI systems.
  • Article 79(1): The definition of risk for high-risk AI systems is provided in Article 79(1).
  • Article 25: Article 25 establishes responsibilities and obligations for high-risk AI systems along the value chain.
  • Article 6: High-risk AI systems are classified according to criteria in Article 6.
  • deployer: Deployers must monitor the operation of high-risk AI systems and verify their registration before use.
  • Article 72: Deployer obligations regarding high-risk AI systems are referenced in Article 72.
  • monitoring obligation: High-risk AI systems are subject to monitoring obligations that can be fulfilled through compliance with financial services law.
  • Article 71: High-risk AI systems must be registered in the EU database referred to in Article 71.
  • fundamental rights impact assessment: High-risk AI systems are subject to mandatory fundamental rights impact assessments before first use.
  • human oversight measures: High-risk AI systems require implementation of human oversight measures as part of deployment.
  • notified body: Notified bodies evaluate the conformity of high-risk AI systems and issue certificates of conformity.
  • national competent authority: National competent authorities confirm risk assessments and manage certificate validity for high-risk AI systems.
  • energy efficiency: High-risk AI systems must meet energy efficiency requirements during their lifecycle.
  • conformity assessment procedure: High-risk AI systems are subject to conformity assessment procedures as specified in the regulation.
  • Union harmonisation legislation: Union harmonisation legislation listed in Annex I applies to certain high-risk AI systems.
  • Section 2: Requirements in Section 2 apply to high-risk AI systems covered by Union harmonisation legislation.
  • authorization: High-risk AI systems require authorization before being put into service.
  • Commission: The Commission decides whether authorizations for high-risk AI systems are justified.
  • EU declaration of conformity: The EU declaration of conformity identifies and documents compliance of the high-risk AI system.
  • CE marking: High-risk AI systems require CE marking to be affixed visibly, legibly and indelibly on the system or its packaging/documentation to indicate compliance.
  • Article 6(3): Article 6(3) establishes criteria for determining whether an AI system is high-risk or not.
  • Annex III: High-risk AI systems are those referred to and categorized in Annex III that pose significant risk of harm.
  • serious incident: High-risk AI systems are subject to serious incident reporting obligations.
  • market surveillance authority: Market surveillance authorities are responsible for supervision of high-risk AI systems and enforce compliance by restricting or prohibiting non-compliant systems.
  • Chapter III, Section 2: High-risk AI systems must comply with requirements set out in Chapter III, Section 2.
  • Article 82: Article 82 establishes requirements and procedures applicable to high-risk AI systems.
  • Article 83: Article 83 establishes formal non-compliance procedures applicable to high-risk AI systems.
  • technical documentation: High-risk AI systems must have technical documentation available for compliance verification.
  • Article 86: Article 86 applies to decisions taken on the basis of output from high-risk AI systems listed in Annex III.
  • Annex III: High-risk AI systems are those referred to and categorized in Annex III that pose significant risk of harm.
  • right to explanation of individual decision-making: The right to explanation applies to decisions made using high-risk AI systems.

high-risk AI system evaluation_criterion

Classification of AI systems that pose significant risks and are subject to stringent regulatory requirements.
  • AI system: AI systems may be classified as high-risk based on evaluation criteria.

high-risk AI systems ai_system

AI systems classified as high-risk under the Regulation that require mandatory compliance with specific requirements including conformity assessment, technical documentation, transparency obligations, human oversight measures, fundamental rights impact assessments, and registration in the EU database before being placed on the market.
  • Union legal framework on AI: The Union legal framework establishes rules that apply to high-risk AI systems.
  • Regulation 2024/1689: The regulation establishes requirements, classifications, and specific responsibilities for the use and deployment of high-risk AI systems.
  • Regulation: The Regulation establishes requirements for the development, assessment, testing, and placement on the market of high-risk AI systems, including explanation rights for affected persons.
  • Regulation (EU) 2017/745: High-risk AI systems may be subject to Regulation (EU) 2017/745 when applicable to the product.
  • Regulation (EU) 2017/746: High-risk AI systems may be subject to Regulation (EU) 2017/746 when applicable to the product.
  • Directive 2006/42/EC: High-risk AI systems may be subject to Directive 2006/42/EC when applicable to the product.
  • product compliance with Union harmonisation legislation: High-risk AI systems are subject to compliance requirements with Union harmonisation legislation.
  • health and safety of persons: High-risk AI systems are identified based on their potential harmful impact on health and safety of persons.
  • Regulation (EC) No 300/2008: High-risk AI systems that are safety components fall within the scope of this regulation.
  • Regulation (EU) No 167/2013: High-risk AI systems that are safety components fall within the scope of this regulation.
  • Regulation (EU) No 168/2013: High-risk AI systems that are safety components fall within the scope of this regulation.
  • Directive 2014/90/EU: High-risk AI systems that are safety components fall within the scope of this directive.
  • Directive (EU) 2016/797: High-risk AI systems that are safety components fall within the scope of this directive.
  • Regulation (EU) 2018/858: High-risk AI systems that are safety components fall within the scope of this regulation.
  • Regulation (EU) 2018/1139: High-risk AI systems that are safety components fall within the scope of this regulation.
  • mandatory requirements for high-risk AI systems: High-risk AI systems are subject to mandatory requirements laid down in the Regulation.
  • Union harmonisation legislation: High-risk AI systems that are safety components or products must comply with Union harmonisation legislation listed in Annex I.
  • conformity assessment procedure: High-risk AI systems are subject to conformity assessment procedures, which may be derogated under exceptional circumstances.
  • Regulation: The Regulation establishes requirements for the development, assessment, testing, and placement on the market of high-risk AI systems, including explanation rights for affected persons.
  • delegated acts: Delegated acts amend the list of high-risk AI systems to account for technological development.
  • narrow procedural task: High-risk AI systems performing narrow procedural tasks may not pose significant risk and are subject to limited risk assessment.
  • profiling: High-risk AI systems are subject to profiling considerations as defined in multiple EU regulations.
  • Regulation (EU) 2016/679: High-risk AI systems must comply with profiling definitions and conditions laid down in Regulation (EU) 2016/679 when processing personal data.
  • Directive (EU) 2016/680: High-risk AI systems must comply with profiling definitions and conditions laid down in Directive (EU) 2016/680.
  • Regulation (EU) 2018/1725: High-risk AI systems must comply with profiling definitions and conditions laid down in Regulation (EU) 2018/1725 when processing personal data.
  • AI systems in education: AI systems used in education or vocational training for determining access, admission, evaluating outcomes, or monitoring behavior are classified as high-risk.
  • right to education and training: High-risk AI systems in education may violate the right to education and training when improperly designed and used.
  • right not to be discriminated against: High-risk AI systems may violate the right not to be discriminated against and perpetuate historical discrimination patterns.
  • AI systems for credit evaluation: Credit evaluation AI systems are classified as high-risk due to their impact on access to financial resources and essential services.
  • AI systems for health and life insurance: Health and life insurance risk assessment AI systems can be classified as high-risk due to significant impact on persons' livelihood.
  • AI systems for emergency call evaluation: Emergency call evaluation and dispatch AI systems are classified as high-risk due to critical decisions affecting life, health, and property.
  • AI systems for fraud detection: AI systems for fraud detection provided by Union law should not be considered high-risk under this Regulation.
  • discrimination: High-risk AI systems must be evaluated for potential discriminatory impacts and perpetuation of historical discrimination patterns.
  • fundamental rights: High-risk AI systems must comply with requirements to protect fundamental rights.
  • Regulation 2024/1689: The regulation establishes requirements, classifications, and specific responsibilities for the use and deployment of high-risk AI systems.
  • law enforcement authorities: High-risk AI systems are intended to be used by or on behalf of law enforcement authorities.
  • Union institutions, bodies, offices, or agencies: High-risk AI systems are intended to be used by Union institutions in support of law enforcement.
  • Union and national law: High-risk AI systems must be permitted under relevant Union and national law.
  • tax and customs authorities: AI systems used by tax and customs authorities should not be classified as high-risk law enforcement systems.
  • financial intelligence units: AI systems used by financial intelligence units for anti-money laundering should not be classified as high-risk law enforcement systems.
  • AI systems for judicial decision-making: AI systems used in judicial decision-making are classified as high-risk systems.
  • AI systems for alternative dispute resolution: AI systems used in alternative dispute resolution with legal effects are classified as high-risk.
  • AI systems for election influence: AI systems intended to influence elections or referenda are classified as high-risk.
  • mandatory requirements: High-risk AI systems must comply with mandatory requirements to mitigate risks and ensure trustworthiness.
  • Regulation: The Regulation establishes requirements for the development, assessment, testing, and placement on the market of high-risk AI systems, including explanation rights for affected persons.
  • risk-management system: The risk-management system applies specifically to high-risk AI systems to ensure their safety and compliance.
  • data governance: High-risk AI systems must comply with data governance requirements.
  • Regulation: The Regulation establishes requirements for the development, assessment, testing, and placement on the market of high-risk AI systems, including explanation rights for affected persons.
  • European common data spaces: European common data spaces provide access to high-quality data for training, validation and testing of high-risk AI systems.
  • European health data space: The European health data space facilitates access to health data for training AI algorithms.
  • Union data protection law: High-risk AI systems must comply with principles of data minimisation and data protection by design and by default.
  • bias detection and correction: Providers of high-risk AI systems must implement bias detection and correction measures.
  • traceability: High-risk AI systems must maintain comprehensible information on their development and performance for traceability.
  • post market monitoring: High-risk AI systems must be subject to post market monitoring throughout their lifetime.
  • technical documentation: Providers of high-risk AI systems are required to maintain technical documentation containing information necessary to assess compliance.
  • automatic recording of events: High-risk AI systems must technically allow for automatic recording of events through logs over their lifetime.
  • transparency requirement: High-risk AI systems are subject to transparency requirements before being placed on the market or put into service.
  • instructions of use: High-risk AI systems must be accompanied by instructions of use containing relevant information about their characteristics, limitations, and appropriate use.
  • post market monitoring: High-risk AI systems are subject to post market monitoring to verify compliance and track operations.
  • cybersecurity requirements: High-risk AI systems must comply with cybersecurity requirements to ensure resilience against cyberattacks.
  • robustness and accuracy: High-risk AI systems are required to meet robustness and accuracy standards.
  • Regulation on horizontal cybersecurity requirements for products with digital elements: High-risk AI systems can demonstrate compliance with cybersecurity requirements by fulfilling the essential requirements of the horizontal cybersecurity regulation.
  • training data sets: Training data sets used in high-risk AI systems are vulnerable to cyberattacks such as data poisoning.
  • trained models: Trained models within high-risk AI systems are vulnerable to adversarial attacks and membership inference attacks.
  • this Regulation: The Regulation applies to high-risk AI systems placed on the market or put into service in the Union.
  • authorised representative: The authorised representative ensures compliance of high-risk AI systems placed on the market or put into service in the Union.
  • Regulation: High-risk AI systems are subject to the requirements and conformity obligations established by the Regulation.
  • conformity assessment: High-risk AI systems must undergo conformity assessment prior to market placement or service deployment.
  • provider: Providers are responsible for placing high-risk AI systems on the market or into service and registering them.
  • Regulation: High-risk AI systems are subject to the requirements and conformity obligations established by the Regulation.
  • Article 13: Article 13 of Directive (EU) 2016/680 governs the implementation of obligations for high-risk AI systems used for law enforcement.
  • Article 13 of Directive (EU) 2016/680: High-risk AI systems used for law enforcement must comply with Article 13 regarding explanation rights.
  • Regulation (EU) 2016/679: High-risk AI systems must comply with profiling definitions and conditions laid down in Regulation (EU) 2016/679 when processing personal data.
  • conformity assessment: High-risk AI systems must undergo conformity assessment prior to market placement or service deployment.
  • data governance requirement: High-risk AI systems should comply with data governance requirements when using relevant geographical and contextual data.
  • robustness and accuracy requirement: High-risk AI systems are subject to robustness and accuracy requirements.
  • New Legislative Framework: High-risk AI systems related to products covered by existing Union harmonisation legislation must comply with both frameworks.
  • CE marking: High-risk AI systems that comply with Regulation requirements must bear the CE marking.
  • market surveillance authorities: Market surveillance authorities monitor and oversee compliance of high-risk AI systems, conduct joint activities to promote compliance, and can authorize market placement under exceptional circumstances.
  • law enforcement authorities: Law enforcement authorities may put specific high-risk AI systems into service without prior market surveillance authorization in justified situations.
  • civil protection authorities: Civil protection authorities may deploy high-risk AI systems without authorization in duly justified situations.
  • EU database: High-risk AI systems must be registered in the EU database at national level.
  • public authorities: Public authorities deploying high-risk AI systems must register them in the EU database.
  • AI regulatory sandbox: High-risk AI systems may be tested within an AI regulatory sandbox regime.
  • real-world testing plan: Testing of high-risk AI systems must be documented in a real-world testing plan.
  • informed consent: Informed consent is required for natural persons participating in testing of high-risk AI systems.
  • vulnerable groups: Additional safeguards are required for vulnerable groups during AI system testing.
  • quality management system: Quality management system is required for high-risk AI systems.
  • providers: Providers place high-risk AI systems on the market and are responsible for their monitoring.
  • this Regulation: The Regulation applies to high-risk AI systems placed on the market or put into service in the Union.
  • market surveillance authority: High-risk AI systems are subject to evaluation and monitoring by market surveillance authorities.
  • general-purpose AI models: High-risk AI systems can be built on general-purpose AI models.
  • codes of conduct: Mandatory requirements applicable to high-risk AI systems are referenced as models for voluntary codes of conduct for non-high-risk systems.
  • Regulation (EU) 2024/1689: Regulation (EU) 2024/1689 establishes requirements that high-risk AI systems must comply with before being placed on the market.
  • quality management system: Providers of high-risk AI systems must implement a quality management system.
  • This Regulation: The Regulation applies to high-risk AI systems placed on the market or put into service in the Union.
  • This Regulation: High-risk AI systems are subject to compliance obligations under the regulation.
  • AI Regulation 2024/1689: The regulation establishes specific requirements for high-risk AI systems and obligations for their operators.
  • Article 6(1): Article 6(1) establishes the classification of high-risk AI systems.
  • Article 57: Article 57 applies to high-risk AI systems insofar as requirements are integrated into Union harmonisation legislation.
  • Regulation /2024/1689/oj: The regulation applies to high-risk AI systems even when released under free and open-source licenses.
  • Article 6: Article 6 defines the classification rules and conditions for determining whether an AI system is high-risk.
  • third-party conformity assessment: High-risk AI systems are required to undergo third-party conformity assessment before placing on market or putting into service.
  • Annex III: High-risk AI systems are classified and listed in Annex III of the Regulation.
  • delegated acts: Delegated acts apply to high-risk AI systems by amending their classification and use-cases.
  • human override capability: High-risk AI systems must allow for human override of decisions or recommendations to prevent harm.
  • Article 8: High-risk AI systems shall comply with requirements laid down in Article 8 and Section 2.
  • Article 9: High-risk AI systems must implement the risk management system established in Article 9.
  • fundamental rights: High-risk AI systems are evaluated based on risks they pose to fundamental rights.
  • health and safety: High-risk AI systems are evaluated based on risks they pose to health and safety.
  • Union harmonisation legislation: High-risk AI systems that are safety components or products must comply with Union harmonisation legislation listed in Annex I.
  • Risk management system: The risk management system applies to and must be implemented for high-risk AI systems throughout their lifecycle.
  • Article 15: Article 15 establishes accuracy, robustness, and cybersecurity requirements that apply to high-risk AI systems.
  • dual verification requirement: The dual verification requirement mandates that high-risk AI systems used for identification must have their output verified by at least two natural persons.
  • Annex III: High-risk AI systems are classified and listed in Annex III of the Regulation.
  • stop button procedure: High-risk AI systems must be equipped with a stop button or similar procedure allowing safe interruption and halt.
  • Article 16: Article 16 establishes obligations that apply to providers of high-risk AI systems.
  • cybersecurity: High-risk AI systems must be resilient against unauthorized alterations through cybersecurity measures.
  • data poisoning: Technical solutions for high-risk AI systems must include measures to prevent and detect data poisoning attacks.
  • model poisoning: Technical solutions for high-risk AI systems must include measures to address model poisoning attacks.
  • adversarial examples: Technical solutions must include measures to prevent and control adversarial examples and model evasion.
  • quality management system: Providers of high-risk AI systems must implement a quality management system.
  • EU declaration of conformity: Providers must draw up an EU declaration of conformity for high-risk AI systems.
  • CE marking: Providers must affix CE marking to high-risk AI systems.
  • quality management system: Quality management systems are required to apply to high-risk AI systems.
  • national competent authorities: High-risk AI systems are subject to oversight by national competent authorities who have access to required documentation.
  • notified bodies: Notified bodies approve changes and issue decisions regarding high-risk AI systems.
  • Article 19: Article 19 establishes requirements for maintaining automatically generated logs from high-risk AI systems.
  • notified bodies: Notified bodies issue decisions and documents related to changes and compliance of high-risk AI systems.
  • Providers of high-risk AI systems: Providers place high-risk AI systems on the market or put them into service.
  • logs automatically generated by high-risk AI systems: High-risk AI systems maintain automatically generated logs as required documentation.
  • product manufacturer: Product manufacturers place high-risk AI systems on the market as safety components of products.
  • third party supplier: Third party suppliers of tools, services, and components used in high-risk AI systems are subject to regulatory obligations.
  • Article 26: High-risk AI systems are subject to the obligations established in Article 26.
  • Annex III: High-risk AI systems are classified and listed in Annex III of the Regulation.
  • Directive (EU) 2016/680: High-risk AI systems must comply with profiling definitions and conditions laid down in Directive (EU) 2016/680.
  • notified bodies: Notified bodies are required to verify the conformity of high-risk AI systems.
  • Article 34: Article 34 applies to verification of conformity of high-risk AI systems.
  • notified body: Notified bodies conduct conformity assessment for high-risk AI systems.
  • notified body: Notified bodies issue certificates for high-risk AI systems and must ensure their continuing conformity.
  • Article 38: Article 38 establishes coordination requirements specifically for high-risk AI systems.
  • common specifications: High-risk AI systems must conform to common specifications to be presumed compliant.
  • Section 2 of this Chapter: High-risk AI systems must comply with requirements set out in Section 2.
  • Article 10(4): High-risk AI systems trained and tested on specific data are presumed to comply with Article 10(4).
  • Article 15: High-risk AI systems certified under cybersecurity schemes are presumed to comply with cybersecurity requirements in Article 15.
  • Article 43: Article 43 establishes conformity assessment procedures applicable to high-risk AI systems.
  • cybersecurity requirements: High-risk AI systems must comply with cybersecurity requirements set out in Article 15.
  • notified bodies: High-risk AI systems may be assessed by notified bodies as part of conformity assessment procedures.
  • Article 97: Article 97 empowers the Commission to amend provisions regarding high-risk AI systems referred to in Annex III.
  • Article 46: Article 46 applies to specific high-risk AI systems that may be placed on the market under exceptional circumstances.
  • market surveillance authority: Market surveillance authorities may authorize the placing on the market or putting into service of specific high-risk AI systems under exceptional reasons.
  • law-enforcement authorities: Law-enforcement authorities may put specific high-risk AI systems into service without prior authorization in urgent public security situations.
  • civil protection authorities: Civil protection authorities may put specific high-risk AI systems into service without prior authorization in urgent public security situations.
  • Article 60: Article 60 establishes rules for testing high-risk AI systems in real world conditions.
  • AI regulatory sandboxes: High-risk AI systems can be tested in AI regulatory sandboxes under specific conditions.
  • Article 71(4): High-risk AI systems must be registered according to the requirements specified in Article 71(4).
  • ethical review: Testing of high-risk AI systems is subject to ethical review required by Union or national law.
  • EU database: High-risk AI systems must be registered in the EU database at national level.
  • Article 49(5): High-risk AI systems must comply with registration requirements in Article 49(5).
  • testing in real world conditions: High-risk AI systems are subject to testing in real world conditions under market surveillance.
  • post-market monitoring system: The post-market monitoring system applies to high-risk AI systems throughout their lifetime.
  • Chapter III, Section 2: High-risk AI systems must maintain continuous compliance with requirements set out in Chapter III, Section 2.
  • Article 73: Article 73 establishes reporting requirements that apply to high-risk AI systems placed on the Union market.
  • Section A of Annex I: High-risk AI systems covered by Union harmonisation legislation listed in Section A of Annex I may integrate existing post-market monitoring systems.
  • serious incident: High-risk AI systems are subject to serious incident notification obligations.
  • Regulation (EU) 2017/745: Regulation (EU) 2017/745 covers medical devices that may incorporate high-risk AI systems as safety components.
  • Regulation (EU) 2017/746: Regulation (EU) 2017/746 covers in vitro diagnostic devices that may incorporate high-risk AI systems as safety components.
  • Article 77: Article 77 establishes regulatory powers and obligations regarding high-risk AI systems.
  • Annex IV: Technical documentation requirements for high-risk AI systems are specified in Annex IV.
  • law enforcement authorities: Law enforcement authorities can be providers of high-risk AI systems.
  • data protection obligation: High-risk AI systems are subject to data protection and confidentiality obligations.
  • Article 95: Article 95 applies to AI systems other than high-risk AI systems, establishing voluntary requirements.
  • codes of conduct: Codes of conduct apply to AI systems, including high-risk AI systems.
  • public authorities: High-risk AI systems intended to be used by public authorities must comply with Regulation requirements by 2 August 2030.
  • Article 49: Article 49 establishes registration requirements that apply to high-risk AI systems.
  • Article 49: High-risk AI systems are subject to registration requirements specified in Article 49.
  • notified body: Notified bodies conduct audits and tests of high-risk AI systems.
  • Article 49(2): High-risk AI systems must comply with registration requirements specified in Article 49(2).
  • Article 6(3): High-risk AI systems may be reclassified as not-high-risk based on conditions in Article 6(3).
  • Electronic instructions for use: Electronic instructions for use are not required for high-risk AI systems in law enforcement, migration, asylum and border control.
  • ANNEX IX: ANNEX IX specifies information requirements for registration of high-risk AI systems.

high-risk AI systems evaluation_criterion

Classification for AI systems with potentially significant impact on persons' livelihood, fundamental rights, democracy, rule of law, individual freedoms, and fair trial rights.
  • AI systems for benefit determination: AI systems used for determining public assistance benefits are classified as high-risk due to their significant impact on persons' livelihood and fundamental rights.
  • AI systems for credit scoring: AI systems evaluating credit score or creditworthiness are classified as high-risk systems.
  • Regulation: The Regulation establishes requirements for the development, assessment, testing, and placement on the market of high-risk AI systems, including explanation rights for affected persons.
  • AI systems for administration of justice and democratic processes: AI systems for administration of justice and democratic processes are classified as high-risk due to their significant impact on democracy and rule of law.

high-risk areas headings documentation

Headings in the annex to the regulation defining high-risk areas, subject to periodic evaluation for amendment.

high-risk classification evaluation_criterion

Classification assigned to AI systems that pose significant risks to safety, health, fundamental rights, or social and economic activities, including those used in migration, asylum, and border control management.

high-risk monitoring mechanisms technical_requirement

Systems and procedures to identify risks to data subjects' rights and freedoms during sandbox experimentation.

high-risk use evaluation_criterion

A classification for AI system uses that present elevated risks, listed in an annex to the AI regulation.
  • AI system: An AI system can be classified as high-risk use when deployed in contexts listed in the regulation's annex.

high-risk uses evaluation_criterion

Categories of AI system applications listed in an annex to a regulation that are considered high-risk.
  • AI system: AI systems may be classified as high-risk based on their intended use and characteristics.

Horizon Europe legislative_procedure

A Union funding programme implemented to support achievement of the regulation's objectives.
  • this Regulation: Horizon Europe funding programme should contribute to achieving the regulation's objectives.

human agency and oversight evaluation_criterion

An ethical principle requiring that AI systems are developed as tools serving people, respecting human dignity and autonomy, with appropriate human control and oversight.

human override capability technical_requirement

The possibility for humans to override AI system decisions or recommendations to prevent potential harm.
  • high-risk AI systems: High-risk AI systems must allow for human override of decisions or recommendations to prevent harm.

human oversight legal_obligation

A requirement that high-risk AI systems be designed with appropriate human-machine interface tools and that deployers assign competent natural persons with necessary training and authority to conduct effective oversight during use.

human oversight technical_requirement

A governance arrangement and risk mitigation measure requiring human involvement in the operation of high-risk AI systems according to instructions of use.
  • deployer: Deployers should implement human oversight arrangements as a governance measure to mitigate risks to fundamental rights.

Human oversight measures technical_requirement

Technical and procedural measures required to be implemented by providers and deployers to ensure natural persons can oversee high-risk AI system functioning and intervene when necessary.
  • High-risk AI systems: High-risk AI systems must be designed with appropriate human oversight measures to facilitate interpretation of outputs by deployers.
  • Biometric identification systems: Biometric identification systems require enhanced human oversight with verification by at least two natural persons.
  • Provider: Providers must identify appropriate human oversight measures before placing high-risk AI systems on the market.
  • Article 14: Article 14 requires assessment and implementation of human oversight measures for AI systems.
  • Article 13(3), point (d): Article 13(3), point (d) requires technical measures to facilitate interpretation of AI system outputs.

human oversight measures legal_obligation

Requirements for natural persons to effectively oversee high-risk AI systems during use, including technical measures and human-machine interface tools to facilitate interpretation of system outputs.
  • Article 14: Article 14 establishes requirements for human oversight measures of high-risk AI systems.
  • Article 14: Human oversight measures are required in accordance with Article 14.

Human-machine interface tools technical_requirement

Tools and mechanisms that enable natural persons to effectively oversee and interact with high-risk AI systems during operation.
  • High-risk AI system: High-risk AI systems must be designed with appropriate human-machine interface tools for oversight.

identity checks legal_obligation

Checks conducted by authorities to verify the identity of persons in accordance with Union and national law.

immigration authorities market_actor

Authorities responsible for immigration matters and conducting identity checks.
  • information systems: Information systems are used by immigration authorities for identity identification.

impact assessment legal_obligation

A mandatory evaluation process that deployers must conduct to identify risks of harm to fundamental rights, relevant processes, affected groups, and mitigation measures for high-risk AI systems.
  • deployer: Deployers are required to conduct impact assessments for high-risk AI systems before deployment.
  • high-risk AI system: Impact assessments must be performed on high-risk AI systems to identify risks and mitigation measures.
  • fundamental rights: Impact assessments identify specific risks of harm likely to impact the fundamental rights of affected persons.

impact assessments documentation

Assessments conducted with relevant stakeholders to evaluate risks and design measures for AI systems.

Impartiality safeguards technical_requirement

Procedures and structures notified bodies must implement to ensure independence, objectivity and impartiality.
  • Notified bodies: Notified bodies must document and implement procedures to safeguard impartiality.

implementing acts regulation

Acts adopted by the Commission to establish common specifications and detailed arrangements for AI regulatory sandboxes, including eligibility criteria and procedures.
  • Article 98(2): Implementing acts are adopted in accordance with the examination procedure referred to in Article 98(2).
  • Official Journal of the European Union: Implementing acts are published in the Official Journal of the European Union.
  • Commission: The Commission adopts implementing acts to establish common specifications.
  • Commission: The Commission adopts implementing acts to provide common rules when codes of practice are inadequate and to specify detailed arrangements for AI regulatory sandboxes.
  • AI regulatory sandboxes: Implementing acts establish common principles and detailed arrangements for the operation of AI regulatory sandboxes.
  • eligibility and selection criteria: Implementing acts require transparent and fair eligibility and selection criteria for participation in AI regulatory sandboxes.
  • sandbox plan: Implementing acts require sandbox plans as part of participation procedures in AI regulatory sandboxes.
  • exit report: Implementing acts require exit reports when participants exit or terminate participation in AI regulatory sandboxes.

important and critical products with digital elements ai_system

Products with digital elements designated as critical infrastructure requiring enhanced conformity assessment provisions.

importer market_actor

A natural or legal person located in the Union that places on the market a high-risk AI system bearing the name or trademark of a third-country entity and verifies its conformity.
  • Regulation: The Regulation applies to importers who may assume provider obligations under certain conditions.
  • provider: An importer can be considered a provider of high-risk AI systems under Article 25 circumstances.
  • Article 3: Article 3 defines the role and responsibilities of an importer.
  • placing on the market: Importers place AI systems on the market bearing third-country trademarks.
  • operator: Operator is defined as including importers among other market actors.
  • Article 23: Article 23 establishes obligations that apply to importers of high-risk AI systems.
  • Article 43: Importers are required to verify that the conformity assessment procedure referred to in Article 43 has been carried out.
  • Article 11: Importers must verify that technical documentation has been drawn up in accordance with Article 11.
  • CE marking: Importers must ensure the high-risk AI system bears the required CE marking.
  • EU declaration of conformity: Importers must ensure the system is accompanied by the EU declaration of conformity and retain copies for 10 years after market placement.
  • instructions for use: Importers must ensure the system is accompanied by instructions for use.
  • Article 22(1): Importers must verify that the provider has appointed an authorised representative in accordance with Article 22(1).
  • high-risk AI system: Importers are responsible for placing high-risk AI systems on the market in conformity with regulations.
  • Section 2: Importers must ensure high-risk AI systems comply with requirements set out in Section 2.
  • provider: An importer can be considered a provider of high-risk AI systems under Article 25 circumstances.
  • market surveillance authorities: Importers must inform market surveillance authorities of non-conformity or risk issues with high-risk AI systems.
  • technical documentation: Importers must ensure technical documentation is made available to competent authorities upon request.
  • Article 23: Importers must comply with obligations laid down in Article 23, point (3).
  • competent authorities: Importers must cooperate with competent authorities regarding high-risk AI systems placed on the market.

Importer obligations legal_obligation

Obligations imposed on importers pursuant to Article 23, subject to administrative fines for non-compliance.
  • Article 23: Importer obligations are established in Article 23.
  • SMEs: SMEs are subject to reduced administrative fines for non-compliance with importer obligations.

importers market_actor

Market actors that import high-risk AI systems and must be informed of corrective actions.

importers and distributors market_actor

Market actors involved in importing and distributing AI systems in the value chain with specific compliance obligations.
  • this Regulation: The Regulation clarifies specific obligations for importers and distributors in the AI value chain.
  • Article 2: Article 2 applies to importers and distributors of AI systems.

In-built operational constraints technical_requirement

System constraints that cannot be overridden by the AI system itself and ensure the system remains responsive to human operators.
  • High-risk AI systems: High-risk AI systems should be subject to in-built operational constraints that cannot be overridden by the system itself.

incident reporting legal_obligation

Requirement to track, document, and report serious incidents and corrective measures to the AI Office and national competent authorities without undue delay.
  • Article 55: Article 55 requires providers to track, document, and report serious incidents to the AI Office.
  • AI Office: Incident reporting obligation requires notification to the AI Office.

Independence requirement technical_requirement

Requirement that notified bodies must be independent from providers and competitors of high-risk AI systems.
  • Notified bodies: Notified bodies must comply with independence requirements from providers and competitors.

independent administrative authority institution

An independent administrative body of a Member State authorized to grant binding authorization for the use of real-time remote biometric identification systems.

independent administrative authority legislative_body

An independent administrative body whose decision is binding and can grant authorization for biometric system use.

independent audit report documentation

Required documentation for ensuring full functionality of the EU database upon deployment.
  • EU database: An independent audit report is required to ensure full functionality of the EU database.

independent experts institution

External experts involved in the evaluation of general-purpose AI models.
  • Commission: The Commission's implementing acts set out detailed arrangements for involving independent experts in evaluations.

information and documentation documentation

Records and documentation that distributors must provide to competent authorities to demonstrate conformity of high-risk AI systems.
  • distributor: Distributors must provide all information and documentation regarding their actions to demonstrate conformity.

information obligation for emotion recognition and biometric categorisation legal_obligation

A legal requirement for deployers to inform natural persons exposed to emotion recognition or biometric categorisation systems about their operation.
  • emotion recognition system: Deployers of emotion recognition systems are subject to the obligation to inform natural persons exposed to the system.
  • biometric categorisation system: Deployers of biometric categorisation systems are subject to the obligation to inform natural persons exposed to the system.
  • Regulation (EU) 2016/679: The information obligation requires compliance with GDPR for personal data processing.
  • Regulation (EU) 2018/1725: The information obligation requires compliance with EU regulation on data protection by Union institutions.
  • Directive (EU) 2016/680: The information obligation requires compliance with the law enforcement directive on data protection.

information requirement legal_obligation

An obligation to provide information to workers and their representatives about the planned deployment of high-risk AI systems at the workplace.
  • Regulation: The information requirement is laid down in the Regulation.
  • fundamental rights protection: The information requirement is ancillary and necessary to the objective of protecting fundamental rights.
  • deployers: Deployers are required to provide information to natural persons about high-risk AI systems and their rights.

information systems ai_system

Systems used by law enforcement, border control, immigration or asylum authorities to identify persons during identity checks in accordance with Union or national law.
  • Regulation: The Regulation governs the use of information systems by authorities for identity identification purposes.
  • law enforcement authorities: Information systems are used by law enforcement authorities to identify persons during identity checks.
  • border control authorities: Information systems are used by border control authorities for identity identification.
  • immigration authorities: Information systems are used by immigration authorities for identity identification.
  • asylum authorities: Information systems are used by asylum authorities for identity identification.

informed consent legal_obligation

A requirement for freely-given, specific, and voluntary consent from natural persons to participate in real-world AI system testing, with exceptions for law enforcement activities.

innovative AI systems ai_system

Advanced AI systems undergoing development, training, testing and validation within regulatory sandboxes to ensure compliance with regulations before market placement.
  • AI regulatory sandbox: AI regulatory sandboxes provide controlled environments for development, training, testing, and validation of innovative AI systems.
  • Regulation: Innovative AI systems must ensure compliance with the primary Regulation during sandbox testing.
  • Union law: Innovative AI systems must comply with relevant Union law in addition to the primary Regulation.
  • national law: Innovative AI systems must comply with relevant national law of Member States.

innovators market_actor

Providers and prospective providers of AI systems seeking to develop and test innovative solutions within regulatory sandboxes.
  • AI regulatory sandbox: Innovators and prospective providers participate in AI regulatory sandboxes to address legal uncertainty in AI development.

input and output modalities evaluation_criterion

A criterion for assessing systemic risk capabilities of general-purpose AI models, including text-to-text, text-to-image, and multi-modality models with state-of-the-art thresholds.

input data data_category

Data provided to or acquired by an AI system that must be relevant and sufficiently representative for the intended purpose.
  • deployer: Deployers that exercise control over input data must ensure it is relevant and sufficiently representative for the intended purpose.

input/output modality technical_requirement

The modality (e.g. text, image) and format of inputs and outputs of the general-purpose AI model.

instructions for use documentation

Documentation provided by providers to deployers in easily understood language containing information about intended purpose, known and foreseeable risks, conditions of use, characteristics, capabilities, limitations, and performance metrics of high-risk AI systems.
  • provider: Providers must document and provide instructions for use that inform deployers of known and foreseeable risks.
  • deployer: Instructions for use guide deployers in understanding risks and using high-risk AI systems appropriately.
  • provider: Providers issue instructions for use containing information relevant to impact assessment and risk mitigation.
  • deployer: Deployers should take into account information from instructions for use when performing impact assessments and implementing human oversight.
  • importer: Importers must ensure the system is accompanied by instructions for use.
  • distributor: Distributors must ensure high-risk AI systems are accompanied by instructions for use.
  • deployers of high-risk AI systems: Deployers must use high-risk AI systems in accordance with accompanying instructions for use.

instructions of use documentation

Required documentation that accompanies high-risk AI systems, containing information about characteristics, capabilities, limitations, performance accuracy metrics, and appropriate usage guidance for deployers.
  • high-risk AI systems: High-risk AI systems must be accompanied by instructions of use containing relevant information about their characteristics, limitations, and appropriate use.

intellectual property rights data_category

Information protected under confidentiality obligations, including source code and trade secrets.
  • Confidentiality: The confidentiality obligation protects intellectual property rights and trade secrets.

intended purpose legal_article

The use for which an AI system is intended by the provider, including the specific context and conditions of use as specified in instructions for use, promotional materials, and technical documentation.
  • AI system: Intended purpose defines how an AI system is intended to be used by the provider.
  • reasonably foreseeable misuse: Reasonably foreseeable misuse is defined in contrast to intended purpose.

intended purpose evaluation_criterion

The specified purpose for which an AI system is designed, which must remain consistent with any changes made to the system.
  • AI system: Changes to the AI system must not affect its intended purpose or compliance with requirements.

Interinstitutional Agreement of 13 April 2016 on Better Law-Making treaty

Agreement establishing principles for consultation of Member State experts in the preparation and consultation procedures for delegated acts.
  • European Commission: Commission consultations must follow principles in the Interinstitutional Agreement.
  • Commission: Commission consults experts designated by Member States in accordance with principles in the agreement.

internal control technical_requirement

A conformity assessment procedure based on internal control mechanisms for high-risk AI systems, conducted without involvement of notified bodies.
  • Annex VI: The internal control conformity assessment procedure is documented in Annex VI.

internal market market_actor

The unified market of the European Union where compliant high-risk AI systems bearing CE marking can move freely.
  • high-risk AI system: Compliant high-risk AI systems bearing CE marking can move freely within the internal market.

internal market fragmentation evaluation_criterion

A negative outcome that diverging national rules on AI could cause, hampering the free circulation and uptake of AI systems.

international organisations institution

Organizations operating at international level that may be exempt from regulation when acting in cooperation or international agreements for law enforcement and judicial cooperation.
  • this Regulation: International organizations are exempt from the regulation when acting in cooperation or international agreements for law enforcement and judicial cooperation.

Ireland market_actor

A Member State not bound by certain rules governing judicial cooperation in criminal matters and police cooperation, with specific exemptions from provisions regarding biometric identification and AI systems.
  • Protocol No 21: Protocol No 21 governs the position of Ireland regarding certain EU rules.
  • Article 5(1): Ireland is not bound by certain provisions of Article 5(1) regarding biometric categorisation systems.
  • Regulation 2024/1689: Ireland is not bound by rules governing judicial cooperation in criminal matters and police cooperation under certain conditions.

judicial authority institution

An independent judicial body authorized to grant express and specific authorization for the use of real-time remote biometric identification systems.

judicial authority legislative_body

A court or judicial body responsible for granting authorization for the use of biometric identification systems.

key performance indicators evaluation_criterion

Measurable metrics included in codes of practice to assess the achievement of specific objectives and monitor compliance.
  • codes of practice: Codes of practice contain key performance indicators to measure achievement of objectives.

Large generative AI models ai_model

A typical example of general-purpose AI models that allow flexible generation of content such as text, audio, images, or video across a wide range of tasks.
  • general-purpose AI models: Large generative AI models are classified as a typical example of general-purpose AI models.

large generative AI models ai_system

AI systems capable of generating text, images, and other content, which require access to vast amounts of data for development and training.
  • text and data mining: Large generative AI models require text and data mining techniques for development and training.
  • rightsholders' authorization requirement: Providers of general-purpose AI models must obtain authorization from rightsholders for text and data mining when rights to opt out have been expressly reserved.

large-scale IT systems institution

IT systems established by legal acts listed in an annex to the Regulation, operated by public authorities.

large-scale IT systems ai_system

IT systems established by legal acts listed in Annex X that must comply with the Regulation by specific deadlines.
  • Regulation: The Regulation applies to large-scale IT systems established by legal acts listed in Annex X.

Law enforcement institution

Public authorities responsible for enforcing laws and maintaining public order, which may use AI systems for biometric identification and surveillance.

Law enforcement market_actor

Public authorities and institutions responsible for law enforcement that deploy and authorize AI systems for identification and surveillance purposes.
  • Regulation 2024/1689: The regulation applies to the use of remote biometric identification systems by law enforcement authorities.

law enforcement legal_obligation

The purpose for which AI systems for real-time remote biometric identification may be authorized for use by competent authorities in publicly accessible spaces.
  • Regulation: The Regulation prohibits the use of AI systems for real-time remote biometric identification for law enforcement purposes, subject to certain exceptions.
  • remote biometric identification systems: Remote biometric identification systems may be used for law enforcement purposes under authorization.

law enforcement and judicial cooperation legal_obligation

Framework for cooperation between the Union and third countries or international organizations for law enforcement and judicial purposes.
  • Europol: Europol is involved in establishing cooperation frameworks and agreements for law enforcement and judicial cooperation.

law enforcement authorities market_actor

Public authorities responsible for law enforcement who use, deploy, or provide high-risk AI systems and are subject to confidentiality and documentation requirements.
  • real-time remote biometric identification systems: Real-time remote biometric identification systems are used by law enforcement authorities in publicly accessible spaces.
  • information systems: Information systems are used by law enforcement authorities to identify persons during identity checks.
  • Regulation (EU) 2024/1689: The regulation applies to law enforcement authorities that deploy post-remote biometric identification systems.
  • Regulation (EU) 2024/1689: The regulation prohibits law enforcement authorities from using post-remote biometric identification systems in an untargeted way without proper legal basis.
  • AI regulatory sandbox: Law enforcement processing of personal data in sandboxes is subject to specific Union or national law and cumulative conditions.
  • high-risk AI systems: Law enforcement authorities can be providers of high-risk AI systems.

law enforcement authorities institution

Public authorities responsible for law enforcement and criminal justice operations, which may deploy specific high-risk AI systems without prior market surveillance authorization in justified situations.

law enforcement authority market_actor

The entity responsible for using real-time remote biometric identification systems and obtaining necessary authorizations and safeguards.

law enforcement authority institution

A public authority or body entrusted by Member State law competent for prevention, investigation, detection or prosecution of criminal offences that is responsible for requesting authorization and using real-time biometric identification systems.

law enforcement purposes legal_obligation

The designated use case for real-time remote biometric identification systems as specified in regulatory frameworks.

law enforcement use authorization legal_obligation

A requirement that real-time remote biometric identification systems in publicly accessible spaces must obtain prior authorization from judicial or administrative authorities.
  • Article 49: The authorization requirement is established in relation to Article 49 registration procedures.

law enforcement, migration, asylum and border control management data_category

High-risk AI application areas requiring registration in a secure non-public section of the EU database.
  • EU database: High-risk AI systems in these areas must be registered in the secure non-public section of the EU database.

law enforcement, migration, border control or asylum market_actor

Sectors where the dual verification requirement may be considered disproportionate under Union or national law.
  • dual verification requirement: The dual verification requirement does not apply to high-risk AI systems used in law enforcement, migration, border control or asylum where Union or national law considers it disproportionate.

law-enforcement authorities institution

Authorities that may put specific high-risk AI systems into service without prior authorization in situations of urgent public security threat.
  • high-risk AI systems: Law-enforcement authorities may put specific high-risk AI systems into service without prior authorization in urgent public security situations.

Learning outcome evaluation systems ai_system

High-risk AI systems intended to evaluate learning outcomes and steer the learning process in educational institutions.
  • ANNEX III: Learning outcome evaluation systems are classified as high-risk AI systems in ANNEX III.

level of autonomy and scalability evaluation_criterion

A criterion for evaluating the systemic risk potential of a general-purpose AI model.

Liability insurance technical_requirement

A requirement for notified bodies to maintain appropriate liability insurance for conformity assessment activities.
  • Notified bodies: Notified bodies are required to take out appropriate liability insurance for their conformity assessment activities.

Logging capabilities technical_requirement

Technical features enabling the recording of events relevant to system traceability, risk identification, and post-market monitoring.
  • Article 12: Article 12 specifies that logging capabilities must enable recording of events relevant to system traceability and risk identification.
  • Article 72: Logging capabilities facilitate post-market monitoring as referenced in Article 72.

logic- and knowledge-based approaches technical_requirement

Techniques that enable AI systems to infer from encoded knowledge or symbolic representation of tasks to be solved.
  • AI system: AI systems can be built using logic- and knowledge-based approaches as techniques enabling inference.

logs documentation

Records that deployers must properly collect, store and interpret in accordance with regulatory requirements.

logs data_category

Automatically generated records from high-risk AI systems that deployers must maintain for at least six months and provide to authorities upon request.
  • Article 12(1): Logs are automatically generated by high-risk AI systems as specified in Article 12(1).
  • deployers: Deployers are required to keep logs automatically generated by high-risk AI systems for at least six months.
  • Union law on the protection of personal data: Log retention requirements are subject to Union law on personal data protection which may provide alternative requirements.

logs automatically generated by high-risk AI systems documentation

Automatic records maintained by high-risk AI systems as part of documentation required under financial services law.

machine learning technical_requirement

A technique that enables AI systems to learn from data how to achieve certain objectives.
  • AI system: AI systems can be built using machine learning approaches as one of the key techniques enabling inference.

Machine-brain interfaces ai_system

Technology that facilitates AI-enabled manipulation by allowing higher degree of control over stimuli presented to persons.

Machine-readable marking technical_requirement

Technical requirement for marking synthetic content outputs as artificially generated or manipulated in machine-readable format.
  • Article 50: Article 50 requires providers to ensure synthetic content is marked in machine-readable format as artificially generated.

making available on the market legal_obligation

The supply of an AI system or a general-purpose AI model for distribution or use on the Union market in the course of a commercial activity, whether in return for payment or free of charge.
  • distributor: Distributors make AI systems available on the Union market.

mandatory requirements legal_obligation

Obligatory measures that providers of high-risk AI systems must comply with to ensure trustworthiness and mitigate risks.
  • high-risk AI systems: High-risk AI systems must comply with mandatory requirements to mitigate risks and ensure trustworthiness.
  • Charter: Mandatory requirements are based on applicable requirements resulting from the Charter.
  • provider: Providers must adopt measures to comply with mandatory requirements of the Regulation.

mandatory requirements for high-risk AI systems legal_obligation

Mandatory requirements that the Commission must take into account when adopting delegated or implementing acts for high-risk AI systems.
  • high-risk AI systems: High-risk AI systems are subject to mandatory requirements laid down in the Regulation.

manipulative or deceptive techniques technical_requirement

Techniques designed to deceive or manipulate persons, prohibited in AI systems.
  • Prohibited AI practices: Prohibited AI practices prohibit the use of manipulative or deceptive techniques in AI systems.

Manipulative or exploitative AI-enabled practices legal_obligation

Prohibited practices involving AI systems that manipulate or exploit individuals without requiring intent to cause harm.
  • Regulation: The regulation establishes prohibitions for manipulative and exploitative AI-enabled practices.

market actors market_actor

Relevant actors placing AI systems or models on the market, putting them into service or use in the Union, subject to requirements and obligations.
  • This Regulation: This Regulation applies to market actors placing AI systems or models on the market or putting them into service.

market operators market_actor

Entities operating in the AI market who should develop common criteria and shared understanding of relevant concepts.
  • Regulation 2024/1689: Market operators must develop common criteria and shared understanding of concepts provided in the regulation.

market surveillance legal_obligation

Obligation for competent authorities to conduct ex post market surveillance activities regarding AI systems in financial institutions.
  • Regulation (EU) 2019/1020: Regulation 2019/1020 provides powers to competent authorities to enforce market surveillance requirements.
  • Directive 2013/36/EU: Directive 2013/36/EU requires competent authorities to conduct market surveillance of AI systems in financial institutions.

market surveillance authorities institution

National authorities designated to monitor high-risk AI systems on the market, enforce compliance with regulations, access technical documentation, and exercise corrective powers independently and impartially.
  • high-risk AI systems: Market surveillance authorities monitor and oversee compliance of high-risk AI systems, conduct joint activities to promote compliance, and can authorize market placement under exceptional circumstances.
  • EU database: Market surveillance authorities have restricted access to the secure non-public section of the EU database.
  • This Regulation: The regulation establishes market surveillance authorities with enforcement powers.
  • This Regulation: Market surveillance authorities exercise enforcement powers laid down in this regulation.
  • this Regulation: Market surveillance authorities have enforcement powers laid down in this Regulation.
  • Regulation (EU) 2019/1020: Market surveillance authorities exercise powers laid down in Regulation (EU) 2019/1020.
  • Member States: Member States establish and maintain market surveillance authorities with powers to enforce AI regulations in accordance with Article 75.
  • AI systems: Market surveillance authorities assess the risk posed by AI systems and may take measures when they present a risk.
  • access to personal data: Market surveillance authorities are required to have power to obtain access to all personal data being processed.
  • AI Office: The AI Office provides coordination support for joint investigations conducted by market surveillance authorities.
  • Providers of high-risk AI systems: Providers must inform market surveillance authorities of non-compliance and corrective actions.
  • importer: Importers must inform market surveillance authorities of non-conformity or risk issues with high-risk AI systems.
  • notified body: Notified bodies must respond to requests for information from market surveillance authorities regarding conformity assessment activities.
  • scientific panel: The scientific panel supports the work of market surveillance authorities at their request.
  • providers: Providers must report serious incidents to market surveillance authorities of Member States where incidents occurred.
  • Article 14(4): Market surveillance authorities may exercise powers specified in Article 14(4) remotely for enforcement.
  • Regulation (EU) 2016/679: Market surveillance authorities are designated based on competent data protection supervisory authorities under this regulation.
  • Directive (EU) 2016/680: Designation of market surveillance authorities is subject to conditions laid down in Articles 41 to 44 of this directive.
  • Court of Justice of the European Union: Market surveillance activities do not apply to the Court of Justice when acting in its judicial capacity.
  • Member States: Member States establish and maintain market surveillance authorities with powers to enforce AI regulations in accordance with Article 75.
  • Commission: Market surveillance authorities and the Commission can propose joint activities and investigations.
  • Regulation (EU) 2019/1020: Joint activities and investigations follow procedures outlined in Article 9 of this regulation.
  • Article 60: Market surveillance authorities must verify compliance with Article 60 as part of their supervisory role.
  • Article 77: Article 77 requires market surveillance authorities to organize testing and maintain communication with public authorities.
  • Confidentiality: The confidentiality obligation applies to market surveillance authorities in their regulatory activities.
  • cybersecurity measures: Authorities must implement adequate and effective cybersecurity measures to protect information and data.
  • Article 74: Market surveillance authorities' access rights are established in Article 74(8) and (9).
  • security clearance requirement: Security clearance requirement applies to market surveillance authority staff accessing technical documentation.
  • The Commission: Market surveillance authorities may request the Commission to exercise enforcement powers under the Regulation.

market surveillance authorities market_actor

Authorities responsible for monitoring market compliance with the Regulation, investigating possible infringements, and participating in the Board's standing sub-group.
  • AI Office: Market surveillance authorities can request the AI Office to investigate possible infringements.
  • Board: The Board establishes a standing sub-group providing a platform for cooperation among market surveillance authorities.

Market surveillance authorities legislative_body

Authorities responsible for monitoring compliance and market surveillance activities related to AI systems.
  • Exit report: Market surveillance authorities take exit reports positively into account for accelerating assessment procedures.

market surveillance authority institution

A national competent authority designated by each Member State to supervise market compliance of AI systems, monitor high-risk AI systems, receive notifications and complaints, and enforce regulatory requirements with investigative and corrective powers.
  • notification requirement: Market surveillance authorities must be notified of each use of real-time biometric identification systems.
  • real-time biometric identification system: Market surveillance authorities must be notified of each use of real-time biometric identification systems.
  • Commission: Market surveillance authority notifies the Commission of evaluation results, required actions, and non-compliance not restricted to national territory.
  • deployer: Deployers must notify market surveillance authorities of fundamental rights impact assessment results, risks, and serious incidents without undue delay.
  • Member States: Each Member State shall designate at least one market surveillance authority as a national competent authority and single point of contact.
  • confidentiality rules: Members of market surveillance authorities are subject to confidentiality rules under the Regulation.
  • high-risk AI systems: High-risk AI systems are subject to evaluation and monitoring by market surveillance authorities.
  • personal data: Market surveillance authorities have the power to obtain access to personal data processed by high-risk AI systems.
  • high-risk AI system: Market surveillance authorities are responsible for supervision of high-risk AI systems and enforce compliance by restricting or prohibiting non-compliant systems.
  • AI Office: Market surveillance authorities should cooperate with the AI Office to carry out evaluations of compliance for general-purpose AI systems.
  • Chapter VI of Regulation (EU) 2019/1020: Procedures for mutual assistance in cross-border cases apply to market surveillance authorities.
  • Regulation: The Regulation establishes the role of market surveillance authorities to receive complaints about infringements.
  • Regulation (EU) 2019/1020: Market surveillance authorities carry out activities pursuant to Regulation (EU) 2019/1020.
  • real-time remote biometric identification system: Use of the system must be notified to and supervised by the market surveillance authority.
  • authorised representative: The authorised representative must inform the market surveillance authority of mandate termination.
  • post-remote biometric identification system: Market surveillance authorities monitor and evaluate the use of post-remote biometric identification systems.
  • notified body: Market surveillance authority acts as a notified body for high-risk AI systems used by law enforcement and Union institutions.
  • high-risk AI systems: Market surveillance authorities may authorize the placing on the market or putting into service of specific high-risk AI systems under exceptional reasons.
  • authorization: Market surveillance authorities issue authorizations for high-risk AI systems.
  • real-world testing plan: Real-world testing plans must be submitted to and approved by the market surveillance authority.
  • testing in real world conditions: Market surveillance authorities use their competences and powers to ensure testing in real world conditions complies with regulations and is conducted safely.
  • Article 3, point (49)(c): Market surveillance authorities must follow the definition of serious incidents in Article 3, point (49)(c).
  • Article 77(1): Market surveillance authorities are required to inform national public authorities or bodies referred to in Article 77(1).
  • Article 19 of Regulation (EU) 2019/1020: Market surveillance authorities must take appropriate measures as provided in Article 19 within seven days.
  • Union law on competition rules: Market surveillance authorities must consider Union competition rules during their surveillance activities.
  • Commission: Market surveillance authority notifies the Commission of evaluation results, required actions, and non-compliance not restricted to national territory.
  • Articles 79 to 83: Procedural articles do not apply to AI systems where equivalent protection procedures already exist in sectoral legislation.
  • Member States: Each Member State shall designate at least one market surveillance authority as a national competent authority and single point of contact.
  • Article 14 of Regulation (EU) 2019/1020: Market surveillance authorities exercise powers under Article 14 of Regulation (EU) 2019/1020, including remote enforcement capabilities.
  • high-risk AI system: Market surveillance authorities are responsible for supervision of high-risk AI systems and enforce compliance by restricting or prohibiting non-compliant systems.
  • AI Office: Market surveillance authorities cooperate with and submit requests to the AI Office for compliance evaluations.
  • Article 78: Market surveillance authorities must safeguard confidentiality of information in accordance with Article 78.
  • AI systems presenting a risk: Market surveillance authorities carry out evaluation of AI systems presenting a risk for regulatory compliance.
  • AI system: Market surveillance authority evaluates AI systems for compliance with regulatory requirements and proper high-risk classification.
  • corrective actions: Market surveillance authority requires operators to take corrective actions to bring AI systems into compliance.
  • Commission: Market surveillance authority notifies the Commission of evaluation results, required actions, and non-compliance not restricted to national territory.
  • Member States: Market surveillance authority informs other Member States of evaluation results and required actions.
  • Regulation (EU) 2019/1020: Article 18 of Regulation (EU) 2019/1020 applies to measures taken by market surveillance authorities.
  • AI system: Market surveillance authority evaluates AI systems for compliance with regulatory requirements and proper high-risk classification.
  • Commission guidelines: Market surveillance authority bases evaluation on Commission guidelines.
  • Commission: Market surveillance authority notifies the Commission of evaluation results, required actions, and non-compliance not restricted to national territory.
  • Article 11 of Regulation (EU) 2019/1020: Market surveillance authorities exercise monitoring powers in accordance with Article 11 of Regulation (EU) 2019/1020.
  • EU database: Market surveillance authorities may access information stored in the EU database referred to in Article 71.
  • Commission: The Commission evaluates national measures and enters into consultation with market surveillance authorities.
  • Commission: The Commission enters into consultation with market surveillance authorities regarding national measures.
  • Article 79: Market surveillance authority must perform evaluation under Article 79.
  • corrective action: Market surveillance authority requires operators to take corrective action.
  • Article 85: Article 85 requires market surveillance authorities to handle complaints regarding infringements of the regulation.

market surveillance authority legislative_body

Authority responsible for monitoring compliance with testing requirements and receiving notifications regarding testing extensions.

market surveillance authority legal_obligation

At least one national competent authority designated by each Member State for market surveillance purposes.
  • Member States: Each Member State shall designate at least one market surveillance authority as a national competent authority and single point of contact.

Marking in machine readable format technical_requirement

Technical requirement for AI system providers to embed solutions enabling machine-readable marking and detection of AI-generated or manipulated output.
  • Regulation 2024/1689: The regulation requires AI system providers to embed technical solutions for machine-readable marking.
  • AI systems generating synthetic content: AI systems generating synthetic content are subject to marking requirements.
  • General-purpose AI models: General-purpose AI models generating content are subject to marking and detection requirements.
  • AI system providers: Providers are required to embed technical solutions for marking AI-generated content.
  • Downstream providers: Downstream providers are facilitated in fulfilling marking obligations through techniques implemented at system or model level.

marking obligation legal_obligation

An obligation requiring clear and distinguishable disclosure that AI-generated or manipulated content has been artificially created.
  • AI system: AI systems are subject to marking obligations for generated or manipulated content.
  • AI model: AI models can be implemented with techniques to facilitate fulfillment of marking obligations.

material influence on decision-making evaluation_criterion

A criterion used to determine whether an AI system in a pre-defined high-risk area actually poses significant risk by assessing its impact on the substance and outcome of decision-making.
  • Regulation: The Regulation establishes the criterion of material influence on decision-making to identify exceptions to high-risk classification.

medical device ai_system

A device regulated under Regulation (EU) 2017/745 that may incorporate high-risk AI systems.

Medical or safety AI systems ai_system

AI systems placed on the market strictly for medical or safety reasons, such as systems intended for therapeutic use.

Medical treatment practices legal_obligation

Lawful practices in medical context including psychological treatment and physical rehabilitation carried out in accordance with applicable law and medical standards.
  • Regulation: Medical treatment practices are exempt from the regulation's prohibitions when carried out in accordance with applicable law and medical standards.

Member State legislative_body

A Member State of the European Union responsible for authorizing the use of real-time biometric identification systems, establishing national rules, determining language requirements, overseeing notified bodies, conducting market surveillance, and implementing AI regulations including administrative fines.
  • Commission: Member States must notify the Commission of their national rules and common specifications within the required timeframe.
  • Instructions of use: Instructions of use must be provided in a language determined by the Member State concerned.
  • Instructions for use: Instructions for use must be made available in a language as determined by the Member State concerned.
  • real-time remote biometric identification system: Member States establish national rules for authorization and use of real-time remote biometric identification systems.
  • Commission: Member States must notify the Commission of their national rules and common specifications within the required timeframe.
  • technical documentation: Member States determine conditions under which documentation remains at the disposal of national competent authorities.
  • Notified bodies: Member States have authority over notified bodies established within their jurisdiction.
  • notified body: Member States notify bodies and are responsible for ensuring their compliance.
  • authorization: Member States can raise objections to authorizations issued by other Member States' market surveillance authorities.
  • AI regulatory sandboxes: Member States may establish AI regulatory sandboxes with participation recognized uniformly across the Union.
  • AI system: Member States shall take restrictive measures such as requiring withdrawal of non-compliant AI systems from their market.
  • Commission: Member States must notify the Commission of their national rules and common specifications within the required timeframe.

Member States legislative_body

Individual EU countries responsible for implementing and enforcing the AI Regulation, establishing competent authorities and regulatory sandboxes, designating market surveillance authorities, and ensuring compliance within their jurisdictions while retaining competences concerning national security.
  • AI Regulation: The regulation prevents Member States from imposing restrictions on AI development, marketing, and use unless explicitly authorized.
  • high-risk AI system: Member States must not create unjustified obstacles to placing compliant high-risk AI systems on the market.
  • AI regulatory sandbox: Member States are required to establish at least one AI regulatory sandbox at national level through their competent authorities.
  • AI regulatory sandbox: Member States are required to establish at least one AI regulatory sandbox at national level through their competent authorities.
  • AI regulatory sandboxes: Member States shall establish and maintain at least one AI regulatory sandbox at national level to support innovation.
  • SMEs: Member States should develop initiatives and provide support channels for SMEs throughout their development path.
  • this Regulation: Member States are responsible for ensuring compliance with the regulation within their jurisdictions.
  • testing and experimentation facilities: Member States establish testing and experimentation facilities at national level.
  • Board: The Board is composed of representatives of Member States.
  • scientific panel: Member States can request support from the scientific panel for enforcement activities.
  • Regulation: The Regulation applies to and must be enforced by Member States.
  • notifying authority: Each Member State should designate at least one notifying authority as a national competent authority.
  • market surveillance authority: Each Member State shall designate at least one market surveillance authority as a national competent authority and single point of contact.
  • market surveillance authorities: Member States establish and maintain market surveillance authorities with powers to enforce AI regulations in accordance with Article 75.
  • penalties and administrative fines: Member States must establish and notify penalty rules to the Commission.
  • Regulation: The Regulation applies to and must be enforced by Member States.
  • This Regulation: The regulation respects the competences of Member States concerning national security.
  • Commission: Member States must immediately inform the Commission of findings regarding AI system risks and may request guideline updates.
  • notifying authority: Notifying authorities must notify other Member States of conformity assessment bodies.
  • notified body: Member States can raise objections to notified body notifications and notify conformity assessment bodies.
  • notifying authority: The notifying authority must inform other Member States about designation changes and certificate suspensions.
  • 2 August 2026: Member States must ensure operational sandboxes by the deadline of 2 August 2026.
  • European Artificial Intelligence Board: Member States designate representatives to compose the European Artificial Intelligence Board.
  • Regulation: Member States are required to facilitate tasks entrusted to the AI Office and implement the Regulation.
  • Board: Member States' designated representatives adopt the Board's rules of procedure by two-thirds majority.
  • Board: The Board advises and assists Member States in applying the Regulation.
  • the Board: The Board is required to advise and assist Member States.
  • general-purpose AI models: Member States receive opinions on qualified alerts regarding general-purpose AI models and monitor their enforcement.
  • scientific panel: Member States may call upon experts of the scientific panel to support their enforcement activities.
  • notifying authority: Member States are required to establish or designate at least one notifying authority.
  • market surveillance authority: Each Member State shall designate at least one market surveillance authority as a national competent authority and single point of contact.
  • Commission: The Commission requires Member States to communicate the identity of competent authorities.
  • scientific panel: Experts from the scientific panel support Member States' enforcement activities under the Regulation.
  • Regulation 2024/1689: The regulation applies to and establishes obligations for Member States regarding competent authorities.
  • national competent authorities: Member States designate and ensure adequate resourcing of national competent authorities.
  • Commission: Member States must immediately inform the Commission of findings regarding AI system risks and may request guideline updates.
  • EU database for high-risk AI systems: Member States collaborate with the Commission in setting up and maintaining the EU database.
  • market surveillance authority: Each Member State shall designate at least one market surveillance authority as a national competent authority and single point of contact.
  • market surveillance authorities: Member States establish and maintain market surveillance authorities with powers to enforce AI regulations in accordance with Article 75.
  • market surveillance authority: Market surveillance authority informs other Member States of evaluation results and required actions.
  • Commission: Member States must immediately inform the Commission of findings regarding AI system risks and may request guideline updates.
  • Commission: Commission shall evaluate national measures taken by Member States.
  • AI Office: The AI Office and Member States jointly encourage and facilitate the drawing up of codes of conduct.
  • codes of conduct: Member States work with the AI Office to facilitate the development of codes of conduct.

Member States market_actor

EU member states responsible for implementing the Regulation, laying down penalties, ensuring compliance through competent authorities, and promoting research and development of beneficial AI solutions.
  • Regulation: Member States must align international agreements with the requirements of this Regulation.
  • AI system: Member States are encouraged to support development of AI systems with socially and environmentally beneficial outcomes.
  • Regulation (EU) 2023/988: The regulation applies to Member States who must implement its provisions.
  • Administrative penalties and fines: Member States must lay down effective, proportionate and dissuasive penalties for infringement.
  • Commission: Member States have control mechanisms over the Commission's exercise of implementing powers under Regulation (EU) No 182/2011.
  • Commission: Member States can request the Commission to update guidelines.
  • Article 99: Article 99 requires Member States to lay down penalty rules and enforcement measures.
  • Regulation: Member States apply penalties and enforce the Regulation through their competent authorities.
  • European Commission: The Commission requests information from Member States regarding AI system compliance.

Member States' law enforcement authorities market_actor

National law enforcement bodies of EU Member States authorized to request comparison with Eurodac data.
  • Eurodac: Member States' law enforcement authorities are authorized to request comparison with Eurodac data.

Metadata identifications technical_requirement

Technical method for identifying and marking AI-generated or manipulated content through metadata.
  • Content origin detection: Metadata identifications are cited as one appropriate technique for detecting AI-generated content.

microenterprises market_actor

Small business entities as defined by Recommendation 2003/361/EC that may fulfill quality management system obligations in a simplified manner.
  • quality management system: Microenterprises are subject to simplified quality management system requirements under Article 63.

military, defence or national security purposes legal_obligation

Specific purposes for which AI systems are excluded from the scope of the Regulation, allowing entities to use AI systems for these purposes without regulatory compliance requirements.
  • AI system: AI systems used for military, defence, or national security purposes are excluded from the scope of this Regulation.
  • Regulation: The Regulation excludes military, defence, and national security purposes from its scope.
  • This Regulation: The regulation does not apply to AI systems used exclusively for military, defence or national security purposes.

model architecture technical_requirement

Technical specification of a general-purpose AI model including its structure and number of parameters.
  • technical documentation: Technical documentation must include the architecture and number of parameters of the model.

model cards documentation

A widely adopted documentation practice for AI systems that facilitates information sharing along the AI value chain.
  • AI value chain: Model cards are encouraged as a documentation practice to accelerate information sharing along the AI value chain.

model evaluation technical_requirement

Required assessments that providers must conduct on general-purpose AI models with systemic risks, including adversarial testing prior to market placement.
  • provider: Providers of general-purpose AI models with systemic risks are required to perform model evaluations.

model evaluation legal_obligation

Requirement for providers to perform evaluation using standardized protocols and tools, including adversarial testing to identify and mitigate systemic risks.
  • Article 55: Article 55 requires providers to perform model evaluation using standardized protocols and tools.

model fairness technical_requirement

A factor influencing systemic risks of general-purpose AI models.
  • systemic risks: Systemic risks are influenced by model fairness.

model modification and fine-tuning technical_requirement

Changes made to existing AI models that trigger limited compliance obligations for providers regarding documentation updates.
  • technical documentation: Modifications or fine-tuning of models require updating technical documentation with information on changes and new training data sources.

model poisoning technical_requirement

Attack method targeting pre-trained components used in training that requires preventive and detection measures.
  • high-risk AI systems: Technical solutions for high-risk AI systems must include measures to address model poisoning attacks.

model reliability technical_requirement

A factor influencing systemic risks of general-purpose AI models.
  • systemic risks: Systemic risks are influenced by model reliability.

model security technical_requirement

A factor influencing systemic risks of general-purpose AI models.
  • systemic risks: Systemic risks are influenced by model security.

model testing and evaluation technical_requirement

Technical processes for testing and evaluating AI models as part of system development.

model training technical_requirement

A component of AI system development involving the training of AI models.

Monitoring actions legal_obligation

Obligation for the AI Office to take necessary actions to monitor effective implementation and compliance with the Regulation.
  • Regulation 2024/1689: The Regulation requires the AI Office to take monitoring actions to ensure compliance.

monitoring obligation legal_obligation

Requirement to monitor high-risk AI systems, which can be fulfilled through compliance with internal governance rules under financial services law.
  • high-risk AI system: High-risk AI systems are subject to monitoring obligations that can be fulfilled through compliance with financial services law.

monitoring of AI system performance legal_obligation

Obligation for deployers to monitor the functioning of high-risk AI systems in real-life settings and maintain appropriate records.
  • deployers: Deployers are required to monitor the functioning of high-risk AI systems and maintain records as appropriate.

monitoring, functioning and control technical_requirement

Requirements for detailed information about AI system monitoring, performance capabilities, limitations, and control mechanisms.

mutual recognition agreements treaty

International agreements enabling recognition of conformity assessment results from third countries.
  • Commission: The Commission is tasked with pursuing the conclusion of mutual recognition agreements with third countries.

narrow procedural task technical_requirement

A limited-scope task performed by AI systems such as transforming unstructured data, classifying documents, or detecting duplicates that poses only limited risks.
  • high-risk AI systems: High-risk AI systems performing narrow procedural tasks may not pose significant risk and are subject to limited risk assessment.

National accreditation body institution

An institution that issues accreditation certificates attesting that conformity assessment bodies fulfill specified requirements.
  • Accreditation certificate: National accreditation bodies issue accreditation certificates attesting compliance with Article 31 requirements.

national competent authorities institution

Member State authorities designated to supervise AI systems, receive technical documentation and incident reports, establish and oversee AI regulatory sandboxes, enforce regulations, and apply administrative fines.
  • technical documentation: Technical documentation must be made available upon request to national competent authorities.
  • general-purpose AI models: Providers must report relevant information and corrective measures to national competent authorities.
  • AI Office: The AI Office collaborates with relevant national competent authorities on codes of practice.
  • AI regulatory sandbox: National competent authorities are responsible for establishing and managing AI regulatory sandboxes.
  • AI system: National competent authorities receive reports and documented allegations regarding potential harms from AI systems.
  • high-risk AI systems: High-risk AI systems are subject to oversight by national competent authorities who have access to required documentation.
  • certificate suspension or restriction: National competent authorities must receive information about suspended or withdrawn certificates and take appropriate measures.
  • provider: Provider must confirm in writing to national competent authorities that another qualified notified body assumes functions during suspension.
  • EU declaration of conformity: National competent authorities receive and review EU declarations of conformity upon request.
  • the Board: The Board contributes to coordination among national competent authorities.
  • Regulation 2024/1689: The regulation governs the tasks and requirements for national competent authorities.
  • Member States: Member States designate and ensure adequate resourcing of national competent authorities.
  • Article 78: National competent authorities must act in accordance with confidentiality obligations set out in Article 78.
  • cybersecurity: National competent authorities must take appropriate measures to ensure an adequate level of cybersecurity.
  • AI technologies expertise: Personnel of national competent authorities must have in-depth understanding of AI technologies, data, and data computing.
  • confidential information exchange: National competent authorities are subject to confidential information exchange obligations.
  • European Commission: The Commission requests information from national competent authorities.

national competent authorities legislative_body

Government bodies responsible for notifying conformity assessment bodies, establishing and supervising AI regulatory sandboxes, and ensuring compliance with regulatory requirements.
  • notified bodies: National competent authorities notify and oversee notified bodies for conformity assessment.
  • AI regulatory sandbox: National competent authorities establish and manage AI regulatory sandboxes.
  • notified bodies: National competent authorities may cooperate with notified bodies in supervising AI regulatory sandboxes.
  • European standardisation organisations: National competent authorities may involve European standardisation organisations in sandbox supervision.

national competent authority institution

A Member State authority responsible for oversight of AI systems, confirming risk assessments, and managing certificate validity periods.
  • European Data Protection Supervisor: The European Data Protection Supervisor functions as a national competent authority for AI systems used by Union institutions.
  • high-risk AI system: National competent authorities confirm risk assessments and manage certificate validity for high-risk AI systems.

national data protection authorities institution

Member State authorities responsible for data protection oversight and reporting on remote biometric identification system use under Directive (EU) 2016/680.
  • Directive (EU) 2016/680: Directive (EU) 2016/680 confers powers to national data protection authorities.
  • Commission: National data protection authorities must submit annual reports to the Commission on remote biometric identification system use.

national data protection authorities legislative_body

National authorities responsible for data protection oversight and supervision of personal data processing in AI systems.
  • National competent authorities: National competent authorities must ensure that national data protection authorities are associated with sandbox operations when personal data is involved.

national data protection authority institution

A national authority responsible for ensuring compliance with data protection regulations in the use of biometric identification systems and must be notified of each use of real-time biometric identification systems.

national law regulation

Member State-specific legislation that AI systems must comply with during regulatory sandbox operations.
  • innovative AI systems: Innovative AI systems must comply with relevant national law of Member States.

National market surveillance authorities institution

Member State authorities responsible for monitoring market compliance and submitting annual reports on remote biometric identification system use.
  • Commission: National market surveillance authorities must submit annual reports to the Commission on remote biometric identification system use.

natural person or group of persons data_category

Individuals or groups of individuals who are subjects of AI system evaluation and classification.

natural persons data_category

Individuals who are subjects of decisions made or assisted by high-risk AI systems and who must be informed of such use.
  • deployers: Deployers must inform natural persons that they are subject to the use of high-risk AI systems.

natural persons located in the Union data_category

Individuals within the Union whose fundamental rights and freedoms require protection from AI system outputs.
  • this Regulation: The regulation aims to ensure effective protection of natural persons located in the Union.

necessity and proportionality requirement evaluation_criterion

A criterion requiring that biometric system use must be necessary and proportionate to achieving specified law enforcement objectives.

New Legislative Framework legislative_procedure

A Union harmonisation framework comprising harmonised rules that apply across sectors for product legislation, establishing requirements for products containing AI systems and clarifying operator roles and obligations.

Non-discrimination legal_obligation

Fundamental right and legal principle that AI systems should not violate through discriminatory outcomes based on protected characteristics.

non-discrimination law regulation

Union law prohibiting discriminatory practices.
  • Union law: Union law includes non-discrimination law as a component.

non-personal data data_category

Data that does not relate to identified or identifiable natural persons, requiring appropriate safeguards for transfer to third countries as defined in Regulation (EU) 2016/679.

Notification procedure legislative_procedure

The formal process by which notifying authorities notify the Commission and Member States of conformity assessment bodies.
  • Article 30: Article 30 defines the notification procedure for conformity assessment bodies.
  • attestation of competence: The notification procedure requires relevant attestation of competence.
  • AI systems: The notification procedure applies to conformity assessment bodies assessing specific types of AI systems.

notification requirement legal_obligation

The mandatory obligation to notify relevant authorities (market surveillance and data protection authorities) of the use of real-time remote biometric identification systems, or for providers of general-purpose AI models meeting systemic risk conditions to notify the Commission within two weeks.
  • market surveillance authority: Market surveillance authorities must be notified of each use of real-time biometric identification systems.
  • national data protection authority: National data protection authorities must be notified of each use of real-time biometric identification systems.
  • AI Office: The notification requirement mandates that providers inform the AI Office within two weeks of meeting systemic risk criteria.
  • Regulation 2024/1689: The regulation establishes the obligation for providers to notify the AI Office of systemic risk classification.
  • transparency obligations: Transparency obligations include the requirement to notify natural persons of AI system interaction.
  • accessible formats: Notifications must be provided in accessible formats for persons with disabilities.
  • real-time remote biometric identification system: The notification requirement applies to all uses of real-time remote biometric identification systems in publicly accessible spaces.
  • Article 52: Article 52 establishes the legal obligation for providers to notify the Commission within two weeks when conditions are met.
  • provider: The notification requirement applies to providers of general-purpose AI models that meet systemic risk conditions.
  • Commission: The notification requirement requires providers to inform the Commission of systemic risk conditions.

notified bodies institution

Third-party organizations designated by national competent authorities to conduct conformity assessments for high-risk AI systems and verify compliance with regulatory requirements.
  • conformity assessment: Conformity assessment procedures for high-risk AI systems involve notified bodies as third-party assessors.
  • Regulation: The regulation establishes the role and requirements for notified bodies in conducting third-party conformity assessments.
  • national competent authorities: National competent authorities notify and oversee notified bodies for conformity assessment.
  • Article R23 of Annex I to Decision No 768/2008/EC: Notifications of notified bodies are sent through the electronic notification tool established by this legal article.
  • national competent authorities: National competent authorities may cooperate with notified bodies in supervising AI regulatory sandboxes.
  • This Regulation: The regulation requires notified bodies for conformity assessment, with provisions applying from August 2025.
  • high-risk AI systems: Notified bodies approve changes and issue decisions regarding high-risk AI systems.
  • high-risk AI systems: Notified bodies issue decisions and documents related to changes and compliance of high-risk AI systems.
  • high-risk AI systems: Notified bodies are required to verify the conformity of high-risk AI systems.
  • Article 34: Notified bodies are subject to operational obligations specified in Article 34.
  • Article 33: Article 33 governs the subcontracting and subsidiary arrangements of notified bodies.
  • Article 34: Article 34 establishes operational obligations for notified bodies regarding verification and documentation.
  • conformity assessment procedures: Notified bodies must verify conformity in accordance with conformity assessment procedures.
  • high-risk AI systems: High-risk AI systems may be assessed by notified bodies as part of conformity assessment procedures.
  • Annex VII: Annex VII procedure requires involvement of notified bodies in quality management system and technical documentation assessment.
  • Annex VII: Certificates issued by notified bodies are issued in accordance with Annex VII.
  • Article 44: Article 44 establishes requirements for certificates issued by notified bodies.
  • Confidentiality: The confidentiality obligation applies to notified bodies involved in the application of the Regulation.

notified bodies market_actor

Relevant actors within the AI ecosystem involved in conformity assessment and testing of AI systems.
  • AI regulatory sandboxes: AI regulatory sandboxes facilitate the involvement of notified bodies in the AI ecosystem.

notified body institution

A conformity assessment body designated under national law to conduct conformity assessment procedures for high-risk AI systems, issue certificates, and remain responsible for monitoring compliance.
  • conformity assessment body: A notified body is a type of conformity assessment body that has been formally notified.
  • Providers of high-risk AI systems: Providers must inform notified bodies that issued certificates of non-compliance.
  • authorised representative: The authorised representative must inform the notified body of mandate termination where applicable.
  • high-risk AI system: Notified bodies evaluate the conformity of high-risk AI systems and issue certificates of conformity.
  • Article 31: Article 31 establishes the requirements that notified bodies must satisfy.
  • high-risk AI system: Notified bodies evaluate the conformity of high-risk AI systems and issue certificates of conformity.
  • organizational requirements: Notified bodies must satisfy organizational requirements.
  • cybersecurity requirements: Notified bodies must satisfy suitable cybersecurity requirements.
  • quality management requirements: Notified bodies must satisfy quality management requirements.
  • Commission: The Commission reviews notifications and decides on authorization of notified bodies.
  • Member States: Member States can raise objections to notified body notifications and notify conformity assessment bodies.
  • conformity assessment: Notified bodies are required to conduct conformity assessment activities for high-risk AI systems.
  • notifying authority: The notifying authority investigates notified bodies and can restrict, suspend, or withdraw their designation based on non-compliance.
  • high-risk AI systems: Notified bodies conduct conformity assessment for high-risk AI systems.
  • Article 31: Notified bodies must fulfill the requirements laid down in Article 31.
  • high-risk AI systems: Notified bodies issue certificates for high-risk AI systems and must ensure their continuing conformity.
  • 10-day notification requirement: Notified bodies must inform providers within 10 days when their designation has been suspended, restricted, or withdrawn.
  • certificate suspension or restriction: Notified bodies must monitor and remain responsible for certificates during suspension or restriction periods.
  • Article 37: Article 37 governs the challenge procedures and competence requirements for notified bodies.
  • Commission: The Commission investigates and evaluates the competence of notified bodies.
  • Commission: The Commission oversees notified bodies and ensures they meet requirements for notification.
  • Member State: Member States notify bodies and are responsible for ensuring their compliance.
  • conformity assessment procedures: Notified bodies are required to conduct conformity assessment procedures for high-risk AI systems.
  • conformity assessment procedure: Notified bodies conduct conformity assessments for high-risk AI systems.
  • market surveillance authority: Market surveillance authority acts as a notified body for high-risk AI systems used by law enforcement and Union institutions.
  • Article 31(4): Notified bodies must comply with requirements laid down in Article 31(4).
  • Article 31(5): Notified bodies must comply with requirements laid down in Article 31(5).
  • Article 31(10): Notified bodies must comply with requirements laid down in Article 31(10).
  • Article 31(11): Notified bodies must comply with requirements laid down in Article 31(11).
  • Article 45: Article 45 establishes information obligations that notified bodies must fulfill.
  • notifying authority: Notified bodies are required to inform the notifying authority of certificates, approvals, and conformity assessment activities.
  • Union technical documentation assessment certificate: Notified bodies issue Union technical documentation assessment certificates when conformity is established in accordance with Annex VII.
  • quality management system approval: Notified bodies issue quality management system approvals in accordance with Annex VII requirements.
  • market surveillance authorities: Notified bodies must respond to requests for information from market surveillance authorities regarding conformity assessment activities.
  • appeal procedure: An appeal procedure is available against decisions of notified bodies regarding conformity certificates.
  • Union technical documentation assessment certificates: Notified bodies issue, refuse, withdraw, suspend or restrict Union technical documentation assessment certificates.
  • Article 78: Notified bodies must safeguard confidentiality of information in accordance with Article 78.
  • Article 43: Notified bodies are responsible for conformity assessment procedures established in Article 43.
  • quality management system: The notified body assesses whether the quality management system satisfies the requirements referred to in Article 17.
  • provider: Providers must inform the notified body of any intended changes to the quality management system or AI system.
  • technical documentation: The notified body examines the technical documentation relating to the AI system.
  • technical documentation: The notified body requires examination of technical documentation to assess AI system conformity.
  • training, validation, and testing data sets: The notified body must be granted access to training, validation, and testing data sets for conformity assessment.
  • training and trained models: The notified body may require access to training and trained models of the AI system for conformity assessment.
  • quality management system: The notified body assesses whether the quality management system satisfies the requirements referred to in Article 17.
  • provider: Providers must inform the notified body of any intended changes to the quality management system or AI system.
  • periodic audits: The notified body carries out periodic audits to verify provider compliance.
  • AI systems: The notified body may conduct additional tests of AI systems for which a certificate was issued.
  • high-risk AI systems: Notified bodies conduct audits and tests of high-risk AI systems.
  • certificate: Certificates are issued by notified bodies to verify AI system compliance.

Notified body requirements legal_obligation

Requirements and obligations imposed on notified bodies pursuant to Articles 31, 33, and 34, subject to administrative fines for non-compliance.
  • Article 31: Notified body requirements are established in Article 31.
  • Article 33: Notified body requirements are established in Article 33.
  • Article 34: Notified body requirements are established in Article 34.
  • SMEs: SMEs are subject to reduced administrative fines for non-compliance with notified body requirements.

notifying authorities market_actor

Authorities responsible for notifying bodies under the Regulation and participating in the Board's standing sub-group.
  • Board: The Board establishes a standing sub-group providing a platform for cooperation among notifying authorities.

Notifying authorities institution

Competent authorities responsible for assessing and notifying conformity assessment bodies, organized to ensure independence in decision-making.
  • Article 78: Notifying authorities must safeguard confidentiality in accordance with Article 78.
  • Competent personnel requirement: Notifying authorities are required to have adequate competent personnel with expertise in information technologies, AI, law, and fundamental rights.
  • Conformity assessment bodies: Conformity assessment bodies must submit applications for notification to notifying authorities.

notifying authority institution

A national competent authority designated by each Member State to supervise the application of the Regulation, including assessment, designation, monitoring, and notification of conformity assessment bodies.
  • Member States: Each Member State should designate at least one notifying authority as a national competent authority.
  • Regulation: Notifying authorities must exercise their powers independently and impartially to ensure application of the Regulation.
  • conformity assessment body: Notifying authorities are responsible for assessment, designation, monitoring, and verification of compliance of conformity assessment bodies.
  • Article 28: Article 28 establishes requirements and procedures for notifying authorities.
  • Regulation (EC) No 765/2008: Notifying authorities may conduct assessment and monitoring in accordance with Regulation (EC) No 765/2008.
  • Commission: Notifying authorities must notify the Commission of conformity assessment bodies and provide relevant information upon request.
  • Member States: Notifying authorities must notify other Member States of conformity assessment bodies.
  • notified body: The notifying authority investigates notified bodies and can restrict, suspend, or withdraw their designation based on non-compliance.
  • Commission: Notifying authorities must notify the Commission of conformity assessment bodies and provide relevant information upon request.
  • Member States: The notifying authority must inform other Member States about designation changes and certificate suspensions.
  • certificate suspension or restriction: The notifying authority confirms risks and outlines timelines for remedying suspensions or restrictions.
  • Article 31: Notifying authorities are subject to responsibilities laid down in Article 31.
  • Commission: The Commission requires notifying authorities to provide relevant information and ensure notified bodies participate in coordination groups.
  • notified body: Notified bodies are required to inform the notifying authority of certificates, approvals, and conformity assessment activities.

notifying authority legal_obligation

At least one national competent authority designated by each Member State for notification purposes.
  • Member States: Member States are required to establish or designate at least one notifying authority.

number of business and end users evaluation_criterion

A criterion for assessing the systemic risk designation of a general-purpose AI model based on its user reach.

number of parameters evaluation_criterion

Criterion for designating general-purpose AI models with systemic risk, measuring model complexity.

number of registered end-users evaluation_criterion

Criterion for designating general-purpose AI models with systemic risk based on user base size.

Obligations for providers of general-purpose AI models legal_obligation

Set of mandatory requirements that providers of general-purpose AI models must fulfill under Article 53, including technical documentation, information sharing, and copyright policy implementation.
  • Article 53: Article 53 establishes the legal obligations that providers of general-purpose AI models must fulfill.
  • technical documentation: Providers must draw up and maintain technical documentation including training, testing, and evaluation results.
  • Annex XII: Information and documentation provided to AI system providers must contain minimum elements set out in Annex XII.
  • AI Office: Technical documentation must be provided to the AI Office upon request.
  • Union law on copyright and related rights: Providers must comply with Union law on copyright and related rights, including implementing policies to identify and comply with rights reservations.
  • general-purpose AI models: The obligations apply to providers of general-purpose AI models.
  • AI systems: Providers must make information available to providers of AI systems who intend to integrate general-purpose AI models.

Official Journal of the European Union documentation

Official publication where references to harmonised standards, certified high-risk AI systems, conformity statements, and revocation decisions are published.
  • cybersecurity requirement: References to certified high-risk AI systems meeting cybersecurity requirements are published in the Official Journal.
  • Harmonised standards: References to harmonised standards are published in the Official Journal of the European Union.
  • harmonised standard: References to harmonised standards are published in the Official Journal of the European Union.
  • implementing acts: Implementing acts are published in the Official Journal of the European Union.
  • delegated act: Decisions of revocation are published in the Official Journal to take effect.

Official Journal of the European Union institution

The official publication where harmonised standards and EU regulations are published.
  • harmonised standards: References to harmonised standards are published in the Official Journal of the European Union.

online chatbot ai_system

An example AI system that performs searches of websites and generates outputs combining different sources of information.

online search engines ai_system

AI systems that perform searches across websites and incorporate results into existing knowledge to generate combined outputs.

open-source model release technical_requirement

The public release of a general-purpose AI model in open-source format, which may complicate implementation of regulatory compliance measures.
  • Regulation 2024/1689: Open-source model releases are subject to special consideration under the regulation due to compliance implementation challenges.
  • Regulation: Open-source model releases may complicate implementation of compliance measures under the Regulation.

operator market_actor

An entity such as a provider, manufacturer, deployer, authorised representative, importer, or distributor involved in the supply chain of AI systems and responsible for ensuring corrective actions and compliance with regulations.
  • importer: Operator is defined as including importers among other market actors.
  • distributor: Operator is defined as including distributors among other market actors.
  • AI system: Operator must ensure corrective action is taken for all AI systems made available on the Union market.
  • Article 50: Operators must comply with Article 50.

operators of AI systems market_actor

Entities responsible for placing AI systems on the market or putting them into service, including providers and public authorities.
  • This Regulation: The Regulation requires operators to take necessary steps to comply with its requirements by specified deadlines.

ordinary legislative procedure legislative_procedure

The legislative procedure followed in the enactment of the AI Regulation.

organizational requirements technical_requirement

Requirements for notified bodies regarding organizational structure, allocation of responsibilities, and reporting lines.
  • notified body: Notified bodies must satisfy organizational requirements.

Oversight measures technical_requirement

Commensurate measures built into or implemented alongside high-risk AI systems to enable effective human oversight based on risk level and autonomy.
  • High-risk AI system: Oversight measures are implemented either built into the system or by the deployer based on risk and autonomy level.
  • Provider: Providers must identify and build oversight measures into high-risk AI systems before placing them on the market.
  • Deployer: Deployers must implement oversight measures identified by the provider that are appropriate for their context.

penalties and administrative fines legal_obligation

Enforcement mechanism requiring Member States to establish and implement rules on penalties and administrative fines, applicable from 2 August 2025.
  • Member States: Member States must establish and notify penalty rules to the Commission.
  • This Regulation: Penalty provisions apply from August 2025 as part of the regulation's enforcement.
  • AI Regulation 2024/1689: The regulation requires member states to establish and implement rules on penalties with application from 2 August 2025.

performance metrics technical_requirement

Expected level of performance that should be declared in the accompanying instructions of use for high-risk AI systems.
  • High-risk AI systems: Expected level of performance metrics should be declared in the accompanying instructions of use.

performance metrics evaluation_criterion

Metrics used to measure accuracy, robustness, and compliance with relevant requirements, as well as potentially discriminatory impacts.
  • Chapter III, Section 2: Performance metrics are subject to requirements established in Chapter III, Section 2.

performance of an AI system evaluation_criterion

The ability of an AI system to achieve its intended purpose.

performance, accuracy and robustness requirements technical_requirement

Technical requirements specifying that AI systems must meet adequate standards in performance, accuracy, and robustness before deployment.

periodic audits evaluation_criterion

Regular inspections conducted by notified bodies to verify that providers maintain and apply their quality management systems.
  • notified body: The notified body carries out periodic audits to verify provider compliance.

personal data data_category

Data relating to identified or identifiable natural persons, subject to Union data protection law and protection requirements when processed in connection with AI systems and biometric identification.
  • AI systems: AI systems may involve the processing of personal data in their design, development, or use.
  • free and open-source AI components: Use of personal data in free and open-source AI components is restricted to specific purposes like security and interoperability.
  • Regulation (EU) 2016/679: The GDPR governs the processing and protection of personal data used in the AI regulatory sandbox and throughout the European Union.
  • Regulation (EU) 2018/1725: This regulation governs personal data processing by EU institutions and bodies in the context of the AI regulatory sandbox.
  • Directive (EU) 2016/680: This directive applies to personal data processing by competent authorities in the AI regulatory sandbox context.
  • Article 6(4): Article 6(4) specifies conditions for reusing personal data collected for other purposes in the AI regulatory sandbox.
  • Article 9(2), point (g): This article establishes conditions for processing special categories of personal data in the sandbox.
  • providers and prospective providers: Providers must comply with data protection regulations when using personal data in the sandbox.
  • Union data protection law: Personal data transfers and processing are governed by Union data protection law.
  • market surveillance authority: Market surveillance authorities have the power to obtain access to personal data processed by high-risk AI systems.
  • Regulation /2024/1689/oj: The regulation governs the processing of personal data in connection with its rights and obligations.
  • Regulation (EU) 2016/679: Personal data is defined in Article 4, point (1), of Regulation (EU) 2016/679.
  • Article 59: Article 59 governs the further processing of personal data for AI system development in the sandbox.
  • Chapter III, Section 2: Chapter III, Section 2 establishes requirements that personal data processing must comply with in sandbox contexts.
  • Union data protection law: Personal data processing must comply with Union data protection law requirements.
  • functionally separate, isolated and protected data processing environment: Personal data in sandbox contexts must be processed in a functionally separate, isolated and protected environment.
  • EU database: The EU database contains personal data only as necessary, including names and contact details of responsible natural persons.

Personal data deletion legal_obligation

A requirement that personal data of testing subjects must be deleted after the test is performed, particularly in law enforcement contexts.

personal data processing data_category

The processing of personal data by innovative AI systems within regulatory sandboxes.

personal data processing legal_obligation

Processing of personal data in AI regulatory sandboxes and for AI system development, subject to specific legal conditions.

personal data protection legal_obligation

The requirement to protect individuals' personal data through lawful processing in accordance with Union and national law, particularly regarding restrictions on AI system use in law enforcement.
  • Regulation 2024/1689: The regulation establishes obligations for protecting individuals' personal data regarding AI system use in law enforcement.
  • Regulation: The Regulation does not provide legal ground for personal data processing unless specifically provided for within it.

personal data protection measures technical_requirement

Appropriate technical and organisational measures required to protect personal data processed in the sandbox context.
  • AI regulatory sandbox: The sandbox requires implementation of appropriate technical and organisational measures to protect personal data.

placing on the market legal_obligation

The first making available of an AI system or general-purpose AI model on the Union market, which triggers provider obligations.
  • general-purpose AI models: Obligations for providers of general-purpose AI models apply once the models are placed on the market.
  • importer: Importers place AI systems on the market bearing third-country trademarks.

polygraphs and similar tools ai_system

Tools used in law enforcement for assessing reliability and detecting deception.

post market monitoring legal_obligation

Ongoing monitoring of high-risk AI systems after they are placed on the market or put into service to verify compliance and track operations.
  • high-risk AI systems: High-risk AI systems must be subject to post market monitoring throughout their lifetime.
  • high-risk AI systems: High-risk AI systems are subject to post market monitoring to verify compliance and track operations.

post systems ai_system

AI systems that process biometric data that has already been captured, with comparison and identification occurring after a significant delay.

post-market monitoring legal_obligation

An ongoing obligation for providers of general-purpose AI models with systemic risks to continuously assess, monitor, and ensure compliance after placement on the market.
  • provider: Providers must continuously implement post-market monitoring for general-purpose AI models with systemic risks.

post-market monitoring plan documentation

A plan that forms part of technical documentation and establishes detailed provisions for evaluating and monitoring high-risk AI system performance after market placement.
  • Annex IV: The post-market monitoring plan is part of the technical documentation referred to in Annex IV.
  • Commission: The Commission shall adopt an implementing act laying down detailed provisions establishing a template for the post-market monitoring plan.
  • Article 72: The post-market monitoring plan is referred to in Article 72(3).

post-market monitoring plan legal_obligation

A required plan for monitoring high-risk AI systems after they are placed on the market.
  • Article 98(2): The post-market monitoring plan must be adopted in accordance with the examination procedure in Article 98(2).

post-market monitoring system technical_requirement

A system that providers must establish to actively and systematically collect, document, and analyze relevant data on the performance of high-risk AI systems after they are placed on the market.
  • high-risk AI system: Providers must establish and maintain a robust post-market monitoring system for high-risk AI systems in accordance with Article 72.
  • This Regulation: The regulation requires providers of high-risk AI systems to have a post-market monitoring system in place.
  • Article 72: The post-market monitoring system requirement is governed by Article 72.
  • Article 72: Article 72 establishes requirements for providers to establish and document a post-market monitoring system.
  • providers: Providers are required to establish and document a post-market monitoring system proportionate to the nature of AI technologies and risks.
  • high-risk AI systems: The post-market monitoring system applies to high-risk AI systems throughout their lifetime.
  • deployers: Deployers may provide relevant data to providers for the post-market monitoring system.

post-market monitoring system legal_obligation

All activities carried out by providers of AI systems to collect and review experience gained from the use of AI systems they place on the market or put into service for identifying corrective or preventive actions.

post-marketing monitoring legal_obligation

Obligation for providers to monitor AI systems after they are placed on the market, integrated into financial services directives.
  • Directive 2013/36/EU: Directive 2013/36/EU integrates post-marketing monitoring obligations for providers.

post-remote biometric identification ai_system

High-risk AI system used for biometric identification of persons in remote settings.
  • deployer: Deployers of post-remote biometric identification systems must obtain prior authorization from judicial or administrative authorities.
  • Directive (EU) 2016/680: Post-remote biometric identification systems are subject to Directive (EU) 2016/680.

post-remote biometric identification system ai_system

A high-risk AI system used for biometric identification at a distance other than real-time, subject to restrictions when deployed by law enforcement.
  • remote biometric identification system: Post-remote biometric identification system is a specific type of remote biometric identification system.
  • Regulation (EU) 2024/1689: The regulation establishes rules and restrictions for the use of post-remote biometric identification systems.
  • Article 9 of Regulation (EU) 2016/679: The use of post-remote biometric identification systems is subject to Article 9 of GDPR regarding special categories of personal data.
  • Article 10 of Directive (EU) 2016/680: The use of post-remote biometric identification systems is subject to Article 10 of the law enforcement directive regarding biometric data.
  • market surveillance authority: Market surveillance authorities monitor and evaluate the use of post-remote biometric identification systems.
  • national data protection authority: National data protection authorities supervise the use of post-remote biometric identification systems and receive reports on their deployment.
  • Regulation (EU) 2024/1689: The regulation restricts the use of post-remote biometric identification systems by law enforcement to targeted purposes linked to criminal offences or missing persons.

post-remote biometric identification systems ai_system

AI systems for identifying individuals remotely using biometric data, subject to proportionality, legitimacy, and necessity requirements.
  • Article 4 (1) of Directive (EU) 2016/680: Post-remote biometric identification systems must respect principles of lawfulness, fairness, transparency, purpose limitation, accuracy, and storage limitation.
  • law enforcement: Post-remote biometric identification systems are restricted in law enforcement use to prevent indiscriminate surveillance.

preparatory assessment task technical_requirement

A preparatory function performed by AI systems that has low impact on final assessments relevant to high-risk use cases.

prevention of threat to life or safety evaluation_criterion

Legitimate objective for using real-time biometric identification systems to prevent specific, substantial and imminent threats to natural persons or terrorist attacks.

principle of non-refoulement legal_obligation

An international legal principle prohibiting the return of individuals to territories where they face persecution or harm.

privacy and data governance evaluation_criterion

An ethical principle requiring AI systems to comply with privacy and data protection rules while processing high-quality and integrity data.

privacy-preserving techniques technical_requirement

Methods used in AI system development and testing to protect privacy while maintaining data utility.
  • data sets: Data sets should comply with privacy-preserving techniques during AI system development and testing.

procedural safeguards technical_requirement

Requirements including effective judicial remedies and due process that must accompany the exercise of powers to impose administrative fines.
  • administrative fines: The exercise of powers to impose administrative fines is subject to appropriate procedural safeguards.

processing logs documentation

Records of personal data processing activities maintained for the duration of sandbox participation.
  • AI regulatory sandbox: Logs of personal data processing are maintained for the duration of sandbox participation.

product compliance with Union harmonisation legislation legal_obligation

The requirement that products must comply with all applicable Union harmonisation legislation before being placed on the market.
  • high-risk AI systems: High-risk AI systems are subject to compliance requirements with Union harmonisation legislation.

product manufacturer market_actor

An entity responsible for ensuring that AI systems embedded in final products as safety components comply with Regulation requirements and may be considered the provider of high-risk AI systems.
  • Regulation: Product manufacturers must ensure embedded AI systems comply with Regulation requirements when AI is a safety component.
  • high-risk AI systems: Product manufacturers place high-risk AI systems on the market as safety components of products.

product manufacturers market_actor

Entities placing on the market or putting into service an AI system together with their product under their own name or trademark.
  • Article 2: Article 2 applies to product manufacturers placing AI systems on the market.

Professional integrity evaluation_criterion

A criterion requiring notified bodies to maintain the highest degree of professional integrity in their work.

Professional secrecy legal_obligation

An obligation requiring staff of notified bodies to maintain confidentiality of information obtained during their tasks.
  • Notified bodies: Notified bodies are required to observe professional secrecy regarding information obtained during their tasks.

profiling legal_obligation

A legal concept defined across multiple EU regulations and directives that AI systems must consider when assessing high-risk use cases.
  • high-risk AI systems: High-risk AI systems are subject to profiling considerations as defined in multiple EU regulations.

profiling data_category

Profiling as defined in Article 4, point (4), of Regulation (EU) 2016/679.

profiling in criminal proceedings ai_system

AI systems used for profiling in detection, investigation or prosecution of criminal offences.

profiling of natural persons legal_obligation

The practice of automatically processing personal data to evaluate personality traits, characteristics, or past criminal behaviour of individuals or groups, which classifies an AI system as high-risk.
  • AI system: An AI system that performs profiling of natural persons is always considered high-risk.

Prohibited AI practices legal_obligation

Set of obligations prohibiting specific AI practices including subliminal techniques, manipulation, deception, and social scoring systems under Article 5.
  • Article 5: Article 5 establishes the legal obligations regarding prohibited AI practices.
  • AI system: Prohibited AI practices apply to AI systems placed on market, put into service, or used.
  • subliminal techniques: Prohibited AI practices prohibit the deployment of subliminal techniques in AI systems.
  • manipulative or deceptive techniques: Prohibited AI practices prohibit the use of manipulative or deceptive techniques in AI systems.
  • vulnerability exploitation: Prohibited AI practices prohibit exploitation of vulnerabilities of natural persons.

prohibited practices legal_obligation

Practices forbidden under the Regulation regarding AI systems placement and use, subject to annual assessment for list amendments.
  • this Regulation: The regulation lays down prohibited practices that AI systems must not violate.
  • AI systems: Prohibited practices restrict the placement, putting into service, and use of AI systems.
  • This Regulation: The regulation prohibits certain practices and requires annual assessment of the prohibition list.

prohibited systems ai_system

AI systems whose use is prohibited under the regulation, subject to enforcement action when placed on market or put into service in violation.
  • this Regulation: Prohibited systems placed on market or put into service in violation are subject to enforcement action.

prohibition of AI practices legal_obligation

A regulatory requirement prohibiting certain AI practices as defined in Article 5.
  • Article 5: Article 5 establishes the prohibition of certain AI practices.
  • administrative fine: Non-compliance with prohibited AI practices is subject to administrative fines up to EUR 1,500,000.

Prohibition of biometric categorisation legal_obligation

Legal obligation prohibiting biometric categorisation systems that infer protected characteristics based on biometric data.

Prohibition of criminal risk assessment AI legal_obligation

Legal obligation prohibiting the placing on market, putting into service, or use of AI systems for criminal risk assessment based solely on profiling.

Prohibition of emotion inference in workplace and education legal_obligation

Legal obligation prohibiting the placing on market, putting into service, or use of AI systems to infer emotions in workplace and education institutions, except for medical or safety reasons.

Prohibition of facial recognition database scraping legal_obligation

Legal obligation prohibiting the placing on market, putting into service, or use of AI systems that create or expand facial recognition databases through untargeted scraping.

Prohibition of manipulative AI systems legal_obligation

Legal requirement to prohibit the placing on market, putting into service, or use of AI systems that materially distort human behavior with significant harmful effects.
  • AI-enabled manipulative techniques: Legal obligation prohibits the placing on market and use of AI systems with manipulative objectives or effects.
  • Directive (EU) 2019/882: Legal obligation references Directive 2019/882 regarding disability definition and vulnerable persons protection.

Prohibition of manipulative or exploitative AI-enabled practices legal_obligation

AI systems that materially distort behavior and cause significant harm to persons or groups should be prohibited.
  • AI systems: The prohibition applies to AI systems that distort behavior and cause significant harm.
  • Significant harm: The prohibition defines significant harm as a criterion for determining prohibited AI practices.

Prohibition on AI-predicted behavior assessment legal_obligation

Natural persons should never be judged on AI-predicted behavior based solely on profiling, personality traits or characteristics without reasonable suspicion and human assessment.
  • TFEU: The legal obligation regarding AI-predicted behavior assessment is based on principles established in the TFEU.
  • Risk assessments based on profiling: The legal obligation prohibits the use of risk assessment AI systems that judge individuals based solely on profiling and personality traits.
  • Risk analytics for financial fraud detection: Financial fraud detection systems comply with the prohibition as they do not profile individuals or assess personality traits.
  • Risk analytics for narcotics localization: Narcotics localization tools comply with the prohibition as they are not based on individual profiling or personality assessment.

prohibition on biometric categorisation legal_obligation

Legal requirement prohibiting the use of biometric categorisation systems to infer sensitive personal attributes about individuals.

prohibition on emotional detection systems in workplace and education legal_obligation

Mandatory requirement prohibiting the placing on market, putting into service, or use of AI systems for emotional state detection in workplace and education contexts.

Prohibition on untargeted facial recognition database creation legal_obligation

Placing on market, putting into service, or use of AI systems that create or expand facial recognition databases through untargeted scraping of facial images from internet or CCTV footage is prohibited.

prohibitions legal_obligation

Specific prohibitions on certain uses of AI that apply from February 2025 due to unacceptable risks.
  • This Regulation: The regulation contains prohibitions on certain AI uses applicable from February 2025.

proportionality and appropriateness evaluation_criterion

Principles guiding the determination of fine amounts in accordance with the nature, gravity and duration of infringements.

proportionate use requirement technical_requirement

Requirement that biometric identification systems be used in a responsible and proportionate manner with specific safeguards and conditions.
  • Regulation: The Regulation requires that biometric identification systems be used in a responsible and proportionate manner with specific safeguards.

prospective provider market_actor

Entity planning to provide AI systems that must comply with testing and reporting requirements before market placement.
  • serious incident: Prospective providers must report serious incidents and adopt immediate mitigation measures or suspend testing.
  • Union and national liability law: Prospective providers are liable under applicable Union and national liability law for damages caused during testing.

Protocol No 21 treaty

Protocol on the position of the United Kingdom and Ireland in respect of the area of freedom, security and justice, annexed to the TEU and TFEU.
  • Ireland: Protocol No 21 governs the position of Ireland regarding certain EU rules.

Protocol No 22 on the position of Denmark treaty

A protocol annexed to the TEU and TFEU that specifies Denmark's position regarding binding rules, particularly concerning biometric categorization systems and AI systems in police and judicial cooperation.
  • TEU: Protocol No 22 is annexed to the TEU.
  • TFEU: Protocol No 22 is annexed to the TFEU.

provider market_actor

A natural or legal person responsible for developing and placing AI systems on the market or putting them into service, bearing obligations for compliance, documentation, risk management, and conformity assessment.
  • mandatory requirements: Providers must adopt measures to comply with mandatory requirements of the Regulation.
  • risk-management system: Providers must establish and maintain a risk-management system to identify and mitigate risks for high-risk AI systems.
  • Regulation 2024/900: Providers of products containing high-risk AI systems are subject to compliance requirements under Regulation 2024/900.
  • documentation and procedures: Providers must maintain documentation and procedures to demonstrate compliance with applicable requirements.
  • instructions for use: Providers must document and provide instructions for use that inform deployers of known and foreseeable risks.
  • reasonably foreseeable misuse: Providers must identify and address reasonably foreseeable misuse in their risk management processes.
  • Regulation (EU) 2024/1689: The regulation requires that a provider takes responsibility for placing on the market or putting into service a high-risk AI system.
  • high-risk AI system: Providers are responsible for placing high-risk AI systems on the market or putting them into service and ensuring registration compliance.
  • accessibility requirements: Providers must ensure full compliance with accessibility requirements including Directives 2016/2102 and 2019/882.
  • Regulation: The Regulation establishes obligations that apply to providers of AI systems.
  • distributor: A distributor can be considered a provider under Article 25 circumstances, such as putting its name or trademark on a high-risk AI system.
  • importer: An importer can be considered a provider of high-risk AI systems under Article 25 circumstances.
  • deployer: A deployer can be considered a provider of high-risk AI systems under Article 25 circumstances, particularly when modifying AI systems.
  • Regulation: Providers of high-risk AI systems must comply with obligations established in the Regulation.
  • competent authorities: Providers must closely cooperate with competent authorities established under the Regulation.
  • high-risk AI systems: Providers are responsible for placing high-risk AI systems on the market or into service and registering them.
  • Regulation: The provider must fully comply with the obligations set out in the Regulation.
  • third parties: The provider requires third parties to provide necessary information, capabilities, technical access and assistance based on the state of the art.
  • instructions for use: Providers issue instructions for use containing information relevant to impact assessment and risk mitigation.
  • model evaluation: Providers of general-purpose AI models with systemic risks are required to perform model evaluations.
  • adversarial testing: Providers must conduct and document adversarial testing of models prior to market placement.
  • cybersecurity protection: Providers must ensure adequate cybersecurity protection for general-purpose AI models with systemic risks.
  • risk-management policies: Providers must implement risk-management policies including accountability and governance processes.
  • post-market monitoring: Providers must continuously implement post-market monitoring for general-purpose AI models with systemic risks.
  • Article 3: Article 3 defines the role and responsibilities of a provider.
  • provider assessment documentation: Providers must document their assessment that an AI system is not high-risk before market placement.
  • automatically generated logs: Providers must provide access to automatically generated logs upon request by competent authorities.
  • authorised representative: Providers established in third countries must appoint an authorised representative in the Union before making high-risk AI systems available on the market.
  • authorised representative: Authorised representatives are appointed by and act on behalf of providers in regulatory matters.
  • Article 11: The provider is required to draw up technical documentation in accordance with Article 11.
  • Article 49(1): The provider is subject to registration obligations referred to in Article 49(1).
  • importer: An importer can be considered a provider of high-risk AI systems under Article 25 circumstances.
  • Article 16: Providers of high-risk AI systems are subject to obligations defined in Article 16.
  • conformity assessment: Providers must fulfill conformity assessment requirements for high-risk AI systems.
  • documentation: Providers must make available necessary information and documentation for compliance with regulatory obligations.
  • third party supplier: Providers must establish written agreements with third parties specifying necessary information and technical access.
  • deployer: A deployer can be considered a provider of high-risk AI systems under Article 25 circumstances, particularly when modifying AI systems.
  • national competent authorities: Provider must confirm in writing to national competent authorities that another qualified notified body assumes functions during suspension.
  • EU declaration of conformity: The provider draws up and maintains the EU declaration of conformity for high-risk AI systems.
  • EU declaration of conformity: The provider draws up and maintains the EU declaration of conformity.
  • Section 2: By drawing up the EU declaration of conformity, the provider assumes responsibility for compliance with requirements in Section 2.
  • notification requirement: The notification requirement applies to providers of general-purpose AI models that meet systemic risk conditions.
  • Commission: Providers may request reassessment of systemic risk designations from the Commission.
  • Article 53: Article 53 establishes obligations for providers of general-purpose AI models.
  • Regulation 2024/1689: Providers must comply with the obligations established in the regulation.
  • authorised representative: Authorised representatives are appointed by and act on behalf of providers in regulatory matters.
  • free and open-source licence: Providers of models released under free and open-source licences may be exempt from certain obligations unless systemic risks are present.
  • serious incident: Providers must report serious incidents and adopt immediate mitigation measures or suspend testing.
  • Union and national liability law: Providers are liable under applicable Union and national liability law for damages caused during testing.
  • Article 61: Article 61 requires providers to obtain informed consent and provide contact details to testing subjects.
  • EU database: Providers are required to enter data listed in Sections A and B of Annex VIII into the EU database.
  • serious incident: Provider must report serious incidents within specified timeframes.
  • competent authorities: Provider must cooperate with competent authorities during investigations.
  • risk assessment: Provider must perform risk assessment of the serious incident.
  • corrective action: Provider must perform corrective action following a serious incident.
  • AI system: The provider places the AI system on the market or into service and is responsible for its classification.
  • Article 80: Provider is subject to requirements and obligations established in Article 80.
  • Article 99: Provider is subject to fines under Article 99 for non-compliance.
  • corrective action: Provider must ensure corrective action is taken on all concerned AI systems.
  • quality management system: Providers must implement and maintain approved quality management systems for their AI systems.
  • technical documentation: Providers must prepare and submit technical documentation for each AI system, including evidence as required by the notified body.
  • notified body: Providers must inform the notified body of any intended changes to the quality management system or AI system.
  • quality management system: Providers must implement and maintain approved quality management systems for their AI systems.
  • notified body: Providers must inform the notified body of any intended changes to the quality management system or AI system.

provider assessment documentation documentation

Documentation required from providers demonstrating that an AI system referred to in Annex III is not high-risk before market placement.
  • provider: Providers must document their assessment that an AI system is not high-risk before market placement.
  • Article 49(2): Providers documenting non-high-risk assessment are subject to registration obligations in Article 49(2).

Provider obligations legal_obligation

Obligations imposed on providers pursuant to Article 16, subject to administrative fines for non-compliance.
  • Article 16: Provider obligations are established in Article 16.
  • SMEs: SMEs are subject to reduced administrative fines for non-compliance with provider obligations.

provider of the general-purpose AI model market_actor

An entity that develops and provides general-purpose AI models and is subject to documentation and information requests.
  • Commission: The Commission may request providers to provide documentation and additional information for compliance assessment.
  • Articles 53 and 55: Articles 53 and 55 establish documentation requirements for providers of general-purpose AI models.
  • qualified alert: A qualified alert indicates the point of contact of the provider concerned with systemic risk.
  • AI Office: The AI Office may initiate a structured dialogue with the provider before sending a request for information.
  • Regulation: Providers of general-purpose AI models are subject to compliance with the Regulation.
  • Article 101: Article 101 specifies fines for supplying incorrect, incomplete or misleading information.

provider of the system market_actor

Entity responsible for ensuring AI system compliance through corrective action when required by notified bodies.
  • AI system: The provider of the system is responsible for ensuring AI system compliance through corrective action.

Provider or deployer intention technical_requirement

The requirement that it is not necessary for the provider or deployer to have intention to cause significant harm if harm results from manipulative or exploitative AI-enabled practices.

Provider or prospective provider market_actor

An entity responsible for developing AI systems and ensuring compliance with testing provisions and oversight requirements.

provider's obligation to comply legal_obligation

The requirement for providers to demonstrate conformity with regulatory requirements.
  • harmonised standards: Harmonised standards are a means for providers to demonstrate conformity with regulatory requirements.
  • common specifications: Common specifications facilitate provider compliance with regulatory requirements.

providers market_actor

Organizations that develop and place AI systems or general-purpose AI models on the market, responsible for establishing baseline obligations, documentation, post-market monitoring, and incident reporting.
  • detection and disclosure of artificially generated outputs: Providers are placed with obligations to enable detection and disclosure of artificially generated outputs.
  • harmonised standards: Providers can comply with harmonised standards to demonstrate conformity with regulatory requirements.
  • this Regulation: The regulation establishes obligations and requirements that apply to AI system providers.
  • high-risk AI systems: Providers place high-risk AI systems on the market and are responsible for their monitoring.
  • Article 2: Article 2 applies to providers placing AI systems on the market.
  • post-market monitoring system: Providers are required to establish and document a post-market monitoring system proportionate to the nature of AI technologies and risks.
  • market surveillance authorities: Providers must report serious incidents to market surveillance authorities of Member States where incidents occurred.

providers and deployers of AI systems market_actor

Entities established in third countries that provide or deploy AI systems whose output is intended for use in the Union.
  • this Regulation: The regulation applies to providers and deployers of AI systems established in third countries when output is intended for Union use.

providers and prospective providers market_actor

Organizations developing AI systems within regulatory sandboxes who must implement safeguards, cooperate with authorities, and remain liable for damages caused by their systems.
  • AI systems: Providers develop and test AI systems within the regulatory sandbox.
  • personal data: Providers must comply with data protection regulations when using personal data in the sandbox.

providers established in third countries market_actor

AI system providers located outside the Union who wish to make their systems available in the Union market.

providers of general-purpose AI models market_actor

Organizations that develop and place general-purpose AI models on the Union market for use by downstream providers.
  • Union law on copyright and related rights: Providers must comply with Union law on copyright and related rights.
  • Directive (EU) 2019/790: Providers are subject to the requirements of Directive (EU) 2019/790 regarding rights reservation.
  • training data summary: Providers must draw up and make publicly available a sufficiently detailed summary of training data.
  • Article 53(1), point (b): Article 53(1), point (b) requires providers of general-purpose AI models to provide technical documentation.
  • evaluation strategies: Providers must document detailed evaluation strategies including evaluation results and methodologies.
  • adversarial testing: Providers must describe measures for conducting adversarial testing and model adaptations.
  • system architecture: Providers must provide detailed descriptions of system architecture explaining software component integration.

providers of high-risk AI systems market_actor

Organizations that develop and place high-risk AI systems on the market or put them into service, responsible for registration in the EU database and ensuring conformity with regulatory requirements.
  • EU database: Providers of high-risk AI systems must register themselves and information about their systems in the EU database.

providers of intermediary services market_actor

Market actors that embed AI systems or models into their services, subject to obligations under Regulation (EU) 2022/2065.
  • Regulation (EU) 2022/2065: Providers of intermediary services that embed AI systems or models are subject to obligations regulated by Regulation (EU) 2022/2065.

provisional measures legal_obligation

Measures taken by market surveillance authorities to prohibit, restrict, withdraw, or recall non-compliant AI systems.

pseudonymisation technical_requirement

A state-of-the-art security and privacy-preserving measure required for processing special categories of personal data in high-risk AI systems.

public authorities market_actor

Government entities that deploy high-risk AI systems and must ensure compliance with Regulation requirements, including registration in the EU database.

public authorities in a third country market_actor

Authorities from countries outside the Union that may use AI systems in international cooperation frameworks.
  • This Regulation: The regulation applies to public authorities in third countries under specific conditions regarding international cooperation and adequate safeguards.

public authorities of a third country market_actor

Government entities of non-Union countries that may be exempt from regulation when acting within law enforcement and judicial cooperation frameworks.

public international law legal_article

The appropriate legal framework for regulating AI systems in the context of lethal force and military and defence activities.
  • Regulation: Public international law is identified as the more appropriate legal framework for regulating AI systems in military and defence contexts.

public safety and public health legal_obligation

Areas of substantial public interest for which AI systems may be developed using further processed personal data.
  • Article 59: Article 59 applies to AI systems developed for public safety and public health purposes.

publicly accessible space legal_article

A physical space accessible to an undetermined number of natural persons, regardless of ownership or activity type, including shops, transport stations, entertainment venues, and public areas.
  • Regulation: The Regulation provides a detailed definition of what constitutes a publicly accessible space.

publicly accessible space data_category

Any publicly or privately owned physical place accessible to an undetermined number of natural persons, regardless of access conditions or capacity restrictions.

Publicly accessible spaces data_category

Physical spaces that are accessible to the public with the permission of the person having authority over the space, excluding prisons, border control, and online spaces.

putting into service legal_obligation

The supply of an AI system for first use directly to the deployer or for own use in the Union for its intended purpose.

qualified alert legal_obligation

A duly reasoned alert indicating systemic risk concerns related to a general-purpose AI model provider.

quality management requirements technical_requirement

Requirements for quality management systems in notified bodies.
  • notified body: Notified bodies must satisfy quality management requirements.

quality management system technical_requirement

A documented system that providers of high-risk AI systems must establish and maintain to ensure compliance with regulatory requirements, including policies, procedures, and quality assurance measures, with proportionate aspects based on organization size.
  • high-risk AI system: Providers of high-risk AI systems must establish a sound quality management system.
  • this Regulation: The Regulation requires implementation of quality management systems for AI systems.
  • microenterprises: Microenterprises are subject to simplified quality management system requirements under Article 63.
  • high-risk AI systems: Quality management system is required for high-risk AI systems.
  • Commission: The Commission should develop guidelines specifying elements of the quality management system.
  • Directive 2013/36/EU: Directive 2013/36/EU establishes quality management system requirements with limited derogations for credit institutions.
  • Directive 2009/138/EC: Directive 2009/138/EC applies the same quality management system requirements as Directive 2013/36/EU to insurance undertakings.
  • Directive (EU) 2016/97: Directive (EU) 2016/97 applies quality management system requirements to insurance intermediaries.
  • high-risk AI systems: Providers of high-risk AI systems must implement a quality management system.
  • high-risk AI systems: Providers of high-risk AI systems must implement a quality management system.
  • Article 17: Article 17 establishes the requirements for providers to implement a quality management system.
  • high-risk AI system: The quality management system requirement applies to high-risk AI systems.
  • high-risk AI systems: Quality management systems are required to apply to high-risk AI systems.
  • Union financial services law: Union financial services law may contain equivalent quality management requirements that can fulfill Article 17 obligations.
  • Article 63: Article 63 allows simplified compliance with quality management system requirements for microenterprises.
  • Article 17: Quality management systems must satisfy requirements referred to in Article 17.
  • ANNEX VII: ANNEX VII contains conformity assessment procedures based on quality management system assessment.
  • notified body: The notified body assesses whether the quality management system satisfies the requirements referred to in Article 17.
  • provider: Providers must implement and maintain approved quality management systems for their AI systems.
  • Article 17: The quality management system must satisfy the requirements referred to in Article 17.
  • AI system: AI systems are covered by and subject to the quality management system.
  • provider: Providers must implement and maintain approved quality management systems for their AI systems.
  • notified body: The notified body assesses whether the quality management system satisfies the requirements referred to in Article 17.

quality management system approval documentation

Approval issued by notified bodies for quality management systems in accordance with Annex VII requirements.
  • notified body: Notified bodies issue quality management system approvals in accordance with Annex VII requirements.
  • Annex VII: Quality management system approvals are issued in accordance with Annex VII requirements.

quality or size of data set evaluation_criterion

Criterion measured through tokens for evaluating general-purpose AI models with systemic risk.

real world conditions testing technical_requirement

Testing of AI systems in actual operational environments, which may be conducted within the AI regulatory sandbox framework.
  • AI regulatory sandbox: AI regulatory sandboxes may permit testing of AI systems in real world conditions upon agreement.

real-time AI systems ai_system

AI systems that operate near-instantaneously or without significant delay, using 'live' or 'near-live' material such as video footage from cameras or similar devices.
  • Regulation: The Regulation establishes rules for the 'real-time' use of AI systems and prohibits circumvention through minor delays.
  • real-time use: Real-time AI systems must comply with the technical requirement of operating near-instantaneously without significant delay.

real-time biometric identification system ai_system

An AI system used by law enforcement authorities for real-time remote biometric identification of natural persons in publicly accessible spaces.

Real-time remote biometric identification ai_system

AI systems used for real-time remote biometric identification of natural persons in publicly accessible spaces for law enforcement purposes, which may be intrusive to rights and freedoms.
  • Law enforcement: Real-time remote biometric identification systems are used by law enforcement for identification purposes.
  • Technical accuracy: Biometric identification systems must meet technical accuracy requirements to avoid biased and discriminatory results.

real-time remote biometric identification system ai_system

An AI system performing real-time biometric identification in publicly accessible spaces for law enforcement purposes, subject to specific regulatory constraints and authorization requirements.
  • fundamental rights impact assessment: The system must be subject to a fundamental rights impact assessment before authorization.
  • database: The system must be registered in the database as set out in the Regulation.
  • judicial authority: The system's use requires express and specific authorization by a judicial authority.
  • independent administrative authority: The system's use requires express and specific authorization by an independent administrative authority of a Member State.
  • law enforcement authority: Law enforcement authorities use and operate real-time remote biometric identification systems.
  • reference database of persons: The reference database should be appropriate for each use case of the biometric identification system.
  • Regulation: The system is governed by the Regulation which sets out requirements and procedures.
  • remote biometric identification system: Real-time remote biometric identification system is a specific type of remote biometric identification system.
  • EU database: The biometric identification system must be registered in the EU database according to Article 49.
  • judicial authority: Judicial authorities grant binding authorization decisions for biometric system use.
  • independent administrative authority: Independent administrative authorities with binding decision power authorize biometric system use.
  • necessity and proportionality requirement: The system must satisfy necessity and proportionality criteria for law enforcement objectives.
  • law enforcement purposes: The system is applied for law enforcement purposes under specified conditions and limitations.
  • market surveillance authority: Use of the system must be notified to and supervised by the market surveillance authority.
  • national data protection authority: Use of the system must be notified to and supervised by the national data protection authority.
  • Member State: Member States establish national rules for authorization and use of real-time remote biometric identification systems.
  • notification requirement: The notification requirement applies to all uses of real-time remote biometric identification systems in publicly accessible spaces.
  • criminal offences: Specific criminal offences determine the authorized use cases for the biometric identification system.

real-time remote biometric identification systems ai_system

AI systems used in publicly accessible spaces for law enforcement to identify individuals in real-time using biometric data, subject to specific regulatory requirements and authorization.
  • criminal offences list: Real-time remote biometric identification systems apply to the criminal offences listed in the annex.
  • Regulation: The Regulation establishes rules for the deployment and use of real-time remote biometric identification systems.
  • law enforcement authorities: Real-time remote biometric identification systems are used by law enforcement authorities in publicly accessible spaces.
  • identity checks: Real-time remote biometric identification systems are subject to conditions and safeguards for identity checks.
  • this Regulation: The Regulation establishes a specific framework for the use of real-time remote biometric identification systems in law enforcement contexts.
  • biometric data: Real-time remote biometric identification systems process biometric data as part of their operation.
  • competent authorities: Competent authorities using real-time remote biometric identification systems for law enforcement are subject to the regulatory framework.
  • Directive (EU) 2016/680: Real-time remote biometric identification systems must comply with requirements from Directive (EU) 2016/680.
  • law enforcement: Real-time biometric identification systems are used by law enforcement for specific purposes.
  • targeted search for victims: Use of real-time biometric identification systems requires justification through legitimate objectives such as targeted victim search.
  • prevention of threat to life or safety: Use of real-time biometric identification systems requires justification through prevention of threats to life or safety.
  • criminal investigation and prosecution: Use of real-time biometric identification systems requires justification through criminal investigation purposes.
  • Regulation (EU) 2016/679: Use of biometric identification systems is subject to Article 9 of Regulation (EU) 2016/679 for non-law enforcement purposes.
  • Article 27: Biometric identification systems must comply with fundamental rights impact assessment requirements specified in Article 27.
  • Article 49: Biometric identification systems must be registered in the EU database according to Article 49.
  • biometric data: Real-time remote biometric identification systems process biometric data for law enforcement identification purposes.
  • law enforcement: Law enforcement authorities deploy and authorize the use of real-time remote biometric identification systems.
  • fundamental rights impact assessment: Biometric identification systems must be evaluated through a fundamental rights impact assessment before deployment.
  • competent judicial authorities: Use of real-time remote biometric identification systems requires authorization from competent judicial authorities.

real-time use technical_requirement

A requirement that AI systems operate instantaneously or near-instantaneously without significant delay to prevent circumvention of regulatory rules.
  • Regulation: The Regulation establishes requirements for real-time use of remote biometric identification systems to prevent circumvention through minor delays.
  • real-time AI systems: Real-time AI systems must comply with the technical requirement of operating near-instantaneously without significant delay.

real-world conditions testing technical_requirement

Testing of AI systems in real-world conditions subject to specific regulatory requirements and limitations.

real-world testing plan documentation

A required document submitted by providers to competent authorities that details the objectives, methodology, scope, monitoring, and conduct of real-world testing for high-risk AI systems.
  • competent authorities: Prospective providers must submit real-world testing plans to competent market surveillance authorities.
  • high-risk AI systems: Testing of high-risk AI systems must be documented in a real-world testing plan.
  • AI regulatory sandbox: The real-world testing plan describes the methodology and scope for testing within the AI regulatory sandbox.
  • Article 60: Article 60 requires providers to follow a real-world testing plan for high-risk AI systems.
  • The Commission: The Commission specifies detailed elements of the real-world testing plan through implementing acts.
  • market surveillance authority: Real-world testing plans must be submitted to and approved by the market surveillance authority.
  • Annex IX: Real-world testing plans must include information specified in Annex IX.

reasonably foreseeable misuse evaluation_criterion

Uses of AI systems that, while not explicitly covered by intended purpose, may result from predictable human behavior and must be considered in risk management.
  • provider: Providers must identify and address reasonably foreseeable misuse in their risk management processes.

reasonably foreseeable misuse legal_article

The use of an AI system in a way that is not in accordance with its intended purpose, but which may result from reasonably foreseeable human behaviour or interaction with other systems.
  • intended purpose: Reasonably foreseeable misuse is defined in contrast to intended purpose.

recall of an AI system legal_obligation

Any measure aiming to achieve the return to the provider or taking out of service or disabling the use of an AI system made available to deployers.

Recommendation 2003/361/EC directive

EU recommendation defining microenterprises and partner/linked enterprises for regulatory compliance purposes.
  • Article 34: Article 34 references Recommendation 2003/361/EC for definition of micro- and small enterprises.
  • Article 63: Article 63 references Recommendation 2003/361/EC for the definition of microenterprises.

record-keeping technical_requirement

A requirement for maintaining records of AI system operations and decisions as established by the Regulation.
  • Regulation (EU) 2019/1020: The regulation establishes specific requirements and obligations for record-keeping of AI systems.
  • Regulation: The Regulation requires record-keeping as a specific obligation for AI systems.

record-keeping legal_obligation

Systems and procedures required for maintaining documentation and information relevant to high-risk AI systems.
  • high-risk AI system: High-risk AI systems must maintain systems and procedures for record-keeping of all relevant documentation and information.

redress measures technical_requirement

Effective mechanisms provided by Union law to address risks posed by AI systems, excluding claims for damages.
  • Union law: Union law establishes effective measures of redress in relation to risks posed by AI systems.

reference database of persons documentation

A database containing persons' information that should be appropriate for each specific use case in law enforcement situations.

registered business users market_actor

Business users established in the Union who are registered, with a threshold of at least 10,000 for determining reach
  • 2024/1689: The regulation applies to registered business users established in the Union

registered end-users market_actor

End-users who are registered, whose number is tracked as a metric
  • 2024/1689: The regulation tracks and applies to registered end-users

registration requirement legal_obligation

Obligation for providers to register AI systems in the EU database.

regulated financial institutions market_actor

Financial institutions subject to Union financial services law and competent authority supervision.

Regulation regulation

The primary EU regulatory framework governing AI systems, establishing risk-based requirements, prohibitions for manipulative practices, obligations for providers and deployers, conformity assessment procedures, and enforcement mechanisms including administrative fines across the Union.
  • AI systems: The Regulation governs the development, use, enforcement, and market surveillance of AI systems through ethical principles and requirements.
  • transparency: The Regulation requires transparency as a specific obligation for AI systems.
  • technical documentation: The Regulation requires high-risk AI systems to maintain technical documentation.
  • record-keeping: The Regulation requires record-keeping as a specific obligation for AI systems.
  • Regulation (EU) 2016/679: The main Regulation has regard to Regulation (EU) 2016/679 in safeguarding personal data protection.
  • Regulation (EU) 2018/1725: The main Regulation has regard to Regulation (EU) 2018/1725 in safeguarding personal data protection.
  • Directive (EU) 2016/680: The main Regulation has regard to Directive (EU) 2016/680 in safeguarding personal data protection.
  • Directive 2002/58/EC: The main Regulation has regard to Directive 2002/58/EC which protects private life and confidentiality of communications.
  • UNCRC General Comment No 25 (2021): The Regulation references UNCRC General Comment No 25 (2021) regarding children's rights in the digital environment.
  • remote biometric identification system: The Regulation defines the notion and functional characteristics of remote biometric identification systems.
  • biometric data: The Regulation governs the processing and use of biometric data in AI systems.
  • biometric verification system: Biometric verification systems are excluded from certain rules of the Regulation due to their minor impact on fundamental rights.
  • real-time use: The Regulation establishes requirements for real-time use of remote biometric identification systems to prevent circumvention through minor delays.
  • real-time AI systems: The Regulation establishes rules for the 'real-time' use of AI systems and prohibits circumvention through minor delays.
  • emotion recognition system: The Regulation defines emotion recognition systems as AI systems for identifying or inferring emotions based on biometric data.
  • publicly accessible space: The Regulation provides a detailed definition of what constitutes a publicly accessible space.
  • facial expressions: The Regulation classifies facial expressions as a type of biometric data covered under its scope.
  • gestures: The Regulation classifies gestures as a type of biometric data covered under its scope.
  • voice characteristics: The Regulation classifies voice characteristics as a type of biometric data covered under its scope.
  • AI system: The Regulation governs AI systems placed on the market or put into service, with exclusions for military and national security purposes.
  • AI literacy: The Regulation requires AI literacy measures to ensure appropriate compliance and correct enforcement.
  • high-risk AI system: High-risk AI systems are subject to the requirements and restrictions established by the Regulation.
  • Union institutions, bodies, offices and agencies: The Regulation applies to Union institutions when acting as providers or deployers of AI systems.
  • AI system: The Regulation governs AI systems placed on the market or put into service, with exclusions for military and national security purposes.
  • fundamental rights and freedoms: The Regulation requires that international agreements include adequate safeguards for the protection of fundamental rights and freedoms.
  • Member States: Member States must align international agreements with the requirements of this Regulation.
  • Article 4(2) TEU: The exclusion of military and defence AI systems from the Regulation is justified by Article 4(2) TEU.
  • Chapter 2 of Title V TEU: The Regulation considers the specificities of Member States' and Union defence policy covered by Chapter 2 of Title V TEU.
  • public international law: Public international law is identified as the more appropriate legal framework for regulating AI systems in military and defence contexts.
  • AI system: AI systems placed on the market or put into service for civilian or law enforcement purposes must comply with the Regulation's requirements and obligations.
  • AI system: AI systems placed on the market or put into service for civilian or law enforcement purposes must comply with the Regulation's requirements and obligations.
  • military, defence or national security purposes: The Regulation excludes military, defence, and national security purposes from its scope.
  • scientific research and development: AI systems developed solely for scientific research and development are excluded from the Regulation's scope.
  • AI regulatory sandboxes and testing in real world conditions: The Regulation includes provisions on AI regulatory sandboxes and testing in real world conditions.
  • AI regulatory sandboxes: The Regulation includes provisions on AI regulatory sandboxes and testing in real world conditions.
  • high-risk AI systems: The Regulation establishes requirements for the development, assessment, testing, and placement on the market of high-risk AI systems, including explanation rights for affected persons.
  • unacceptable AI practices: The Regulation prohibits certain unacceptable AI practices.
  • 2019 Ethics guidelines for trustworthy AI: The Regulation references the 2019 Ethics guidelines for trustworthy AI as important context for the risk-based approach.
  • guidelines for trustworthy AI: The guidelines are developed without prejudice to the legally binding requirements of the Regulation.
  • Codes of conduct: The Regulation establishes the framework for drafting codes of conduct based on ethical principles.
  • AI systems: The Regulation governs the development, use, enforcement, and market surveillance of AI systems through ethical principles and requirements.
  • AI-enabled manipulative techniques: The Regulation prohibits AI-enabled manipulative techniques that contradict Union values and fundamental rights.
  • Codes of conduct: Codes of conduct are developed under and comply with the Regulation's framework.
  • Directive 2005/29/EC: The AI regulation's prohibitions on manipulative practices are complementary to and work alongside Directive 2005/29/EC provisions.
  • Manipulative or exploitative AI-enabled practices: The regulation establishes prohibitions for manipulative and exploitative AI-enabled practices.
  • Biometric categorisation systems: The regulation prohibits biometric categorisation systems that deduce or infer protected personal characteristics.
  • real-time remote biometric identification systems: The Regulation establishes rules for the deployment and use of real-time remote biometric identification systems.
  • information systems: The Regulation governs the use of information systems by authorities for identity identification purposes.
  • proportionate use requirement: The Regulation requires that biometric identification systems be used in a responsible and proportionate manner with specific safeguards.
  • real-time remote biometric identification system: The system is governed by the Regulation which sets out requirements and procedures.
  • real-time biometric identification system: The Regulation establishes requirements and restrictions for the use of real-time biometric identification systems.
  • Article 16 TFEU: The Regulation's rules on real-time biometric identification are based on Article 16 TFEU.
  • Directive (EU) 2016/680: The Regulation references Directive (EU) 2016/680 as lex specialis regarding biometric data processing and law enforcement AI systems.
  • law enforcement: The Regulation prohibits the use of AI systems for real-time remote biometric identification for law enforcement purposes, subject to certain exceptions.
  • high-risk AI systems: The Regulation establishes requirements for the development, assessment, testing, and placement on the market of high-risk AI systems, including explanation rights for affected persons.
  • material influence on decision-making: The Regulation establishes the criterion of material influence on decision-making to identify exceptions to high-risk classification.
  • high-risk AI systems: The Regulation establishes requirements for the development, assessment, testing, and placement on the market of high-risk AI systems, including explanation rights for affected persons.
  • criminal risk assessment: Criminal risk assessment systems are subject to prohibitions under the Regulation.
  • high-risk AI systems: The Regulation establishes requirements for the development, assessment, testing, and placement on the market of high-risk AI systems, including explanation rights for affected persons.
  • personal data protection: The Regulation does not provide legal ground for personal data processing unless specifically provided for within it.
  • Union harmonisation legislation: The Regulation complements existing Union harmonisation legislation listed in Section B of Annex I applicable to AI systems.
  • risk-management system: The risk-management system must be regularly reviewed and updated in accordance with the requirements of this Regulation.
  • risk management: The Regulation requires high-risk AI systems to implement risk management practices.
  • human oversight: The Regulation requires human oversight mechanisms for high-risk AI systems.
  • robustness, accuracy and cybersecurity: The Regulation requires high-risk AI systems to meet robustness, accuracy, and cybersecurity standards.
  • high-risk AI systems: The Regulation establishes requirements for the development, assessment, testing, and placement on the market of high-risk AI systems, including explanation rights for affected persons.
  • data protection by design and by default: The Regulation requires data protection by design and by default principles throughout the AI system lifecycle.
  • data minimisation: The Regulation requires data minimisation principles when processing personal data in AI systems.
  • technical documentation: Technical documentation is required to facilitate compliance verification with the Regulation.
  • provider: The Regulation establishes obligations that apply to providers of AI systems.
  • distributor: The Regulation applies to distributors who may assume provider obligations under certain conditions.
  • importer: The Regulation applies to importers who may assume provider obligations under certain conditions.
  • deployer: The Regulation applies to deployers who may assume provider obligations under certain conditions.
  • high-risk AI system: The Regulation establishes specific requirements and obligations for high-risk AI systems.
  • New Legislative Framework: The Regulation references Union harmonisation legislation based on the New Legislative Framework.
  • provider: Providers of high-risk AI systems must comply with obligations established in the Regulation.
  • high-risk AI systems: High-risk AI systems are subject to the requirements and conformity obligations established by the Regulation.
  • product manufacturer: Product manufacturers must ensure embedded AI systems comply with Regulation requirements when AI is a safety component.
  • provider: The provider must fully comply with the obligations set out in the Regulation.
  • information requirement: The information requirement is laid down in the Regulation.
  • high-risk AI systems: High-risk AI systems are subject to the requirements and conformity obligations established by the Regulation.
  • general-purpose AI models: The Regulation establishes rules and value chain obligations applicable to providers of general-purpose AI models placed on the market.
  • general-purpose AI models that pose systemic risks: The Regulation establishes specific rules for general-purpose AI models that pose systemic risks.
  • general-purpose AI models: General-purpose AI models are subject to the provisions and obligations of the Regulation once placed on the market.
  • general-purpose AI models with systemic risk: General-purpose AI models with systemic risk should always be subject to relevant obligations under the Regulation.
  • research, development and prototyping activities: The Regulation does not apply to AI models used solely for research, development and prototyping activities before placing on the market.
  • Documentation: The Regulation requires that minimal documentation elements be set out in specific annexes.
  • Free and open-source AI components: Free and open-source AI components are covered and regulated by the Regulation.
  • general-purpose AI models: The Regulation establishes rules and value chain obligations applicable to providers of general-purpose AI models placed on the market.
  • value chain obligations: Value chain obligations are provided in the Regulation.
  • general-purpose AI model with systemic risks: The Regulation establishes a methodology for classifying general-purpose AI models as those with systemic risks.
  • floating point operations: The Regulation requires setting an initial threshold of floating point operations to determine if a model presents systemic risks.
  • general-purpose AI model with systemic risks: The Regulation contains criteria and procedures for classifying and designating general-purpose AI models with systemic risks.
  • general-purpose AI model with systemic risk: General-purpose AI models with systemic risk are subject to enhanced obligations under the Regulation.
  • open-source model release: Open-source model releases may complicate implementation of compliance measures under the Regulation.
  • Codes of practice: Codes of practice serve as central tools for compliance with obligations provided under the Regulation.
  • conformity assessment: The Regulation requires high-risk AI systems to undergo conformity assessment before market placement or putting into service.
  • notified bodies: The regulation establishes the role and requirements for notified bodies in conducting third-party conformity assessments.
  • World Trade Organization Agreement on Technical Barriers to Trade: The regulation's mutual recognition provisions are based on Union commitments under the WTO Agreement.
  • biometric AI systems: The regulation requires third-party conformity assessment for biometric AI systems as an exception.
  • CE marking: The Regulation requires high-risk AI systems to bear CE marking to indicate conformity.
  • Commission: The Commission should explore international instruments in line with the Regulation's requirements.
  • EU database: The EU database is established by the Regulation.
  • transparency obligation: The Regulation establishes transparency obligations for deep fakes.
  • innovative AI systems: Innovative AI systems must ensure compliance with the primary Regulation during sandbox testing.
  • AI regulatory sandbox: AI regulatory sandboxes are established under and governed by the Regulation.
  • SMEs: SMEs and deployers must implement and comply with the AI Regulation.
  • AI-on-demand platform: The AI-on-demand platform contributes to the implementation of the Regulation.
  • European Digital Innovation Hubs: The European Digital Innovation Hubs contribute to the implementation of the Regulation.
  • testing and experimentation facilities: Testing and experimentation facilities contribute to the implementation of the Regulation.
  • Regulations (EU) 2017/745: The Regulation references EU 2017/745 regarding medical devices conformity assessment.
  • Regulations (EU) 2017/746: The Regulation references EU 2017/746 regarding in vitro diagnostic devices conformity assessment.
  • Board: The Board is responsible for advisory tasks, coordinates market surveillance authorities, and oversees the application and implementation of the Regulation.
  • Regulation (EU) 2019/1020: The Regulation references Article 30 and Article 33 of Regulation (EU) 2019/1020.
  • scientific panel: The scientific panel is established to support implementation and enforcement of the Regulation.
  • Member States: The Regulation applies to and must be enforced by Member States.
  • scientific panel: The Regulation requires that a pool of experts constituting a scientific panel provide support for enforcement activities.
  • Union AI testing support structures: The Regulation requires the establishment of Union AI testing support structures to support enforcement and reinforce Member State capacities.
  • notifying authority: Notifying authorities must exercise their powers independently and impartially to ensure application of the Regulation.
  • European Data Protection Supervisor: The European Data Protection Supervisor has the power to impose fines for violations of the Regulation.
  • Commission: The Commission has authority to request compliance measures under the Regulation.
  • market surveillance authority: The Regulation establishes the role of market surveillance authorities to receive complaints about infringements.
  • Article 2: The Regulation contains Article 2 which defines its scope.
  • Member States: The Regulation applies to and must be enforced by Member States.
  • Union harmonisation legislation: The Regulation complements existing Union harmonisation legislation listed in Section B of Annex I applicable to AI systems.
  • Article 20: Article 20 is contained within the Regulation establishing corrective action requirements.
  • Article 21: Article 21 is contained within the Regulation establishing cooperation requirements.
  • Article 26: Article 26 is contained within the Regulation and establishes specific obligations for deployers.
  • Article 27: Article 27 is contained within the Regulation and establishes fundamental rights impact assessment requirements.
  • codes of practice: Codes of practice contribute to the proper application of the Regulation.
  • Providers and prospective providers: Providers must observe specific plans, terms and conditions, and follow guidance from national competent authorities to avoid administrative fines under the Regulation.
  • conformity assessment obligations: The Regulation establishes conformity assessment obligations for AI providers.
  • Article 59: Article 59 is contained within the Regulation governing AI systems.
  • Testing in real world conditions: Testing in real world conditions must comply with provisions under this Regulation.
  • Member States: Member States are required to facilitate tasks entrusted to the AI Office and implement the Regulation.
  • Article 68(1): Article 68(1) is contained within the Regulation and establishes implementing acts for fees.
  • Article 70: Article 70 is contained within the Regulation and establishes national competent authorities.
  • Article 84: Article 84 is contained within the Regulation and establishes Union AI testing support.
  • Union AI testing support: Union AI testing support activities are coordinated with expert support under the Regulation.
  • National competent authorities: National competent authorities provide guidance on implementation of the Regulation.
  • European Data Protection Supervisor: The European Data Protection Supervisor acts as competent authority for Union institutions under the Regulation.
  • provider of the general-purpose AI model: Providers of general-purpose AI models are subject to compliance with the Regulation.
  • Operator: Operators must comply with the regulation; non-compliance results in administrative fines and evaluation of multiple criteria.
  • Article 5: Article 5 is contained within the Regulation and establishes prohibited AI practices.
  • European Data Protection Supervisor: The European Data Protection Supervisor enforces compliance with the Regulation through administrative proceedings.
  • Article 101: Article 101 is contained within the Regulation governing AI models.
  • large-scale IT systems: The Regulation applies to large-scale IT systems established by legal acts listed in Annex X.
  • Directive 2009/22/EC: Regulation repeals the previous directive.
  • Article 112: Article 112 is contained within the Regulation.
  • Article 99(1): The Regulation contains Article 99(1) which establishes administrative fines.
  • AI Office: The Regulation establishes the framework for the AI Office's implementation and enforcement.
  • Member States: Member States apply penalties and enforce the Regulation through their competent authorities.
  • harmonised standards: The Regulation requires the development of harmonised standards and common specifications.
  • administrative fines: The Regulation establishes administrative fines as penalties for infringements.
  • Article 5: The Regulation contains Article 5 which lists prohibited AI practices.
  • Article 50: The Regulation contains Article 50 which establishes transparency requirements.
  • Article 113: The Regulation contains Article 113 specifying entry into force and application.
  • European Commission: The Commission shall submit appropriate proposals to amend the Regulation.

Regulation (EC) No 1223/2009 regulation

EU regulation amended by Regulation (EU) 2017/745.

Regulation (EC) No 178/2002 regulation

EU regulation amended by Regulation (EU) 2017/745.

Regulation (EC) No 1986/2006 regulation

A prior regulation that was repealed by Regulation (EU) 2018/1862.

Regulation (EC) No 1987/2006 regulation

A prior regulation that was repealed by Regulation (EU) 2018/1861.

Regulation (EC) No 2006/2004 regulation

European regulation on unfair commercial practices enacted by the European Parliament and Council.

Regulation (EC) No 216/2008 regulation

EC regulation on common rules in the field of civil aviation that was repealed by Regulation 2024/1689.

Regulation (EC) No 2320/2002 regulation

A regulation on civil aviation security that was repealed by Regulation (EC) No 300/2008.

Regulation (EC) No 300/2008 regulation

An EU regulation of 11 March 2008 on common rules in the field of civil aviation security that governs safety components of products or systems, including high-risk AI systems.

Regulation (EC) No 45/2001 regulation

Previous EU regulation on data protection by Union institutions and bodies, repealed by Regulation (EU) 2018/1725.

Regulation (EC) No 552/2004 regulation

EC regulation on the interoperability of the European Air Traffic Management network that was repealed by Regulation 2024/1689.

Regulation (EC) No 595/2009 regulation

Earlier regulation amended by Regulation (EU) 2018/858.

Regulation (EC) No 661 regulation

Earlier regulation repealed by Regulation (EU) 2019/2144.

Regulation (EC) No 661/2009 regulation

Earlier European regulation that was repealed by subsequent legislative action.

Regulation (EC) No 715/2007 regulation

Earlier regulation amended by Regulation (EU) 2018/858.

Regulation (EC) No 765/2008 regulation

EU regulation enacted on 9 July 2008 establishing requirements for accreditation, market surveillance, and CE marking procedures, amended by Regulation (EU) 2019/1020 and referenced as part of the New Legislative Framework for harmonised rules on AI systems.

Regulation (EC) No 767/2008 regulation

Regulation amended by Regulation (EU) 2017/2226 concerning the Entry/Exit System.

Regulation (EC) No 78/2009 regulation

Earlier European regulation that was repealed by Regulation (EU) 2019/2144.

Regulation (EC) No 79/2009 regulation

Earlier European regulation that was repealed by Regulation (EU) 2019/2144.

Regulation (EC) No 810/2009 regulation

European regulation establishing procedural requirements for migration, asylum, and border control management, including visa issuance and management procedures.

Regulation (EEC) No 339/93 regulation

A prior regulation that was repealed by Regulation (EC) No 765/2008.

Regulation (EU) 2015/166 regulation

A European regulation that was repealed by subsequent legislative action.

Regulation (EU) 2016/1624 regulation

Regulation amended by Regulation (EU) 2018/1240.

Regulation (EU) 2016/399 regulation

EU regulation on border management and information systems, amended by Regulation (EU) 2018/1240.

Regulation (EU) 2016/424 regulation

EU regulation of 9 March 2016 on cableway installations, enacted by the European Parliament and Council.

Regulation (EU) 2016/425 regulation

EU regulation of 9 March 2016 on personal protective equipment, enacted by the European Parliament and Council.

Regulation (EU) 2016/426 regulation

EU regulation of 9 March 2016 on appliances burning gaseous fuels, enacted by the European Parliament and Council.

Regulation (EU) 2016/679 regulation

The General Data Protection Regulation (GDPR) enacted on 27 April 2016 governing the processing of personal data and protection of natural persons, establishing data protection standards and defining biometric data and profiling concepts applicable to AI systems.
  • Regulation: The main Regulation has regard to Regulation (EU) 2016/679 in safeguarding personal data protection.
  • European Parliament and Council: Regulation (EU) 2016/679 was enacted by the European Parliament and Council.
  • Directive 95/46/EC: Regulation (EU) 2016/679 repeals Directive 95/46/EC.
  • biometric data: Biometric data definition is referenced in Article 4, point (14) of this regulation.
  • biometric data: Regulation (EU) 2016/679 references biometric data definitions.
  • Article 9(1): Article 9(1) is contained within Regulation (EU) 2016/679.
  • biometric data: Regulation (EU) 2016/679 prohibits the processing of biometric data for purposes other than law enforcement.
  • Article 4, point (4): Article 4, point (4) is contained within Regulation (EU) 2016/679.
  • high-risk AI systems: High-risk AI systems must comply with profiling definitions and conditions laid down in Regulation (EU) 2016/679 when processing personal data.
  • high-risk AI system: High-risk AI systems must comply with Union data protection law including GDPR.
  • data governance and management practices: Data governance practices must comply with Union data protection law.
  • high-risk AI systems: High-risk AI systems must comply with profiling definitions and conditions laid down in Regulation (EU) 2016/679 when processing personal data.
  • personal data: The GDPR governs the processing and protection of personal data used in the AI regulatory sandbox and throughout the European Union.
  • AI regulatory sandbox: The AI regulatory sandbox operates subject to GDPR requirements and conditions.
  • Regulation /2024/1689/oj: The AI regulation does not affect and operates alongside the GDPR regulation on personal data protection.
  • special categories of personal data: Processing of special categories of personal data must comply with Regulation (EU) 2016/679.
  • personal data: Personal data is defined in Article 4, point (1), of Regulation (EU) 2016/679.
  • non-personal data: Non-personal data is defined in relation to Article 4, point (1), of Regulation (EU) 2016/679.
  • profiling: Profiling is defined in Article 4, point (4), of Regulation (EU) 2016/679.
  • real-time remote biometric identification systems: Use of biometric identification systems is subject to Article 9 of Regulation (EU) 2016/679 for non-law enforcement purposes.
  • Article 9: Article 9 is contained within Regulation (EU) 2016/679.
  • special categories of personal data: Processing of special categories of personal data must comply with Regulation (EU) 2016/679.
  • Article 13: Article 13 information supports compliance with data protection impact assessment requirements.
  • Regulation (EU) 2024/1689: The regulation references and respects the provisions of GDPR regarding biometric data processing.
  • fundamental rights impact assessment: Fundamental rights impact assessment complements data protection impact assessment under this regulation.
  • Article 35: Article 35 is contained within Regulation (EU) 2016/679.
  • information obligation for emotion recognition and biometric categorisation: The information obligation requires compliance with GDPR for personal data processing.
  • data protection supervisory authorities: GDPR establishes data protection supervisory authorities that may oversee high-risk AI systems.
  • market surveillance authorities: Market surveillance authorities are designated based on competent data protection supervisory authorities under this regulation.
  • AI system: AI systems processing personal data must comply with GDPR (Regulation 2016/679).
  • Article 26(8): Article 26(8) references requirements from Regulation (EU) 2016/679.
  • Article 26: Article 26 references Regulation (EU) 2016/679 as applicable requirement.

Regulation (EU) 2016/680 regulation

Union law on the protection of personal data that applies in conjunction with the AI regulation.
  • Regulation /2024/1689/oj: The AI regulation does not affect and operates alongside this regulation on personal data protection.

Regulation (EU) 2016/794 regulation

EU regulation that was amended by Regulation (EU) 2018/1241 for the purpose of establishing ETIAS.

Regulation (EU) 2017/2226 regulation

Regulation establishing the Entry/Exit System (EES) to register entry and exit data of third-country nationals crossing external borders of Member States.

Regulation (EU) 2017/2394 regulation

Regulation amended by the Data Act (Regulation (EU) 2023/2854).

Regulation (EU) 2017/745 regulation

An EU regulation of 5 April 2017 on medical devices that establishes requirements for medical devices and may incorporate high-risk AI systems as safety components.

Regulation (EU) 2017/746 regulation

An EU regulation of 5 April 2017 on in vitro diagnostic medical devices that may incorporate high-risk AI systems as safety components.

Regulation (EU) 2018/1139 regulation

European regulation establishing common rules in the field of civil aviation and the European Union Aviation Safety Agency, enacted on 4 July 2018, amended to incorporate requirements for AI systems as safety components.

Regulation (EU) 2018/1240 regulation

Regulation establishing the European Travel Information and Authorisation System (ETIAS) for third-country nationals.

Regulation (EU) 2018/1241 regulation

Regulation of the European Parliament and Council from 12 September 2018 that amends Regulation (EU) 2016/794 to establish the European Travel Information and Authorisation System (ETIAS).

Regulation (EU) 2018/1724 regulation

Regulation amended by the Data Governance Act (Regulation (EU) 2022/868).

Regulation (EU) 2018/1725 regulation

EU regulation of 23 October 2018 on the protection of personal data by Union institutions, bodies, offices and agencies, establishing data protection requirements and defining biometric data and profiling applicable to EU institutions managing AI systems.
  • Regulation: The main Regulation has regard to Regulation (EU) 2018/1725 in safeguarding personal data protection.
  • European Parliament and Council: Regulation (EU) 2018/1725 was enacted by the European Parliament and Council.
  • Regulation (EC) No 45/2001: Regulation (EU) 2018/1725 repeals Regulation (EC) No 45/2001.
  • Decision No 1247/2002/EC: Regulation (EU) 2018/1725 repeals Decision No 1247/2002/EC.
  • biometric data: Biometric data definition is referenced in Article 3, point (18) of this regulation.
  • biometric data: Regulation (EU) 2018/1725 Article 3, point (18) defines aspects of biometric data.
  • Article 10(1): Article 10(1) is contained within Regulation (EU) 2018/1725.
  • biometric data: Regulation (EU) 2018/1725 prohibits the processing of biometric data for purposes other than law enforcement.
  • Article 3, point (5): Article 3, point (5) is contained within Regulation (EU) 2018/1725.
  • high-risk AI systems: High-risk AI systems must comply with profiling definitions and conditions laid down in Regulation (EU) 2018/1725 when processing personal data.
  • Commission: The Commission's role as data controller complies with Regulation (EU) 2018/1725.
  • personal data: This regulation governs personal data processing by EU institutions and bodies in the context of the AI regulatory sandbox.
  • Regulation /2024/1689/oj: The AI regulation does not affect and operates alongside this regulation on personal data protection.
  • special categories of personal data: Special categories of personal data are defined by reference to Article 10(1) of this regulation.
  • special categories of personal data: Processing of special categories of personal data must comply with Regulation (EU) 2018/1725.
  • information obligation for emotion recognition and biometric categorisation: The information obligation requires compliance with EU regulation on data protection by Union institutions.
  • AI system: AI systems processing personal data must comply with Regulation 2018/1725.

Regulation (EU) 2018/1726 regulation

EU regulation establishing a framework for interoperability between EU information systems, amended by Regulation (EU) 2019/816 and 2019/818.

Regulation (EU) 2018/1860 regulation

EU regulation on the use of the Schengen Information System for the return of illegally staying third-country nationals.

Regulation (EU) 2018/1861 regulation

Regulation of the European Parliament and of the Council on the establishment, operation and use of the Schengen Information System (SIS) in the field of border checks, enacted on 28 November 2018.

Regulation (EU) 2018/1862 regulation

Regulation of the European Parliament and of the Council on the establishment, operation and use of the Schengen Information System (SIS) in the field of police cooperation and judicial cooperation in criminal matters, enacted on 28 November 2018.

Regulation (EU) 2018/858 regulation

European regulation on the approval and market surveillance of motor vehicles and their trailers, enacted on 30 May 2018, amended to incorporate requirements for high-risk AI systems used as safety components.

Regulation (EU) 2019/1020 regulation

EU regulation enacted on 20 June 2019 establishing market surveillance and compliance procedures for products, including enforcement powers and administrative cooperation mechanisms applicable to AI systems, with procedural rights that apply mutatis mutandis to general-purpose AI model providers.
  • Regulation 2024/1689: The regulation references the New Legislative Framework regulation for harmonised rules.
  • New Legislative Framework: Regulation 2019/1020 is part of and defines the New Legislative Framework.
  • data protection: The regulation is complementary to and without prejudice to existing Union law on data protection.
  • consumer protection: The regulation is complementary to and without prejudice to existing Union law on consumer protection.
  • fundamental rights: The regulation is complementary to and without prejudice to existing Union law on fundamental rights.
  • Council Directive 85/374/EEC: Rights and remedies provided by Council Directive 85/374/EEC remain unaffected and fully applicable.
  • transparency: The regulation establishes specific requirements for transparency of AI systems.
  • technical documentation: The regulation establishes specific requirements for technical documentation of AI systems.
  • record-keeping: The regulation establishes specific requirements and obligations for record-keeping of AI systems.
  • European Parliament and of the Council: The regulation was enacted by the European Parliament and Council.
  • Council Decision 93/465/EEC: Regulation (EU) 2019/1020 repeals Council Decision 93/465/EEC.
  • European Parliament and Council: Regulation (EU) 2019/1020 was enacted by the European Parliament and Council.
  • Directive 2004/42/EC: Regulation (EU) 2019/1020 amends Directive 2004/42/EC.
  • Regulation (EC) No 765/2008: Regulation (EU) 2019/1020 amends Regulation (EC) No 765/2008.
  • Regulation (EU) No 305/2011: Regulation (EU) 2019/1020 amends Regulation (EU) No 305/2011.
  • Regulation: The Regulation references Article 30 and Article 33 of Regulation (EU) 2019/1020.
  • standing subgroup for market surveillance: The standing subgroup acts as the administrative cooperation group within the meaning of Article 30 of Regulation (EU) 2019/1020.
  • This Regulation: This regulation applies the market surveillance and compliance system established by Regulation (EU) 2019/1020.
  • market surveillance authorities: Market surveillance authorities exercise powers laid down in Regulation (EU) 2019/1020.
  • market surveillance: Regulation 2019/1020 provides powers to competent authorities to enforce market surveillance requirements.
  • Article 9: Regulation (EU) 2019/1020 contains Article 9 which addresses serious risk AI systems and governs joint activities.
  • AI Office: The AI Office operates as a market surveillance authority under the powers provided by Regulation (EU) 2019/1020.
  • Article 18: Regulation (EU) 2019/1020 contains Article 18 which provides procedural rights.
  • market surveillance authority: Market surveillance authorities carry out activities pursuant to Regulation (EU) 2019/1020.
  • Board: The Board's market surveillance sub-group acts as the ADCO within the meaning of Article 30 of Regulation (EU) 2019/1020.
  • Article 19 of Regulation (EU) 2019/1020: Article 19 is contained within Regulation (EU) 2019/1020.
  • Article 74: Article 74 applies Regulation (EU) 2019/1020 to AI systems covered by this Regulation.
  • AI systems: Regulation (EU) 2019/1020 governs market surveillance and control of AI systems.
  • market surveillance authorities: Joint activities and investigations follow procedures outlined in Article 9 of this regulation.
  • AI Office: The AI Office operates as a market surveillance authority under the powers provided by Regulation (EU) 2019/1020.
  • Article 78: Article 78 references Regulation (EU) 2019/1020 in relation to the exercise of powers by authorities.
  • Article 79: Article 79 references the definition of 'product presenting a risk' from Regulation (EU) 2019/1020.
  • market surveillance authority: Article 18 of Regulation (EU) 2019/1020 applies to measures taken by market surveillance authorities.
  • Article 84: Article 84 references Regulation (EU) 2019/1020 for defining tasks of testing support structures.
  • Regulation (EU) 2024/1689: Regulation (EU) 2024/1689 references Regulation (EU) 2019/1020 regarding market surveillance procedures.
  • Article 94: Article 94 references and applies Article 18 of Regulation (EU) 2019/1020 to general-purpose AI model providers.

Regulation (EU) 2019/2144 regulation

European regulation on type-approval requirements for motor vehicles and their trailers regarding safety and protection, enacted on 27 November 2019, amended to include requirements for AI systems as safety components.

Regulation (EU) 2019/816 regulation

Regulation establishing ECRIS-TCN, a centralised system for identifying Member States holding conviction information on third-country nationals and stateless persons, amended by Regulation (EU) 2019/818.

Regulation (EU) 2019/817 regulation

Regulation of the European Parliament and Council from 20 May 2019 establishing a framework for interoperability between EU information systems in the field of borders and visa.

Regulation (EU) 2019/818 regulation

Regulation establishing a framework for interoperability between EU information systems in police and judicial cooperation, asylum and migration, amended by Regulation (EU) 2024/1358.

Regulation (EU) 2019/881 regulation

EU regulation establishing cybersecurity schemes and certification requirements under which high-risk AI systems can be certified or issued statements of conformity.
  • European Parliament and of the Council: The regulation was enacted by the European Parliament and of the Council on 17 April 2019.
  • ENISA: ENISA has knowledge, expertise, and tasks assigned under Regulation (EU) 2019/881.
  • cybersecurity certificate: Regulation (EU) 2019/881 establishes cybersecurity schemes under which certificates are issued.

Regulation (EU) 2021/1133 regulation

Regulation of the European Parliament and of the Council of 7 July 2021 amending multiple EU regulations regarding the establishment of conditions for accessing EU information systems for the Visa Information System.

Regulation (EU) 2021/1134 regulation

A regulation enacted by the European Parliament and Council on 7 July 2021 that amends multiple EU regulations related to the Visa Information System reform.

Regulation (EU) 2022/2065 regulation

EU regulation (Digital Services Act) enacted on 19 October 2022 that governs the liability and obligations of providers of intermediary services, online platforms, and search engines, establishing a risk-management framework for very large platforms and addressing illegal content, systemic risks, and artificially generated content.

Regulation (EU) 2022/868 regulation

The Data Governance Act enacted on 30 May 2022 establishing safeguards for non-personal data transfers to third countries and amending Regulation (EU) 2018/1724.

Regulation (EU) 2023/2854 regulation

The Data Act enacted on 13 December 2023 establishing harmonised rules on fair access to and use of data, amending Regulation (EU) 2017/2394 and Directive (EU) 2020/1828.

Regulation (EU) 2023/988 regulation

EU regulation on general product safety enacted on 10 May 2023 that serves as a safety net for non-high-risk AI products to ensure they remain safe when placed on the market or put into service.

Regulation (EU) 2024/1315 regulation

An EU regulation that Eurodac is established to effectively apply.

Regulation (EU) 2024/1350 regulation

An EU regulation that Eurodac is established to effectively apply.

Regulation (EU) 2024/1358 regulation

Regulation enacted on 14 May 2024 establishing Eurodac for biometric data comparison and amending regulations concerning law enforcement access to Eurodac data.

REGULATION (EU) 2024/1689 regulation

The Artificial Intelligence Act enacted by the European Parliament and Council on 13 June 2024, laying down harmonised rules on artificial intelligence and establishing requirements for high-risk AI systems, general-purpose AI systems, market surveillance, and transparency obligations while amending multiple prior EU regulations and directives.

Regulation (EU) 2024/900 regulation

EU regulation enacted by the European Parliament and Council on 13 March 2024 addressing transparency and targeting of political advertising and rules related to external interference with voting rights and democratic processes.

Regulation (EU) No 1024/2013 regulation

EU regulation establishing the Single Supervisory Mechanism and prudential supervisory tasks for the European Central Bank, including market surveillance activities.
  • European Central Bank: The European Central Bank operates under the framework established by Regulation (EU) No 1024/2013 for prudential supervisory tasks.
  • Single Supervisory Mechanism: Regulation establishes the Single Supervisory Mechanism for credit institution supervision.

Regulation (EU) No 1025/2012 regulation

EU regulation governing European standardisation, establishing requirements for balanced stakeholder representation and the development and publication of harmonised standards in the Official Journal of the European Union.
  • harmonised standards: Harmonised standards are defined in Regulation (EU) No 1025/2012.
  • European Parliament and of the Council: Regulation (EU) No 1025/2012 was enacted by the European Parliament and Council.
  • Articles 5 and 6 of Regulation (EU) No 1025/2012: Articles 5 and 6 are contained within Regulation (EU) No 1025/2012.
  • harmonised standard: Harmonised standards are defined in Regulation (EU) No 1025/2012.
  • common specification: Common specifications are defined in Regulation (EU) No 1025/2012.
  • Article 40: Article 40 references Regulation (EU) No 1025/2012 regarding standardisation and publication of harmonised standards.
  • European standardisation organisations: European standardisation organisations must provide evidence of best efforts in accordance with Article 24 of Regulation 1025/2012.
  • Article 5: Article 5 is contained within Regulation (EU) No 1025/2012.
  • Article 6: Article 6 is contained within Regulation (EU) No 1025/2012.
  • Article 7: Article 7 is contained within Regulation (EU) No 1025/2012.
  • Article 10(1): Article 10(1) is contained within Regulation (EU) No 1025/2012.
  • harmonised standard: Harmonised standards are assessed by the Commission in accordance with Regulation (EU) No 1025/2012.
  • Commission: The Commission applies the procedure provided in Regulation (EU) No 1025/2012 when addressing shortcomings in harmonised standards.

Regulation (EU) No 1077/2011 regulation

Regulation amended by multiple regulations including Regulation (EU) 2017/2226 and Regulation (EU) 2018/1240.

Regulation (EU) No 1093/2010 regulation

Regulation amended by Directive 2014/17/EU.

Regulation (EU) No 167/2013 regulation

European regulation on the approval and market surveillance of agricultural and forestry vehicles, enacted on 5 February 2013, which is amended by the Artificial Intelligence Act regarding AI systems as safety components.

Regulation (EU) No 168/2013 regulation

European regulation on the approval and market surveillance of two- or three-wheel vehicles and quadricycles, enacted on 15 January 2013, amended by the Artificial Intelligence Act for high-risk AI systems used as safety components.
  • high-risk AI systems: High-risk AI systems that are safety components fall within the scope of this regulation.
  • European Parliament and of the Council: Regulation (EU) No 168/2013 was enacted by the European Parliament and of the Council.
  • Article 104: Article 104 amends Regulation (EU) No 168/2013 by adding requirements to Article 22(5).

Regulation (EU) No 182/2011 regulation

EU regulation enacted on 16 February 2011 laying down rules and principles concerning mechanisms for Member State control of the Commission's exercise of implementing powers.
  • Commission: The Commission's implementing powers are exercised in accordance with Regulation (EU) No 182/2011.
  • European Parliament and of the Council: Regulation (EU) No 182/2011 was enacted by the European Parliament and of the Council.
  • Article 98: Article 98 references Regulation (EU) No 182/2011 for committee procedures.

Regulation (EU) No 305/2011 regulation

Regulation amended by Regulation (EU) 2019/1020.

Regulation (EU) No 515/2014 regulation

Regulation amended by Regulation (EU) 2018/1240.

Regulation (EU) No 575/2013 regulation

EU regulation on prudential requirements for credit institutions and investment firms, enacted by the European Parliament and Council on 26 June 2013.

Regulation (EU) No 603/2013 regulation

Regulation repealed by Regulation (EU) 2024/1358 concerning Eurodac data.

Regulation /2024/1689/oj regulation

A Union regulation that establishes rules for AI systems and models, with specific scope limitations and exemptions outlined in articles 6-12.
  • AI systems: The regulation applies to AI systems with specific exemptions for scientific research, pre-market development, and personal non-professional use.
  • AI models: The regulation applies to AI models with specific exemptions for scientific research, pre-market development, and certain licensing contexts.
  • Regulation (EU) 2016/679: The AI regulation does not affect and operates alongside the GDPR regulation on personal data protection.
  • Regulation (EU) 2018/1725: The AI regulation does not affect and operates alongside this regulation on personal data protection.
  • Directive 2002/58/EC: The AI regulation does not affect and operates alongside this directive on privacy and confidentiality.
  • Regulation (EU) 2016/680: The AI regulation does not affect and operates alongside this regulation on personal data protection.
  • Article 10(5): The regulation contains Article 10(5) which provides exceptions to the non-applicability of data protection regulations.
  • Article 59: The regulation contains Article 59 which provides exceptions to the non-applicability of data protection regulations.
  • Article 5: The regulation contains Article 5 which defines certain AI systems subject to the regulation.
  • Article 50: The regulation contains Article 50 which defines certain AI systems subject to the regulation.
  • personal data: The regulation governs the processing of personal data in connection with its rights and obligations.
  • high-risk AI systems: The regulation applies to high-risk AI systems even when released under free and open-source licenses.
  • free and open-source licences: The regulation exempts AI systems released under free and open-source licenses unless they are high-risk or fall under specific articles.

Regulation 2014/53/EU regulation

EU regulation of the European Parliament and Council referenced in relation to aircraft design and market placement.

Regulation 2018/30/EU regulation

EU regulation referenced as being amended by Regulation 2024/1689.

Regulation 2024/1689 regulation

The EU regulation enacted on 13 June 2024 and published in the Official Journal on 12 July 2024 that establishes uniform obligations for AI system operators, requirements for high-risk and general-purpose AI systems, market surveillance procedures, and conformity assessment requirements with staggered application dates beginning 2 February 2025.
  • Article 114 TFEU: The regulation is based on Article 114 TFEU for establishing uniform obligations on AI systems within the internal market.
  • Article 16 TFEU: Regulation 2024/1689 is adopted on the basis of Article 16 TFEU for personal data protection rules concerning AI systems in law enforcement.
  • European Data Protection Board: The European Data Protection Board was consulted regarding the specific rules on personal data protection in the regulation.
  • remote biometric identification for law enforcement: The regulation contains specific rules restricting the use of AI systems for remote biometric identification in law enforcement.
  • risk assessments of natural persons for law enforcement: The regulation contains specific rules restricting the use of AI systems for risk assessments of natural persons in law enforcement.
  • biometric categorisation for law enforcement: The regulation contains specific rules restricting the use of AI systems for biometric categorisation in law enforcement.
  • personal data protection: The regulation establishes obligations for protecting individuals' personal data regarding AI system use in law enforcement.
  • European Council: The regulation is based on conclusions from the European Council regarding human-centric AI approach.
  • European Parliament: The regulation incorporates ethical principles protection as requested by the European Parliament.
  • high-risk AI systems: The regulation establishes requirements, classifications, and specific responsibilities for the use and deployment of high-risk AI systems.
  • ethical principles: The regulation ensures protection of ethical principles.
  • Regulation (EC) No 765/2008: The regulation references the New Legislative Framework regulation for harmonised rules.
  • Decision No 768/2008/EC: The regulation references the New Legislative Framework decision for harmonised rules.
  • Regulation (EU) 2019/1020: The regulation references the New Legislative Framework regulation for harmonised rules.
  • AI literacy: AI literacy requirements are established within Regulation 2024/1689.
  • Fundamental rights protection: The regulation requires protection of fundamental rights, health and safety in AI systems.
  • Remote biometric identification systems: The regulation establishes rules for the use of remote biometric identification systems.
  • Law enforcement: The regulation applies to the use of remote biometric identification systems by law enforcement authorities.
  • Remote biometric identification systems: The regulation prohibits the use of remote biometric identification systems for law enforcement except in exhaustively listed and narrowly defined situations.
  • Search for crime victims: The regulation permits remote biometric identification for searching crime victims including missing persons.
  • Threats to life or physical safety: The regulation permits remote biometric identification for addressing threats to life, physical safety, or terrorist attacks.
  • Criminal offence identification: The regulation permits remote biometric identification for localization or identification of perpetrators or suspects of listed criminal offences.
  • Denmark: Denmark is not bound by specific provisions of Regulation 2024/1689 as outlined in Protocol No 22.
  • Ireland: Ireland is not bound by rules governing judicial cooperation in criminal matters and police cooperation under certain conditions.
  • high-risk AI systems: The regulation establishes requirements, classifications, and specific responsibilities for the use and deployment of high-risk AI systems.
  • deployers: The regulation sets specific responsibilities and obligations that deployers must fulfill when using high-risk AI systems.
  • Directive 2002/14/EC: The regulation acknowledges and respects existing worker information and consultation obligations under the directive.
  • worker information and consultation: The regulation requires information of workers and their representatives on planned deployment of high-risk AI systems at the workplace.
  • fundamental rights impact assessment: The regulation requires deployers of high-risk AI systems to conduct fundamental rights impact assessments.
  • remote biometric identification: The regulation prohibits and restricts real-time remote biometric identification with strict exceptions.
  • general-purpose AI model: The regulation establishes requirements and transparency measures for general-purpose AI models.
  • Commission: The Commission is empowered to amend annexes to the Regulation through delegated acts.
  • notification requirement: The regulation establishes the obligation for providers to notify the AI Office of systemic risk classification.
  • open-source model release: Open-source model releases are subject to special consideration under the regulation due to compliance implementation challenges.
  • Marking in machine readable format: The regulation requires AI system providers to embed technical solutions for machine-readable marking.
  • Content origin detection: The regulation requires detection capabilities to identify AI-generated or manipulated content.
  • AI Office: Regulation 2024/1689 establishes the AI Office as the institution responsible for monitoring compliance.
  • testing in real-world conditions: Testing in real-world conditions must comply with requirements of the regulation.
  • Article 57: Article 57 is contained within the regulation.
  • Article 60: Article 60 is contained within the regulation.
  • high-risk AI system: High-risk AI systems are subject to the conformity and market placement requirements of the regulation.
  • Article 25: Article 25 is contained within Regulation 2024/1689.
  • Article 25: Article 25 references Article 6 and Article 16 of the same regulation.
  • Notified bodies: The regulation establishes requirements and obligations for notified bodies.
  • High-risk AI system: The regulation governs the conformity assessment of high-risk AI systems.
  • Article 43: Article 43 is part of the AI Regulation establishing conformity assessment procedures.
  • general-purpose AI models: The regulation establishes obligations that apply to providers of general-purpose AI models.
  • Commission: The regulation establishes competences and powers for the Commission to exercise oversight.
  • provider: Providers must comply with the obligations established in the regulation.
  • Article 55: Article 55 is part of Regulation 2024/1689 and establishes obligations for providers of general-purpose AI models with systemic risk.
  • Article 17: Regulation 2024/1689 contains Article 17 which establishes quality management system requirements.
  • Article 63: The Regulation contains Article 63 providing derogations for specific operators.
  • Article 64: The Regulation contains Article 64 establishing the AI Office.
  • Article 65: The Regulation contains Article 65 on the European Artificial Intelligence Board.
  • AI Office: Regulation 2024/1689 establishes the AI Office as the institution responsible for monitoring compliance.
  • European Artificial Intelligence Board: The Regulation establishes the European Artificial Intelligence Board through Article 65.
  • Annex I: The regulation references Union harmonisation legislation listed in Annex I.
  • Chapter III, Section 2: Regulation 2024/1689 contains Chapter III, Section 2 which sets out essential requirements and technical specifications for AI systems.
  • Article 40: The regulation contains Article 40 regarding harmonised standards or common specifications.
  • Article 41: The regulation contains Article 41 regarding harmonised standards or common specifications.
  • Commission: The Commission is responsible for promoting AI literacy and implementing aspects of the regulation.
  • market operators: Market operators must develop common criteria and shared understanding of concepts provided in the regulation.
  • competent authorities: Competent authorities are responsible for implementing and enforcing the regulation.
  • scientific panel of independent experts: The scientific panel supports enforcement activities under the regulation.
  • Article 68: Article 68 is contained within Regulation 2024/1689.
  • Member States: The regulation applies to and establishes obligations for Member States regarding competent authorities.
  • national competent authorities: The regulation governs the tasks and requirements for national competent authorities.
  • Article 78: The regulation contains Article 78 which establishes confidentiality obligations.
  • Article 58: Regulation 2024/1689 contains Article 58 on AI regulatory sandboxes.
  • Article 60: Regulation 2024/1689 contains Article 60 on testing conditions.
  • Article 61: Regulation 2024/1689 contains Article 61 on additional testing conditions.
  • Article 89: Article 89 is contained within Regulation 2024/1689.
  • Article 90: Article 90 is contained within Regulation 2024/1689.
  • Monitoring actions: The Regulation requires the AI Office to take monitoring actions to ensure compliance.
  • AI systems: The regulation governs the development and deployment of AI systems.
  • European Parliament: Regulation 2024/1689 was enacted by the European Parliament.
  • Council: Regulation 2024/1689 was enacted by the Council.
  • Article 113: Regulation 2024/1689 contains Article 113 which establishes its entry into force and application dates.
  • European Parliament and of the Council: Regulation 2024/1689 was enacted by the European Parliament and Council.
  • Regulation (EC) No 552/2004: Regulation 2024/1689 repeals Regulation (EC) No 552/2004.
  • Regulation (EC) No 216/2008: Regulation 2024/1689 repeals Regulation (EC) No 216/2008.
  • Council Regulation (EEC) No 3922/91: Regulation 2024/1689 repeals Council Regulation (EEC) No 3922/91.
  • unmanned aircraft: Regulation 2024/1689 applies to the design, production and placing on the market of unmanned aircraft.
  • Article 2(1): Regulation 2024/1689 contains Article 2(1) which defines the scope of unmanned aircraft covered.
  • Article 5(1): Regulation 2024/1689 contains Article 5(1) which references criminal offences in Annex II.
  • Article 11(1): Regulation 2024/1689 contains Article 11(1) which establishes documentation requirements.
  • AI systems for migration and border control: Regulation 2024/1689 governs AI systems used for migration, asylum and border control management.
  • AI systems for judicial assistance: Regulation 2024/1689 governs AI systems intended for judicial assistance and alternative dispute resolution.
  • AI systems for election influence: Regulation 2024/1689 governs AI systems intended to influence election outcomes or voting behaviour.
  • EU declaration of conformity: The EU declaration of conformity certifies conformity with Regulation 2024/1689.
  • Article 47: Regulation 2024/1689 contains Article 47 which establishes EU declaration of conformity requirements.
  • Article 72: Regulation 2024/1689 contains Article 72 addressing post-market monitoring requirements.
  • AI system: AI systems must be in conformity with Regulation 2024/1689.
  • EU declaration of conformity: EU declaration of conformity references compliance with Regulation 2024/1689.
  • Article 53(1), point (b): Regulation 2024/1689 contains Article 53(1), point (b) regarding transparency requirements.

Regulation 2024/900 regulation

A Union regulation establishing requirements for high-risk AI systems and their compliance with harmonised legislation.
  • high-risk AI system: Regulation 2024/900 establishes requirements that apply to high-risk AI systems.
  • Union harmonised legislation: Regulation 2024/900 is applied simultaneously and complementarily with Union harmonised legislation.
  • provider: Providers of products containing high-risk AI systems are subject to compliance requirements under Regulation 2024/900.

Regulation on horizontal cybersecurity requirements for products with digital elements regulation

A regulation of the European Parliament and Council establishing essential and horizontal cybersecurity requirements applicable to products with digital elements.
  • cybersecurity requirements: The regulation establishes essential cybersecurity requirements for products with digital elements.
  • high-risk AI systems: High-risk AI systems can demonstrate compliance with cybersecurity requirements by fulfilling the essential requirements of the horizontal cybersecurity regulation.
  • Regulation (EU) 2024/1689: The regulation applies conformity assessment provisions from the horizontal cybersecurity regulation for essential cybersecurity requirements.

Regulations (EU) 2017/745 regulation

Union harmonisation legislation governing medical devices with provisions for third-party conformity assessment of medium-risk and high-risk products.
  • This Regulation: This Regulation references EU 2017/745 regarding medical device classification and conformity assessment.
  • Regulation: The Regulation references EU 2017/745 regarding medical devices conformity assessment.

Regulations (EU) 2017/746 regulation

Union harmonisation legislation governing in vitro diagnostic medical devices with provisions for third-party conformity assessment of medium-risk and high-risk products.
  • This Regulation: This Regulation references EU 2017/746 regarding in vitro diagnostic medical device classification and conformity assessment.
  • Regulation: The Regulation references EU 2017/746 regarding in vitro diagnostic devices conformity assessment.

regulatory learning evaluation_criterion

The process of authorities and undertakings gaining evidence-based understanding of AI opportunities, risks, and impacts to inform future legal framework adaptations.
  • AI regulatory sandbox: AI regulatory sandboxes aim to facilitate regulatory learning for authorities and undertakings through evidence-based experimentation.

remote biometric identification ai_system

AI system for identifying individuals through biometric data at a distance, subject to strict conditions and prohibitions.
  • Regulation 2024/1689: The regulation prohibits and restricts real-time remote biometric identification with strict exceptions.

remote biometric identification for law enforcement ai_system

An AI system used by law enforcement authorities for identifying individuals through biometric data.
  • Regulation 2024/1689: The regulation contains specific rules restricting the use of AI systems for remote biometric identification in law enforcement.

remote biometric identification system ai_system

An AI system intended for the identification of natural persons without their active involvement, typically at a distance, through comparison of biometric data with reference databases, classified as high-risk due to risks of bias and discrimination.
  • Regulation: The Regulation defines the notion and functional characteristics of remote biometric identification systems.
  • biometric data: Remote biometric identification systems identify persons through comparison of biometric data from reference databases.
  • data deletion obligation: Remote biometric identification system output cannot solely form the basis for adverse legal decisions.
  • high-risk AI system: Remote biometric identification systems are classified as high-risk due to risks of bias and discriminatory effects.
  • real-time remote biometric identification system: Real-time remote biometric identification system is a specific type of remote biometric identification system.
  • post-remote biometric identification system: Post-remote biometric identification system is a specific type of remote biometric identification system.

Remote biometric identification systems ai_system

High-risk AI systems designed to identify natural persons through biometric data at a distance in real-time, subject to authorization and oversight requirements.
  • Bias and discriminatory effects: Remote biometric identification systems can produce biased results and discriminatory effects.
  • Regulation 2024/1689: The regulation establishes rules for the use of remote biometric identification systems.
  • Regulation 2024/1689: The regulation prohibits the use of remote biometric identification systems for law enforcement except in exhaustively listed and narrowly defined situations.
  • high-risk classification: Remote biometric identification systems are classified as high-risk due to risks of biased results and discriminatory effects.
  • ANNEX III: Remote biometric identification systems are classified as high-risk AI systems in ANNEX III.

reporting and documentation processes technical_requirement

Processes required for improving AI systems' resource performance and energy efficiency.

research, development and prototyping activities legal_obligation

Activities for which AI models may be used before placing on the market without triggering regulatory obligations.
  • Regulation: The Regulation does not apply to AI models used solely for research, development and prototyping activities before placing on the market.

residual risk evaluation_criterion

The remaining risk associated with each hazard and the overall risk of high-risk AI systems after mitigation measures are applied.
  • high-risk AI system: The residual risk of high-risk AI systems must be judged as acceptable.

Resilience against errors, faults, or inconsistencies technical_requirement

Requirement for high-risk AI systems to be resilient regarding errors, faults, or inconsistencies that may occur within the system or its operating environment.
  • High-risk AI systems: High-risk AI systems shall be as resilient as possible regarding errors, faults, or inconsistencies.

right not to be discriminated against legal_obligation

Fundamental right that may be violated by AI systems that perpetuate historical patterns of discrimination based on gender, age, disability, race, ethnicity, or sexual orientation.
  • high-risk AI systems: High-risk AI systems may violate the right not to be discriminated against and perpetuate historical discrimination patterns.
  • AI systems in employment and worker management: Employment AI systems may perpetuate historical patterns of discrimination against protected groups throughout recruitment and evaluation processes.

right of defence and presumption of innocence legal_obligation

Procedural fundamental rights that must be protected in law enforcement contexts involving AI systems.
  • AI systems in law enforcement: Inadequately designed AI systems may hamper the exercise of the right of defence and presumption of innocence.

right to data protection and privacy legal_obligation

Fundamental right that may be undermined by AI systems used to monitor performance and behavior of workers.

right to education and training legal_obligation

Fundamental right that may be violated by improperly designed and used AI systems in educational assessment.
  • high-risk AI systems: High-risk AI systems in education may violate the right to education and training when improperly designed and used.

right to effective remedy and fair trial legal_obligation

Procedural fundamental right that could be hampered by non-transparent AI systems in law enforcement.

right to explanation of individual decision-making legal_obligation

Obligation for deployers to provide clear and meaningful explanations of the role of high-risk AI systems in decision-making procedures.
  • high-risk AI system: The right to explanation applies to decisions made using high-risk AI systems.

right to international protection legal_obligation

A legal right ensuring safe and effective legal avenues into Union territory for those seeking international protection.

right to obtain an explanation legal_obligation

An obligation requiring clear and meaningful explanations for high-risk AI system decisions that produce legal effects or significantly affect persons' health, safety, or fundamental rights.
  • Regulation (EU) 2024/1689: Regulation (EU) 2024/1689 requires clear and meaningful explanations for high-risk AI decisions.

right to privacy legal_obligation

Fundamental right to privacy recognized in Union law.

rightsholders' authorization requirement legal_obligation

A requirement that providers of general-purpose AI models must obtain authorization from rightsholders to carry out text and data mining over works where rights to opt out have been expressly reserved.
  • large generative AI models: Providers of general-purpose AI models must obtain authorization from rightsholders for text and data mining when rights to opt out have been expressly reserved.

risk evaluation_criterion

Defined as the combination of the probability of an occurrence of harm and the severity of that harm.
  • Article 3: Article 3 defines risk as the combination of probability and severity of harm.

Risk analytics for financial fraud detection ai_system

AI systems using risk analytics to assess likelihood of financial fraud by undertakings based on suspicious transactions, not based on individual profiling.

Risk analytics for narcotics localization ai_system

Risk analytic tools used by customs authorities to predict likelihood of localization of narcotics or illicit goods based on known trafficking routes.

risk assessment technical_requirement

Required evaluation of the incident and its relationship to the AI system concerned.
  • provider: Provider must perform risk assessment of the serious incident.

risk assessment of criminal victimization ai_system

AI systems designed to assess the risk of a natural person becoming a victim of criminal offences.

Risk assessments based on profiling ai_system

AI systems that assess likelihood of offending or predict criminal offence based solely on profiling individuals or assessing personality traits and characteristics.

risk assessments of natural persons for law enforcement ai_system

An AI system used by law enforcement to assess risks associated with natural persons.
  • Regulation 2024/1689: The regulation contains specific rules restricting the use of AI systems for risk assessments of natural persons in law enforcement.

risk level evaluation methodology technical_requirement

An objective and participative methodology for evaluating AI system risk levels based on specified criteria.
  • AI Office: The AI Office shall develop an objective and participative methodology for evaluation of risk levels.
  • Annex III: The methodology guides evaluation for inclusion of systems in Annex III.
  • Article 5: The methodology guides evaluation for the list of prohibited practices in Article 5.
  • Article 50: The methodology guides evaluation for AI systems requiring additional transparency measures.

risk management legal_obligation

A requirement for high-risk AI system providers to implement processes and measures to identify, assess, and mitigate risks to health, safety, and fundamental rights throughout the system lifecycle.
  • Regulation: The Regulation requires high-risk AI systems to implement risk management practices.
  • Directive 2013/36/EU: Directive 2013/36/EU contains obligations regarding risk management for credit institutions.

risk management measures legal_obligation

Appropriate and targeted measures designed to address, eliminate, or reduce risks identified in high-risk AI systems.
  • high-risk AI system: Risk management measures are designed to address risks identified in high-risk AI systems.
  • Article 13: Risk management measures require provision of information as specified in Article 13.
  • deployer: Risk management measures require provision of information and training to deployers.

risk management system technical_requirement

A continuous iterative process required for high-risk AI systems to identify, assess, and mitigate risks to health, safety, fundamental rights, and discrimination throughout their entire lifecycle.
  • Article 8: Article 8 requires that the risk management system referred to in Article 9 be taken into account when ensuring compliance.
  • high-risk AI system: High-risk AI systems are required to implement a risk management system as referenced in Article 9.
  • Article 9: The risk management system requirement is established in Article 9.
  • Article 9: The risk management system must be implemented in accordance with Article 9.
  • Article 9: The risk management system must comply with requirements established in Article 9.

risk mitigation measures technical_requirement

Specific actions and safeguards implemented by providers to reduce or eliminate identified risks to health, safety, and fundamental rights.

Risk prevention and minimization legal_obligation

Obligation for human oversight to prevent or minimize risks to health, safety, or fundamental rights that may emerge from high-risk AI system use.
  • Human oversight: Human oversight measures aim to prevent or minimize risks to health, safety, and fundamental rights.

risk prevention measures technical_requirement

Effective measures established by Union law to prevent or substantially minimize risks posed by AI systems.
  • Union law: Union law establishes effective measures to prevent or substantially minimize risks from AI systems.

risk-management framework technical_requirement

A framework provided for in Regulation (EU) 2022/2065 that applies to AI systems embedded in very large online platforms or search engines.
  • very large online platforms: Very large online platforms are obliged to assess potential systemic risks and take appropriate mitigating measures within the risk-management framework.
  • very large online search engines: Very large online search engines are obliged to assess potential systemic risks and take appropriate mitigating measures within the risk-management framework.

risk-management policies legal_obligation

Policies including accountability and governance processes that providers of systemic risk models must implement to mitigate risks.
  • provider: Providers must implement risk-management policies including accountability and governance processes.

risk-management system technical_requirement

A continuous, iterative process required throughout the lifecycle of high-risk AI systems to identify, assess, and mitigate risks to health, safety, and fundamental rights based on intended purpose and context of use.
  • provider: Providers must establish and maintain a risk-management system to identify and mitigate risks for high-risk AI systems.
  • high-risk AI system: High-risk AI systems require implementation of a risk-management system throughout their lifecycle.
  • testing and reporting processes: The risk-management system includes testing and reporting processes as part of compliance documentation.
  • Regulation: The risk-management system must be regularly reviewed and updated in accordance with the requirements of this Regulation.
  • high-risk AI systems: The risk-management system applies specifically to high-risk AI systems to ensure their safety and compliance.
  • risk mitigation measures: The risk-management system incorporates and implements appropriate risk mitigation measures.
  • technical documentation: Technical documentation must include documentation on the relevant risk-management system.

robustness evaluation_criterion

Performance metric that high-risk AI systems should meet in accordance with their intended purpose and state of the art.
  • High-risk AI systems: High-risk AI systems are required to meet an appropriate level of robustness.
  • AI regulatory sandbox: AI systems in the sandbox are assessed on robustness as a relevant dimension.

robustness and accuracy technical_requirement

Technical requirements ensuring that AI systems operate reliably and produce correct outputs without erroneous or biased decisions.
  • high-risk AI systems: High-risk AI systems are required to meet robustness and accuracy standards.

robustness and accuracy requirement legal_obligation

Technical requirements for high-risk AI systems to maintain robustness and accuracy standards.
  • high-risk AI systems: High-risk AI systems are subject to robustness and accuracy requirements.

robustness, accuracy and cybersecurity technical_requirement

Technical requirements for high-risk AI systems to ensure they perform reliably, accurately, and securely against threats.
  • Regulation: The Regulation requires high-risk AI systems to meet robustness, accuracy, and cybersecurity standards.

safety component technical_requirement

A component of a product or AI system that fulfills a safety function, or whose failure or malfunctioning endangers health, safety of persons, or property.
  • AI system: Safety components are components of AI systems that fulfill safety functions.
  • ai_system: Safety components are part of AI systems and products.

safety components technical_requirement

Components within AI systems that directly protect physical integrity of critical infrastructure or health and safety of persons and property, subject to specific requirements under Regulation (EU) 2024/1689.
  • critical infrastructure: Safety components are used to directly protect the physical integrity of critical infrastructure.
  • cybersecurity components: Components intended solely for cybersecurity purposes should not qualify as safety components.
  • artificial intelligence systems: Artificial intelligence systems can be classified as safety components under the Artificial Intelligence Act.

sandbox plan documentation

An agreement between AI providers and competent authorities describing objectives, conditions, timeframe, methodology, and requirements for testing and validation activities.
  • AI regulatory sandbox: The sandbox plan describes the objectives, conditions, timeframe, methodology and requirements for activities within the AI regulatory sandbox.
  • competent authorities: Competent authorities agree sandbox plans with AI providers specifying conditions for testing.
  • implementing acts: Implementing acts require sandbox plans as part of participation procedures in AI regulatory sandboxes.

Schengen Information System institution

Large-scale IT system in the area of Freedom, Security and Justice governed by Regulation (EU) 2018/1860.
  • ANNEX X: ANNEX X lists the Schengen Information System as a large-scale IT system.
  • Regulation (EU) 2018/1860: Schengen Information System is governed by Regulation (EU) 2018/1860.

Schengen Information System ai_system

An information system established for border checks and police cooperation in the field of criminal matters.
  • Regulation (EU) 2018/1861: Regulation (EU) 2018/1861 establishes the operation and use of the Schengen Information System in the field of border checks.
  • Regulation (EU) 2018/1862: Regulation (EU) 2018/1862 establishes the operation and use of the Schengen Information System in the field of police cooperation and judicial cooperation.

scientific panel institution

An independent expert panel established to support implementation and enforcement of the AI Regulation, providing qualified alerts to the AI Office regarding general-purpose AI models posing systemic risks and advising on classification and market surveillance.
  • AI Office: The scientific panel provides qualified alerts to the AI Office regarding systemic risks in AI models and supports its monitoring activities.
  • Regulation: The scientific panel is established to support implementation and enforcement of the Regulation.
  • Member States: Member States can request support from the scientific panel for enforcement activities.
  • Regulation: The Regulation requires that a pool of experts constituting a scientific panel provide support for enforcement activities.
  • Commission: The scientific panel can request the Commission to require documentation or information from providers.
  • Commission: The scientific panel can issue qualified alerts to the Commission regarding AI model capabilities.
  • Commission: The scientific panel issues qualified alerts to the Commission regarding systemic risks in AI models.
  • Article 90(1), point (a): Article 90(1) point (a) establishes the procedure for qualified alerts from the scientific panel.
  • Commission: The Commission selects experts and determines the composition of the scientific panel.
  • AI Office: The scientific panel provides qualified alerts to the AI Office regarding systemic risks in AI models and supports its monitoring activities.
  • Article 98(2): Article 98(2) establishes the examination procedure for adopting the implementing act establishing the scientific panel.
  • general-purpose AI models: The scientific panel contributes to development of tools and methodologies for evaluating capabilities of general-purpose AI models.
  • systemic risk: The scientific panel provides advice on classification of general-purpose AI models with systemic risk.
  • Board: The Commission consults with the Board in determining the number of experts on the scientific panel.
  • general-purpose AI models: The scientific panel provides advice on the classification of general-purpose AI models with systemic risk.
  • market surveillance authorities: The scientific panel supports the work of market surveillance authorities at their request.
  • AI Office: The scientific panel provides qualified alerts to the AI Office regarding systemic risks in AI models and supports its monitoring activities.
  • Member States: Member States may call upon experts of the scientific panel to support their enforcement activities.
  • declaration of interests: Each expert on the scientific panel shall draw up a declaration of interests, which shall be made publicly available.
  • Member States: Experts from the scientific panel support Member States' enforcement activities under the Regulation.
  • qualified alert: The scientific panel issues qualified alerts regarding systemic risk concerns.

scientific panel of independent experts institution

A panel of independent experts established by the Commission to support enforcement activities under AI regulation.
  • Commission: The Commission establishes the scientific panel through implementing acts.
  • Regulation 2024/1689: The scientific panel supports enforcement activities under the regulation.

scientific research and development legal_obligation

AI systems and models developed solely for scientific research and development purposes may be exempted from the Regulation's scope and application.
  • Regulation: AI systems developed solely for scientific research and development are excluded from the Regulation's scope.
  • This Regulation: The regulation does not apply to AI systems specifically developed for scientific research and development purposes.

Search for crime victims legal_obligation

Permitted use case for remote biometric identification systems in law enforcement involving missing persons and crime victims.
  • Regulation 2024/1689: The regulation permits remote biometric identification for searching crime victims including missing persons.

SECTION 2 legal_article

Section 2 contains technical requirements that high-risk AI systems and general-purpose AI models must comply with throughout their lifecycle.

Section 2 documentation

Section containing requirements applicable to high-risk AI systems covered by Union harmonisation legislation.
  • high-risk AI system: Requirements in Section 2 apply to high-risk AI systems covered by Union harmonisation legislation.

Section 2 of Chapter documentation

Section containing requirements that must be covered by harmonised standards or common specifications.

Section 2 of this Chapter legal_article

Section containing requirements that high-risk AI systems and general-purpose AI models must comply with.

Section 3 legal_article

Section 3 contains obligations for providers and deployers of AI systems under Chapter V of the Regulation.
  • Provider: Providers have obligations set out in Section 3 regarding high-risk AI systems.
  • Deployer: Deployers have obligations set out in Section 3 regarding high-risk AI systems.

Section A of Annex I documentation

Section A of Annex I lists Union harmonisation legislation applicable to high-risk AI systems and specifies derogations from conformity assessment procedures.
  • high-risk AI systems: High-risk AI systems covered by Union harmonisation legislation listed in Section A of Annex I may integrate existing post-market monitoring systems.

Section B of Annex I documentation

Lists Union harmonisation legislation applicable to products covered by high-risk AI system requirements.

Sections 2 and 3 of Chapter V legal_article

Sections containing obligations applicable to high-risk AI systems and general-purpose AI models.
  • general-purpose AI models: General-purpose AI models must comply with obligations set out in Sections 2 and 3 of Chapter V.

sectoral Union law regulation

Existing Union regulations applicable to specific sectors that may contain quality management system obligations.
  • this Regulation: Existing sectoral Union law should be considered in relation to this Regulation's quality management requirements.

security clearance requirement technical_requirement

Requirement that only staff holding appropriate security clearance levels may access technical documentation of high-risk AI systems.

semantic and technical interoperability technical_requirement

Technical standards ensuring that different types of data can be exchanged and understood across different systems and platforms for AI development.
  • European Commission: The Commission develops initiatives to promote semantic and technical interoperability of different types of data for cross-border AI development.

sensitive operational data data_category

Operational data related to activities of prevention, detection, investigation or prosecution of criminal offences, the disclosure of which could jeopardise the integrity of criminal proceedings.

serious incident evaluation_criterion

An incident caused by development or use of a general-purpose AI model that must be tracked and reported without undue delay to market surveillance authorities.
  • Commission: Serious incidents must be reported to the Commission without undue delay.
  • Article 73: Serious incidents identified during testing must be reported in accordance with Article 73.
  • provider: Providers must report serious incidents and adopt immediate mitigation measures or suspend testing.
  • prospective provider: Prospective providers must report serious incidents and adopt immediate mitigation measures or suspend testing.
  • Article 73: Article 73 requires providers to report serious incidents to market surveillance authorities.
  • deployer: Deployers may become aware of serious incidents and trigger reporting obligations.

serious incident legal_obligation

An incident or malfunctioning of an AI system that directly or indirectly leads to death, serious harm, critical infrastructure disruption, or legal infringement, requiring reporting to competent authorities.
  • provider: Provider must report serious incidents within specified timeframes.
  • deployer: Deployer must report serious incidents when applicable within specified timeframes.
  • Article 3, point (49)(b): Serious incident definition is referenced in Article 3, point (49)(b).
  • high-risk AI system: High-risk AI systems are subject to serious incident reporting obligations.
  • high-risk AI systems: High-risk AI systems are subject to serious incident notification obligations.
  • Commission: The Commission shall develop guidance to facilitate compliance with serious incident notification obligations.

serious incident reporting legal_obligation

A mandatory procedure for reporting serious incidents related to high-risk AI systems in accordance with Article 73.
  • high-risk AI system: High-risk AI systems must have procedures for reporting serious incidents in accordance with Article 73.
  • Article 73: Serious incident reporting procedures are established in Article 73.

serious incident reports documentation

Reports documenting serious incidents involving AI systems as referenced in Article 73.
  • the Board: The Board evaluates and reviews serious incident reports as part of its tasks.

serious incidents reporting system technical_requirement

A system requiring providers to report to relevant authorities any serious incidents resulting from AI system use, including death, health damage, or critical infrastructure disruption.
  • This Regulation: The regulation requires providers to have a system to report serious incidents to relevant authorities.

significant changes in design or intended purpose technical_requirement

A condition triggering regulatory compliance for high-risk AI systems already on the market, equivalent to substantial modification.
  • This Regulation: The Regulation requires that high-risk AI systems already on the market comply if they undergo significant changes.

Significant harm evaluation_criterion

Harm that is caused or reasonably likely to be caused to persons or groups, including harms accumulated over time.

simplified technical documentation form documentation

A form established by the Commission targeted at the needs of small and microenterprises for providing technical documentation in a simplified manner.
  • Annex IV: The simplified form is based on and implements the requirements specified in Annex IV.
  • Commission: The Commission establishes the simplified technical documentation form for SMEs.
  • SMEs: SMEs may use the simplified form to provide required technical documentation.
  • Notified bodies: Notified bodies shall accept the simplified form for conformity assessment purposes.

Single information platform institution

Platform for public availability of exit reports when agreed by provider and competent authority.
  • Exit report: Exit reports may be made publicly available through the single information platform.

single information platform technical_requirement

A platform through which exit reports from AI regulatory sandboxes may be made publicly available.

single point of contact institution

Contact entity designated by Member States for the regulation, with identity to be notified to the Commission.

Single Supervisory Mechanism institution

Mechanism established for centralized supervision of credit institutions in the EU.

SMEs market_actor

Small and medium-sized enterprises, including start-ups, that are eligible for priority access to regulatory sandboxes, simplified technical documentation requirements, reduced administrative fines, and support measures for regulatory compliance.
  • AI regulatory sandbox: AI regulatory sandboxes should be widely accessible with particular attention to accessibility for SMEs and start-ups.
  • AI regulatory sandboxes: SMEs and start-ups are eligible to access AI regulatory sandboxes with priority access.
  • AI systems: SMEs are providers and deployers of AI systems.
  • Member States: Member States should develop initiatives and provide support channels for SMEs throughout their development path.
  • Regulation: SMEs and deployers must implement and comply with the AI Regulation.
  • Commission Recommendation 2003/361/EC: The Commission Recommendation defines the classification of small and medium-sized enterprises.
  • simplified technical documentation form: SMEs may use the simplified form to provide required technical documentation.
  • AI regulatory sandboxes: AI regulatory sandboxes provide free access to SMEs, including start-ups.
  • AI regulatory sandboxes: AI regulatory sandboxes facilitate participation of SMEs and start-ups with simplified procedures and clear communication.
  • Provider obligations: SMEs are subject to reduced administrative fines for non-compliance with provider obligations.
  • Authorised representative obligations: SMEs are subject to reduced administrative fines for non-compliance with authorised representative obligations.
  • Importer obligations: SMEs are subject to reduced administrative fines for non-compliance with importer obligations.
  • Distributor obligations: SMEs are subject to reduced administrative fines for non-compliance with distributor obligations.
  • Deployer obligations: SMEs are subject to reduced administrative fines for non-compliance with deployer obligations.
  • Notified body requirements: SMEs are subject to reduced administrative fines for non-compliance with notified body requirements.
  • Transparency obligations: SMEs are subject to reduced administrative fines for non-compliance with transparency obligations.
  • Administrative fine: Administrative fines apply to SMEs and start-ups at reduced percentages or amounts.

SMEs and start-ups market_actor

Small and medium-sized enterprises and start-ups whose specific interests and needs should be considered in AI regulation, with allowance for simplified compliance methods without excessive costs.
  • copyright compliance policy: SMEs and start-ups should be allowed simplified compliance methods that do not represent excessive cost.
  • AI Office: The AI Office takes into account the specific interests and needs of SMEs and start-ups when facilitating codes of conduct.

Social and environmental well-being evaluation_criterion

A principle requiring AI systems to be developed sustainably and environmentally friendly, benefiting all humans while monitoring long-term impacts on individuals, society, and democracy.
  • AI systems: The principle of social and environmental well-being applies to sustainable AI development and deployment.

social network services market_actor

Online platforms that provide content sharing services and may include ancillary features such as filters for modifying pictures or videos.

social score data_category

Evaluation or classification of natural persons based on social behavior or personal characteristics used for detrimental treatment.
  • AI system: AI systems evaluate and classify natural persons based on social behavior resulting in social scores.

Social scoring prohibition legal_obligation

The requirement that AI systems entailing unacceptable social scoring practices leading to detrimental or unfavourable outcomes should be prohibited.

Social scoring systems ai_system

AI systems that provide social scoring of natural persons by public or private actors, potentially leading to discriminatory outcomes and exclusion of groups.

societal and environmental well-being evaluation_criterion

An ethical principle for trustworthy AI systems.

solely automated individual decision-making legal_obligation

A data processing activity including profiling that is subject to rights and guarantees under Union law.
  • AI systems: AI systems are subject to rights and guarantees related to solely automated individual decision-making including profiling.

Source code technical_requirement

Code of high-risk AI systems that market surveillance authorities may access upon reasoned request when necessary for conformity assessment.
  • Market surveillance authorities: Market surveillance authorities may request access to source code of high-risk AI systems upon reasoned request when necessary for conformity assessment.

special categories of personal data data_category

Sensitive personal data referred to in Articles 9(1) of Regulation (EU) 2016/679, Article 10 of Directive (EU) 2016/680, and Article 10(1) of Regulation (EU) 2018/1725, which may be exceptionally processed for bias detection in high-risk AI systems subject to strict safeguards.

stand-alone AI system ai_system

A high-risk AI system that is not a safety component of another product and is not itself a product, classified based on intended purpose and risk assessment.
  • This Regulation: Stand-alone AI systems are classified as high-risk under this Regulation based on risk of harm criteria.

Standardisation technical_requirement

Process providing technical solutions to ensure compliance with the regulation in line with state of the art, promoting innovation and competitiveness.

standardisation development process legislative_procedure

Process for developing standards applicable to AI systems, with facilitated participation of SMEs and stakeholders.

standardisation organisations institution

Organizations that develop and maintain technical standards for AI systems and their evaluation.

standing subgroup for market surveillance institution

A standing sub-group of the Board that acts as the administrative cooperation group for the Regulation.
  • Regulation (EU) 2019/1020: The standing subgroup acts as the administrative cooperation group within the meaning of Article 30 of Regulation (EU) 2019/1020.
  • Commission: The Commission supports the activities of the standing subgroup for market surveillance through market evaluations and studies.
  • Board: The Board establishes standing sub-groups including one for market surveillance.

standing subgroup for notified bodies institution

A standing sub-group of the Board providing a platform for cooperation among notifying authorities.
  • Board: The Board establishes a standing sub-group for notified bodies.

statement of conformity documentation

A statement issued under a cybersecurity scheme that can serve as presumption of conformity with cybersecurity requirements.

stop button procedure technical_requirement

Technical mechanism allowing intervention in the operation of high-risk AI systems or interruption through a stop button or similar procedure enabling safe halt.
  • high-risk AI systems: High-risk AI systems must be equipped with a stop button or similar procedure allowing safe interruption and halt.

storage or transport conditions technical_requirement

Conditions under which high-risk AI systems must be stored or transported to maintain compliance.

Subliminal components technical_requirement

Audio, image, video stimuli or other manipulative techniques deployed by AI systems that operate beyond human perception or conscious awareness.

subliminal techniques technical_requirement

Techniques that operate beyond a person's consciousness, prohibited when used to distort behavior.
  • Prohibited AI practices: Prohibited AI practices prohibit the deployment of subliminal techniques in AI systems.

substantial modification evaluation_criterion

A significant change to a high-risk AI system affecting its compliance with the Regulation, such as changes to operating system, software architecture, or intended purpose, requiring new conformity assessment.
  • conformity assessment: When substantial modification occurs, a new conformity assessment must be conducted.
  • High-risk AI systems: High-risk AI systems that undergo substantial modification must undergo a new conformity assessment procedure.

substantial modification legal_obligation

A change to an AI system that may require a new conformity assessment procedure.
  • AI systems: AI systems may undergo substantial modifications that trigger new conformity assessment requirements.
  • conformity assessment procedure: Substantial modifications to AI systems require a new conformity assessment procedure.

substantial modification legal_article

A change to an AI system after its placing on the market or putting into service that is not foreseen in the initial conformity assessment and affects compliance with Chapter III, Section 2 requirements or modifies the intended purpose.
  • Chapter III, Section 2: Substantial modifications affect compliance with Chapter III, Section 2 requirements.

synthetic data data_category

Artificially generated data that mimics real data characteristics without containing actual personal information.
  • Chapter III, Section 2: Synthetic data is referenced as an alternative to personal data for fulfilling Chapter III, Section 2 requirements.

System architecture technical_requirement

Detailed description of how software components integrate and process information within an AI system.
  • AI system: System architecture describes the structural design and component integration of an AI system.

System output interpretation technical_requirement

Capability for natural persons to correctly interpret high-risk AI system outputs using interpretation tools and methods provided by the provider.
  • Human oversight: Natural persons must be enabled to correctly interpret high-risk AI system outputs using provided interpretation tools.

systemic risk evaluation_criterion

A criterion for evaluating whether a general-purpose AI model presents significant risks to the Union market due to its reach or negative effects on public health, safety, security, and fundamental rights, requiring regulatory designation and oversight.
  • general-purpose AI models: General-purpose AI models are evaluated for systemic risk to determine if transparency exceptions apply.
  • general-purpose AI model: Systemic risk is a specific risk associated with high-impact capabilities of general-purpose AI models.
  • general-purpose AI model: General-purpose AI models are evaluated based on whether they present systemic risks.
  • Annex XIII: Annex XIII sets out the criteria for evaluating systemic risks in general-purpose AI models.
  • scientific panel: The scientific panel provides advice on classification of general-purpose AI models with systemic risk.
  • Article 92: Article 92 establishes evaluation procedures that assess systemic risks in general-purpose AI models.

systemic risk assessment and mitigation legal_obligation

Obligation to assess and mitigate possible systemic risks at Union level stemming from development, placing on market, or use of general-purpose AI models.
  • Article 55: Article 55 requires providers to assess and mitigate possible systemic risks at Union level.

systemic risk at Union level evaluation_criterion

A criterion for assessing serious and substantiated concerns about AI models that may warrant regulatory intervention.
  • Article 92: Systemic risk assessment is based on evaluation carried out in accordance with Article 92.

systemic risk investigation legal_obligation

An obligation to investigate systemic risks at Union level posed by general-purpose AI models.

systemic risks evaluation_criterion

Risks associated with general-purpose AI models that increase with capabilities and reach, including impacts on critical infrastructure, democratic processes, control of physical systems, bias, disinformation, and privacy harm, requiring providers to assess, mitigate, and report serious incidents.

Systemic risks at Union level evaluation_criterion

Risks to be identified, assessed and managed through codes of practice, including their sources and materialization along the AI value chain.
  • Codes of practice: Codes of practice address identification, assessment and management of systemic risks at Union level.

targeted search for victims evaluation_criterion

Legitimate objective for using real-time biometric identification systems in law enforcement, including search for abduction, trafficking, or sexual exploitation victims.

tax and customs authorities institution

Administrative authorities responsible for tax and customs enforcement.
  • high-risk AI systems: AI systems used by tax and customs authorities should not be classified as high-risk law enforcement systems.

Technical accuracy technical_requirement

Requirement that AI systems for remote biometric identification should be technically accurate to avoid biased results and discriminatory effects.

technical documentation documentation

Records containing information necessary to assess AI system compliance with requirements, including system characteristics, algorithms, data, training, testing, and validation processes, which providers must maintain and update for high-risk systems and general-purpose AI models.
  • Regulation (EU) 2019/1020: The regulation establishes specific requirements for technical documentation of AI systems.
  • high-risk AI systems: Providers of high-risk AI systems are required to maintain technical documentation containing information necessary to assess compliance.
  • risk-management system: Technical documentation must include documentation on the relevant risk-management system.
  • Regulation: Technical documentation is required to facilitate compliance verification with the Regulation.
  • general-purpose AI model: Providers of general-purpose AI models must prepare and maintain technical documentation.
  • AI Office: Technical documentation must be made available upon request to the AI Office.
  • national competent authorities: Technical documentation must be made available upon request to national competent authorities.
  • model modification and fine-tuning: Modifications or fine-tuning of models require updating technical documentation with information on changes and new training data sources.
  • value chain obligations: Technical documentation is complemented to comply with value chain obligations.
  • Article 11: Article 11 establishes the requirement for technical documentation of high-risk AI systems.
  • high-risk AI system: Technical documentation must demonstrate compliance of high-risk AI systems with regulatory requirements.
  • Article 18: Article 18 requires providers to keep technical documentation for high-risk AI systems.
  • Union financial services law: Union financial services law requires financial institutions to maintain technical documentation as part of their regulatory obligations.
  • Member State: Member States determine conditions under which documentation remains at the disposal of national competent authorities.
  • authorised representative: Authorised representatives must verify that technical documentation specified in Annex XI has been drawn up.
  • importer: Importers must ensure technical documentation is made available to competent authorities upon request.
  • Annex IV: Technical documentation is referenced in point 2(f) of Annex IV.
  • Obligations for providers of general-purpose AI models: Providers must draw up and maintain technical documentation including training, testing, and evaluation results.
  • Annex XI: Technical documentation must contain minimum information set out in Annex XI.
  • AI system: Complete descriptions of AI system training, testing, and validation are documented in technical documentation.
  • Annex IV: Technical documentation requirements are specified in Annex IV.
  • high-risk AI system: High-risk AI systems must have technical documentation available for compliance verification.
  • provider: Providers must prepare and submit technical documentation for each AI system, including evidence as required by the notified body.
  • notified body: The notified body examines the technical documentation relating to the AI system.
  • Annex IV: The technical documentation is referred to in Annex IV.
  • notified body: The notified body requires examination of technical documentation to assess AI system conformity.
  • Article 53(1), point (a): Article 53(1), point (a) requires the provision of technical documentation by general-purpose AI model providers.
  • general-purpose AI model: Technical documentation must describe the general-purpose AI model and its characteristics.
  • acceptable use policies: Technical documentation must contain information about acceptable use policies applicable to the model.
  • training data: Technical documentation must contain information on data used for training, testing, and validation.
  • design specifications: Technical documentation must include detailed design specifications of the model and training process.
  • computational resources: Technical documentation must document computational resources used to train the model.
  • Article 53(1), point (b): Article 53(1), point (b) requires providers to supply technical documentation with specified information.
  • model architecture: Technical documentation must include the architecture and number of parameters of the model.
  • training data: Technical documentation must describe data used for training, testing and validation including type and provenance.
  • input/output modality: Technical documentation must specify the modality and format of inputs and outputs.

technical documentation technical_requirement

A requirement for high-risk AI systems and general-purpose AI model providers to maintain detailed documentation of their design, development, and performance characteristics.
  • Regulation: The Regulation requires high-risk AI systems to maintain technical documentation.
  • European Commission: Commission has power to amend provisions regarding technical documentation.

technical documentation for providers of general-purpose AI models documentation

Documentation required to be provided by all providers of general-purpose AI models containing information appropriate to model size and risk profile.
  • Article 53(1), point (a): Article 53(1), point (a) requires technical documentation for providers of general-purpose AI models.
  • ANNEX XI: ANNEX XI contains technical documentation requirements for providers of general-purpose AI models.
  • general-purpose AI models: Technical documentation requirements apply to general-purpose AI models.

Technical redundancy solutions technical_requirement

Technical measures including backup or fail-safe plans to achieve robustness in high-risk AI systems.
  • High-risk AI systems: Robustness of high-risk AI systems may be achieved through technical redundancy solutions including backup or fail-safe plans.

Technical robustness technical_requirement

A key requirement for high-risk AI systems ensuring they are resilient against harmful or undesirable behaviour from system limitations or environmental factors.
  • Commission: The Commission should ensure development of benchmarks and measurement methodologies for AI systems including technical robustness.
  • High-risk AI systems: High-risk AI systems must comply with technical robustness requirements.

technical robustness and safety evaluation_criterion

An ethical principle requiring AI systems to be developed with robustness against problems, resilience against misuse, and minimization of unintended harm.

terrorism legal_obligation

Criminal offence listed in Annex II as a serious crime under the regulation.

testing and experimentation facilities institution

Infrastructure established by the Commission and Member States at Union or national level to support AI testing, benchmarking, conformity assessment, and implementation compliance.
  • this Regulation: Testing and experimentation facilities contribute to implementation and compliance with the regulation.
  • Regulation: Testing and experimentation facilities contribute to the implementation of the Regulation.
  • Commission: The Commission establishes testing and experimentation facilities at Union level.
  • Member States: Member States establish testing and experimentation facilities at national level.

testing and reporting processes legal_obligation

Documentation and procedural requirements that providers must fulfill to demonstrate compliance with regulatory requirements.
  • risk-management system: The risk-management system includes testing and reporting processes as part of compliance documentation.

testing data data_category

Data used for providing an independent evaluation of the AI system to confirm expected performance before placing on the market or putting into service.

testing data sets data_category

Data sets used for testing high-risk AI systems that do not involve model training techniques.
  • high-risk AI system: Requirements for high-risk AI systems not using model training techniques apply to testing data sets.

Testing in real world conditions legal_obligation

A regulatory requirement for providers and deployers to conduct testing of AI systems under specified conditions with oversight and compliance measures.
  • Article 13: Testing in real world conditions must comply with instructions specified in Article 13.
  • Regulation: Testing in real world conditions must comply with provisions under this Regulation.
  • Provider or prospective provider: Providers must oversee and ensure compliance with testing in real world conditions requirements.
  • Deployer or prospective deployer: Deployers must oversee testing in real world conditions through qualified personnel.
  • Personal data deletion: Personal data deletion is required after testing in real world conditions is performed.
  • Market surveillance authorities: Market surveillance authorities are empowered to perform checks on the conduct of testing in real world conditions.

testing in real world conditions technical_requirement

Testing activities conducted in actual operational environments under controlled conditions that require informed consent, registration, and documentation compliance.
  • market surveillance authority: Market surveillance authorities use their competences and powers to ensure testing in real world conditions complies with regulations and is conducted safely.
  • high-risk AI systems: High-risk AI systems are subject to testing in real world conditions under market surveillance.
  • Article 61: Article 61 establishes informed consent requirements for participation in testing in real world conditions.
  • informed consent: Informed consent obligation applies to testing in real world conditions outside AI regulatory sandboxes.
  • Article 60: Article 60 defines conditions and requirements for testing AI systems in real world conditions.
  • Article 61: Article 61 sets additional conditions for testing AI systems in real world conditions.
  • Article 60: Article 60 requires registration of high-risk AI systems undergoing testing in real world conditions.

testing in real-world conditions technical_requirement

Temporary testing of an AI system for its intended purpose in real-world conditions outside a laboratory to gather reliable data and assess conformity with regulatory requirements.
  • Regulation 2024/1689: Testing in real-world conditions must comply with requirements of the regulation.
  • informed consent: Testing in real-world conditions requires informed consent from subjects.

testing plan technical_requirement

Plan according to which AI systems are tested and evaluated for compliance.

TEU treaty

Treaty on European Union, a foundational treaty to which Protocol No 22 is annexed.

text and data mining technical_requirement

A technical process used in training AI models involving retrieval and analysis of content that may require authorization from rightsholders when copyright and related rights protections apply.

TFEU treaty

The Treaty on the Functioning of the European Union, which serves as the foundational legal basis for EU regulations and Court of Justice review.

The 'Blue Guide' on the implementation of EU product rules 2022 documentation

A Commission notice providing clarification on the New Legislative Framework and implementation of EU product rules, published in OJ C 247, 29.6.2022.

the Board institution

An institutional oversight body responsible for advising and assisting the Commission and Member States in applying the Regulation and requesting standardised templates from the Commission.
  • Commission: The Board can request that the Commission provide standardised templates for areas covered by the regulation.
  • the Commission: The Board is required to advise and assist the Commission.
  • Member States: The Board is required to advise and assist Member States.
  • this Regulation: The Board's tasks are governed by the Regulation and its rules of procedure.
  • national competent authorities: The Board contributes to coordination among national competent authorities.
  • general-purpose AI models: The Board provides advice on enforcement of rules for general-purpose AI models.
  • codes of conduct: The Board issues opinions on the development and application of codes of conduct.
  • codes of practice: The Board issues opinions on the development and application of codes of practice.
  • serious incident reports: The Board evaluates and reviews serious incident reports as part of its tasks.
  • EU database: The Board evaluates and reviews the functioning of the EU database.

The Commission legislative_body

The European Commission is the EU institution empowered to adopt delegated and implementing acts, designate AI testing structures, and supervise general-purpose AI model providers.
  • Article 97: The Commission is empowered to adopt delegated acts in accordance with Article 97.
  • health, safety and fundamental rights: The Commission must ensure amendments maintain the level of protection of health, safety and fundamental rights.
  • Annex VI: The Commission is empowered to amend Annex VI through delegated acts.
  • Annex VII: The Commission is empowered to amend Annex VII through delegated acts.
  • Article 97: The Commission's power to adopt delegated acts is based on Article 97.
  • real-world testing plan: The Commission specifies detailed elements of the real-world testing plan through implementing acts.
  • Union AI testing support structures: The Commission designates one or more Union AI testing support structures.
  • Chapter V: The Commission has exclusive powers to supervise and enforce Chapter V.
  • AI Office: The Commission entrusts the AI Office with implementation of supervision and enforcement tasks.
  • market surveillance authorities: Market surveillance authorities may request the Commission to exercise enforcement powers under the Regulation.

The Commission institution

The European Commission is empowered to adopt delegated acts and establish conformity assessment procedures for high-risk AI systems.
  • delegated acts: The Commission is empowered to adopt delegated acts to amend Annex III.
  • special categories of personal data: The Commission assesses whether special categories of personal data are processed by AI systems as a criterion for high-risk classification.
  • Article 97: The Commission is empowered to adopt delegated acts in accordance with Article 97.

third parties market_actor

Entities that supply tools, services, components, or processes that are used or integrated into high-risk AI systems, playing an important role in the AI value chain.
  • high-risk AI system: Third parties supply tools, services, components, or processes that are integrated into high-risk AI systems.
  • provider: The provider requires third parties to provide necessary information, capabilities, technical access and assistance based on the state of the art.

third party supplier market_actor

An entity that supplies AI systems, tools, services, components, or processes integrated into a high-risk AI system.
  • provider: Providers must establish written agreements with third parties specifying necessary information and technical access.
  • high-risk AI systems: Third party suppliers of tools, services, and components used in high-risk AI systems are subject to regulatory obligations.
  • general-purpose AI models: General-purpose AI models made available under free and open-source licenses are exempt from certain third-party supplier obligations.

third-party conformity assessment technical_requirement

Mandatory evaluation procedure required for products or AI systems before placing on market or putting into service under Union harmonisation legislation.
  • high-risk AI systems: High-risk AI systems are required to undergo third-party conformity assessment before placing on market or putting into service.

third-party conformity assessment body institution

An independent body responsible for conducting conformity assessment procedures for products under Union harmonisation legislation.

this Regulation regulation

An AI regulation establishing requirements and obligations for providers and deployers of AI systems placed on the market or put into service in the Union, with phased application starting February 2025, aimed at preventing circumvention and protecting health, safety, and fundamental rights.
  • AI system: The regulation applies to AI systems whose output is intended for use in the Union, with exemptions for free and open-source systems unless they are high-risk.
  • providers and deployers of AI systems: The regulation applies to providers and deployers of AI systems established in third countries when output is intended for Union use.
  • natural persons located in the Union: The regulation aims to ensure effective protection of natural persons located in the Union.
  • high-risk AI system: High-risk AI systems are classified and regulated under this Regulation with specific requirements and restrictions.
  • public authorities of a third country: Public authorities of third countries are exempt from the regulation when acting within law enforcement and judicial cooperation frameworks.
  • international organisations: International organizations are exempt from the regulation when acting in cooperation or international agreements for law enforcement and judicial cooperation.
  • Directive (EU) 2016/680: The Regulation acts as lex specialis in respect of rules on biometric data processing contained in Directive (EU) 2016/680.
  • Article 10 of Directive (EU) 2016/680: The Regulation specifically references and regulates biometric data processing rules contained in Article 10 of Directive (EU) 2016/680.
  • real-time remote biometric identification systems: The Regulation establishes a specific framework for the use of real-time remote biometric identification systems in law enforcement contexts.
  • biometric data: The Regulation regulates the processing of biometric data involved in real-time remote biometric identification systems.
  • authorisation requirement: The Regulation establishes that use of real-time remote biometric identification systems for law enforcement must be subject to authorization.
  • quality management system: The Regulation requires implementation of quality management systems for AI systems.
  • high-risk AI systems: The Regulation applies to high-risk AI systems placed on the market or put into service in the Union.
  • sectoral Union law: Existing sectoral Union law should be considered in relation to this Regulation's quality management requirements.
  • Commission: The Commission adopts standardisation activities and guidance related to the Regulation.
  • New Legislative Framework: The Regulation is aligned with the New Legislative Framework for clarifying operator roles and obligations.
  • importers and distributors: The Regulation clarifies specific obligations for importers and distributors in the AI value chain.
  • providers: The regulation establishes obligations and requirements that apply to AI system providers.
  • deployers: The regulation establishes obligations and requirements that apply to AI system deployers.
  • Commission: The Commission must provide standardised templates and information platforms for compliance, and evaluate and review the regulation by 2 August 2029 and every four years thereafter.
  • AI-on-demand platform: The AI-on-demand platform contributes to achieving the objectives of the regulation.
  • Digital Europe Programme: The Digital Europe Programme should contribute to achieving the objectives of the regulation.
  • Horizon Europe: Horizon Europe funding programme should contribute to achieving the regulation's objectives.
  • European Digital Innovation Hubs: European Digital Innovation Hubs contribute to implementation of the regulation.
  • testing and experimentation facilities: Testing and experimentation facilities contribute to implementation and compliance with the regulation.
  • Member States: Member States are responsible for ensuring compliance with the regulation within their jurisdictions.
  • market surveillance authorities: Market surveillance authorities have enforcement powers laid down in this Regulation.
  • European Data Protection Supervisor: European Data Protection Supervisor is designated as competent market surveillance authority under this Regulation.
  • AI systems: The regulation applies to AI systems placed on the market or put into service in the Union, establishing requirements and obligations with specific exemptions.
  • high-risk AI systems: The Regulation applies to high-risk AI systems placed on the market or put into service in the Union.
  • prohibited practices: The regulation lays down prohibited practices that AI systems must not violate.
  • transparency requirements: The regulation establishes transparency requirements for AI systems.
  • prohibited systems: Prohibited systems placed on market or put into service in violation are subject to enforcement action.
  • AI regulatory sandboxes: AI regulatory sandboxes operate under and must comply with the regulation.
  • Article 66: Article 66 is part of the Regulation and defines the Board's tasks.
  • the Board: The Board's tasks are governed by the Regulation and its rules of procedure.
  • Article 46: The Regulation contains Article 46 on conformity assessment procedures.
  • Article 57: The Regulation contains Article 57 on testing in real world conditions.
  • Article 59: The Regulation contains Article 59 on testing in real world conditions.
  • Article 60: The Regulation contains Article 60 on testing in real world conditions.
  • Article 73: The Regulation contains Article 73 on serious incident reports.
  • Article 71: The Regulation contains Article 71 establishing the EU database.
  • Article 112: The Regulation contains Article 112 on evaluation and review procedures.

Threats to life or physical safety legal_obligation

Permitted use case for remote biometric identification systems involving threats to natural persons or terrorist attacks.
  • Regulation 2024/1689: The regulation permits remote biometric identification for addressing threats to life, physical safety, or terrorist attacks.

traceability technical_requirement

Requirement to maintain comprehensible information on how high-risk AI systems are developed and perform throughout their lifetime.
  • high-risk AI systems: High-risk AI systems must maintain comprehensible information on their development and performance for traceability.

traceability and transparency evaluation_criterion

Key principles requiring providers to document AI system assessments and provide documentation to competent authorities.

trade secrets and confidential business information data_category

Sensitive information that should be protected when disclosing training data summaries.
  • training data summary: Training data summaries must protect trade secrets and confidential business information.

trained models ai_model

AI models that have been trained and are subject to cyberattacks such as adversarial attacks or membership inference.
  • high-risk AI systems: Trained models within high-risk AI systems are vulnerable to adversarial attacks and membership inference attacks.

training and trained models ai_model

The underlying models and parameters of an AI system that may be accessed by the notified body for conformity assessment.

training data data_category

Data used for training, testing, and validation of AI models, including information about type, provenance, and curation methodologies.
  • technical documentation: Technical documentation must contain information on data used for training, testing, and validation.
  • technical documentation: Technical documentation must describe data used for training, testing and validation including type and provenance.

training data set quality and size evaluation_criterion

A criterion for assessing whether a general-purpose AI model should be designated as having systemic risk.

training data sets data_category

Data used to train AI systems, requiring documentation of provenance, scope, characteristics, and selection methodologies, and vulnerable to attacks such as data poisoning.
  • high-risk AI systems: Training data sets used in high-risk AI systems are vulnerable to cyberattacks such as data poisoning.

training data summary documentation

A publicly available summary documenting the main data collections and sources used to train an AI model, required by model providers for transparency.

Training, validation and testing data sets data_category

Data sets used for developing and evaluating high-risk AI systems that must meet quality criteria and be accessible to market surveillance authorities.
  • High-risk AI systems: High-risk AI systems that use training techniques must be developed using training, validation and testing data sets meeting quality criteria.
  • Market surveillance authorities: Market surveillance authorities are granted full access to training, validation and testing data sets used for high-risk AI system development.

training, validation, and testing data sets data_category

Data sets used in AI system development that the notified body must have access to for conformity assessment.
  • notified body: The notified body must be granted access to training, validation, and testing data sets for conformity assessment.

transparency technical_requirement

A requirement for AI systems to provide clear information about their operation and impact, ensuring appropriate type and degree of transparency for high-risk systems to achieve compliance with provider and deployer obligations.
  • Regulation (EU) 2019/1020: The regulation establishes specific requirements for transparency of AI systems.
  • Regulation: The Regulation requires transparency as a specific obligation for AI systems.

transparency evaluation_criterion

An ethical principle requiring AI systems to allow appropriate traceability and explainability, with awareness to humans and disclosure to deployers and affected persons.

Transparency legal_obligation

Regulatory requirement that high-risk AI systems be accompanied by transparent information and instructions to enable deployers to interpret outputs and use systems appropriately.
  • Instructions of use: Instructions of use serve as a mechanism to fulfill transparency obligations for high-risk AI systems.
  • Article 13: Article 13 requires that high-risk AI systems ensure sufficient transparency in their operation.

transparency about original purpose of data collection legal_obligation

Requirement for data governance practices to include transparency regarding the original purpose for which personal data was collected.
  • high-risk AI system: High-risk AI systems require transparency about the original purpose of data collection in their data governance practices.

Transparency and explainability evaluation_criterion

A principle requiring that AI systems operate in ways that allow appropriate traceability and explainability, with humans aware of their interaction with AI.
  • AI systems: The principle of transparency and explainability applies to how AI systems should operate and communicate with humans.
  • AI models: AI models should incorporate ethical principles including transparency and explainability in their design.

transparency and explainability requirement technical_requirement

Requirement that AI systems in law enforcement must be sufficiently transparent and explainable to protect procedural fundamental rights.

transparency information documentation

Information requirements for providers of general-purpose AI models.

transparency measures legal_obligation

Proportionate measures requiring documentation and information provision on general-purpose AI models for downstream providers.

transparency obligation legal_obligation

A legal requirement for deployers to disclose the artificial origin of AI-generated or manipulated content and information about AI system operations to affected individuals.
  • deep fakes: Deep fakes are subject to transparency obligations requiring disclosure of artificial creation.
  • disclosure of artificial origin: The transparency obligation requires disclosure of the artificial origin of AI-generated content.
  • Regulation: The Regulation establishes transparency obligations for deep fakes.
  • Charter: Compliance with transparency obligations should not impede rights guaranteed in the Charter.

transparency obligations legal_obligation

Requirements for providers and deployers to disclose information about AI system operations, artificial origin of generated or manipulated content, and biometric data processing to natural persons in a clear manner.
  • AI systems intended to interact with natural persons: Certain AI systems are subject to specific transparency obligations regarding notification of natural persons.
  • notification requirement: Transparency obligations include the requirement to notify natural persons of AI system interaction.
  • Regulation (EU) 2024/1689: The regulation imposes transparency obligations on AI systems.
  • This Regulation: The regulation establishes transparency obligations for certain AI systems within its scope.
  • AI system: Transparency obligations apply to deployers of AI systems that generate or manipulate content.
  • Chapter III: Transparency obligations shall not affect the requirements and obligations set out in Chapter III.

transparency requirement legal_obligation

Obligation requiring high-risk AI systems to be designed to enable deployers to understand how the system works and evaluate its functionality before placement on the market.
  • high-risk AI systems: High-risk AI systems are subject to transparency requirements before being placed on the market or put into service.

transparency requirements legal_obligation

Requirements mandating disclosure and documentation of AI systems made available on the market, violation of which triggers enforcement action.
  • this Regulation: The regulation establishes transparency requirements for AI systems.
  • AI systems: Transparency requirements mandate disclosure for AI systems made available on the market.

Treaty on European Union treaty

A foundational EU treaty that enshrines Union values and fundamental rights applicable to AI regulation.
  • AI: AI regulatory framework is based on Union values and fundamental rights enshrined in the Treaty on European Union.

Treaty on the Functioning of the European Union treaty

The foundational treaty of the European Union that provides the legal basis for the AI Regulation, particularly through Articles 16 and 114.
  • REGULATION (EU) 2024/1689: The regulation has regard to the Treaty on the Functioning of the European Union, particularly Articles 16 and 114.

trustworthy AI evaluation_criterion

A standard for AI systems that ensures they are safe, developed and used in accordance with fundamental rights obligations, and protect against harmful effects.
  • AI systems: AI systems are evaluated against the criterion of being trustworthy and safe.

UN Convention relating to the Status of Refugees treaty

An international treaty done at Geneva on 28 July 1951, as amended by the Protocol of 31 January 1967, establishing international obligations regarding refugee protection.

unacceptable AI practices legal_obligation

Certain AI practices that are prohibited under the Regulation.
  • Regulation: The Regulation prohibits certain unacceptable AI practices.

UNCRC General Comment No 25 documentation

General comment on the United Nations Convention on the Rights of the Child addressing children's rights in the digital environment.

UNCRC General Comment No 25 (2021) treaty

United Nations Convention on the Rights of the Child general comment addressing children's rights in relation to the digital environment.
  • Regulation: The Regulation references UNCRC General Comment No 25 (2021) regarding children's rights in the digital environment.

Unfair Commercial Practices Directive directive

Directive regulating unfair commercial practices in the European Union.

Union institution

The European Union, the territorial and institutional scope for the application of the regulation regarding AI systems and data protection.

Union AI testing support institution

Support mechanism for AI testing activities coordinated at the Union level.
  • Regulation: Union AI testing support activities are coordinated with expert support under the Regulation.

Union AI testing support structures institution

Commission-designated structures established at the Union level to assist Member States in testing, evaluating, and providing independent technical or scientific advice on AI systems.
  • Regulation: The Regulation requires the establishment of Union AI testing support structures to support enforcement and reinforce Member State capacities.
  • Article 84: Article 84 establishes the framework for designating Union AI testing support structures.
  • Article 21(6): Union AI testing support structures perform tasks listed under Article 21(6) of Regulation (EU) 2019/1020.
  • The Commission: The Commission designates one or more Union AI testing support structures.

Union and national law regulation

Legal framework governing the use of AI systems in law enforcement contexts.
  • high-risk AI systems: High-risk AI systems must be permitted under relevant Union and national law.

Union and national liability law regulation

Legal framework governing liability for damages caused to third parties during AI testing in real-world conditions.
  • Providers and prospective providers: Providers remain liable under applicable Union and national liability law for damages inflicted on third parties during sandbox experimentation.
  • provider: Providers are liable under applicable Union and national liability law for damages caused during testing.
  • prospective provider: Prospective providers are liable under applicable Union and national liability law for damages caused during testing.

Union data protection law regulation

Legal framework governing the protection of personal data within the European Union, establishing principles of data minimisation and data protection by design and by default applicable to personal data processing.
  • data protection by design and by default: Data protection by design and by default principles are set out in Union data protection law.
  • data minimisation: Data minimisation principle is set out in Union data protection law.
  • high-risk AI systems: High-risk AI systems must comply with principles of data minimisation and data protection by design and by default.
  • AI system: AI systems used in real-world testing must comply with Union data protection law regarding data subject rights.
  • personal data: Personal data transfers and processing are governed by Union data protection law.
  • personal data: Personal data processing must comply with Union data protection law requirements.

Union ethical guidelines for trustworthy AI directive

Ethical guidelines established by the Union to promote trustworthy AI development and deployment.
  • codes of conduct: Codes of conduct incorporate applicable elements from Union ethical guidelines for trustworthy AI.

Union financial services law regulation

Collective body of EU legal acts governing financial services, including internal governance and risk-management rules applicable to regulated financial institutions using AI systems.
  • regulated financial institutions: Union financial services law applies to regulated financial institutions in the provision of services.
  • AI systems: Union financial services law applies to AI systems used by regulated financial institutions.
  • quality management system: Union financial services law may contain equivalent quality management requirements that can fulfill Article 17 obligations.
  • technical documentation: Union financial services law requires financial institutions to maintain technical documentation as part of their regulatory obligations.
  • logs automatically generated by high-risk AI systems: Logs must be maintained under Union financial services law requirements.
  • financial institutions: Financial institutions are subject to internal governance and process requirements under Union financial services law.

Union harmonisation law regulation

Union law that establishes standards or technical specifications for harmonisation across member states.
  • Commission: The Commission updates guidelines pursuant to Union harmonisation law.

Union harmonisation legislation regulation

Existing EU legislation listed in Annex I that establishes harmonised requirements for products and AI systems across the internal market, ensuring only safe and compliant products are placed on the market.
  • AI system: Union harmonisation legislation addresses safety risks generated by products including AI systems as digital components.
  • high-risk AI systems: High-risk AI systems that are safety components or products must comply with Union harmonisation legislation listed in Annex I.
  • conformity assessment procedure: Union harmonisation legislation establishes conformity assessment procedures for various product categories.
  • Regulation: The Regulation complements existing Union harmonisation legislation listed in Section B of Annex I applicable to AI systems.
  • AI systems: AI systems are subject to Union harmonisation legislation and complementary regulatory requirements.
  • New Legislative Framework: Union harmonisation legislation is based on the New Legislative Framework.
  • Article 8: Article 8 references Union harmonisation legislation listed in Annex I Section A that applies to products containing AI systems.
  • Annex I: Union harmonisation legislation is listed in Section A of Annex I.
  • high-risk AI systems: High-risk AI systems that are safety components or products must comply with Union harmonisation legislation listed in Annex I.
  • Regulation: The Regulation complements existing Union harmonisation legislation listed in Section B of Annex I applicable to AI systems.
  • high-risk AI system: High-risk AI systems related to products covered by Union harmonisation legislation must comply with those legal acts.
  • conformity assessment body: Conformity assessment bodies may be designated under other Union harmonisation legislation.
  • high-risk AI system: Union harmonisation legislation listed in Annex I applies to certain high-risk AI systems.

Union harmonised legislation regulation

Sectoral legislation based on the New Legislative Framework that applies to products containing AI systems.
  • Regulation 2024/900: Regulation 2024/900 is applied simultaneously and complementarily with Union harmonised legislation.
  • New Legislative Framework: Union harmonised legislation is based on the New Legislative Framework.

Union institution, body, office or agency market_actor

Public entities of the European Union subject to administrative fines and proceedings by the European Data Protection Supervisor for infringements of AI regulations.
  • administrative fine: Union institutions are subject to administrative fines for non-compliance with AI practice prohibitions.

Union institutions institution

Bodies, offices or agencies of the European Union that may use AI systems for migration, asylum or border control management.

Union institutions, bodies, offices and agencies institution

Entities of the European Union that may act as providers or deployers of AI systems and remain accountable for compliance with Union law.
  • Regulation: The Regulation applies to Union institutions when acting as providers or deployers of AI systems.

Union institutions, bodies, offices and agencies market_actor

EU entities that may participate in AI regulatory sandboxes and are subject to administrative fines by the European Data Protection Supervisor.
  • Article 100: Article 100 applies to Union institutions, bodies, offices and agencies falling within the scope of the Regulation.

Union institutions, bodies, offices or agencies institution

European Union institutions and bodies responsible for migration, asylum, border control management, and law enforcement support functions.

Union institutions, bodies, offices, or agencies institution

EU-level institutions supporting law enforcement authorities.
  • high-risk AI systems: High-risk AI systems are intended to be used by Union institutions in support of law enforcement.

Union law regulation

The overarching legal framework of the European Union governing AI systems, including data protection, non-discrimination, consumer protection, and competition law, which establishes compliance requirements and risk prevention mechanisms.
  • AI system: Research and development activities involving AI systems must be conducted in accordance with applicable Union law.
  • High-risk AI systems: High-risk AI systems must comply with mandatory requirements established by Union law.
  • data protection law: Union law includes data protection law as a component.
  • non-discrimination law: Union law includes non-discrimination law as a component.
  • consumer protection law: Union law includes consumer protection law as a component.
  • competition law: Union law includes competition law as a component.
  • innovative AI systems: Innovative AI systems must comply with relevant Union law in addition to the primary Regulation.
  • widespread infringement: Widespread infringement is defined as acts or omissions contrary to Union law.
  • redress measures: Union law establishes effective measures of redress in relation to risks posed by AI systems.
  • risk prevention measures: Union law establishes effective measures to prevent or substantially minimize risks from AI systems.
  • Risk management system: Risk management procedures for high-risk AI systems may be combined with requirements established under other relevant provisions of Union law.
  • bias detection and mitigation: Bias mitigation measures ensure compliance with Union law prohibitions on discrimination.
  • authorization: Authorizations must comply with Union law as assessed by the Commission.

Union law on competition rules regulation

Legal framework governing competition practices within the Union that market surveillance authorities must consider during their activities.

Union law on the protection of intellectual property and trade secrets regulation

Existing Union law that governs access to AI system models and parameters during conformity assessment.
  • training and trained models: Access to training and trained models is subject to existing Union law on intellectual property and trade secrets protection.

Union law on the protection of personal data regulation

Legal framework governing personal data protection within the European Union, applicable to AI system development and may provide alternative requirements for log retention.
  • logs: Log retention requirements are subject to Union law on personal data protection which may provide alternative requirements.
  • AI regulatory sandbox: The sandbox operates under and must comply with Union data protection law.
  • personal data processing: Personal data processing in AI regulatory sandboxes must comply with Union law on data protection.

Union law protecting fundamental rights regulation

Legal framework protecting fundamental rights including non-discrimination in relation to AI system use.
  • Article 77: Article 77 is based on Union law protecting fundamental rights including non-discrimination.

Union market market_actor

The market within the European Union where high-risk AI systems are made available.

Union safeguard procedure legal_obligation

A procedural mechanism under which the AI Office carries out duties related to AI system oversight.
  • AI Office: The AI Office carries out duties in the context of the Union safeguard procedure pursuant to Article 81.

Union technical documentation assessment certificate documentation

Certificate issued by a notified body confirming that an AI system meets technical documentation and regulatory requirements for high-risk systems.
  • notified body: Notified bodies issue Union technical documentation assessment certificates when conformity is established in accordance with Annex VII.
  • Annex VII: Union technical documentation assessment certificates are issued in accordance with Annex VII requirements.
  • Chapter III, Section 2: The certificate is issued when the AI system meets the requirements in Chapter III, Section 2.
  • AI system: The certificate contains information necessary to evaluate the conformity and control of the AI system.

Union technical documentation assessment certificates documentation

Technical documentation certificates issued by notified bodies for AI systems that can be refused, withdrawn, suspended or restricted.
  • notified body: Notified bodies issue, refuse, withdraw, suspend or restrict Union technical documentation assessment certificates.

Union values legal_obligation

Fundamental principles including respect for human dignity, freedom, equality, democracy, rule of law, and fundamental rights enshrined in the Charter.

Union's Ethics Guidelines for Trustworthy AI documentation

Guidelines providing ethical standards for trustworthy AI that providers and deployers are encouraged to apply voluntarily.
  • general-purpose AI models: Providers and deployers of all AI systems and models are encouraged to apply elements from the Union's Ethics Guidelines for Trustworthy AI on a voluntary basis.

Union's Ethics Guidelines for Trustworthy AI directive

Guidelines established by the European Union that provide ethical principles and requirements for trustworthy AI systems, including considerations for environmental sustainability, AI literacy, and inclusive design.
  • AI systems: AI models are encouraged to apply additional requirements related to the Union's Ethics Guidelines for Trustworthy AI on a voluntary basis.

Union-wide unique single identification number documentation

Unique identifier for testing in real world conditions that must be communicated to subjects as part of informed consent requirements.
  • informed consent: Informed consent requirements include communication of the Union-wide unique identification number for the testing.

United Nations Convention on the Rights of Persons with Disabilities treaty

International treaty to which the Union and Member States are signatories, establishing legal obligations to protect persons with disabilities from discrimination and ensure equal access to information and communications technologies.
  • accessibility requirements: The treaty establishes legal obligations for accessibility and non-discrimination that inform accessibility requirements for AI systems.

United Nations Convention on the Rights of the Child treaty

International treaty establishing rights for children, further developed through UNCRC General Comment No 25 regarding the digital environment.
  • UNCRC General Comment No 25: UNCRC General Comment No 25 further develops the United Nations Convention on the Rights of the Child regarding the digital environment.

unmanned aircraft data_category

Aircraft without onboard pilots, subject to design, production and market placement requirements under the regulation.
  • Regulation 2024/1689: Regulation 2024/1689 applies to the design, production and placing on the market of unmanned aircraft.

Validation and testing procedures technical_requirement

Required procedures for validating and testing AI systems, including documentation of validation data and metrics for measuring accuracy, robustness, and regulatory compliance.
  • AI system: Validation and testing procedures are used to evaluate AI system compliance with requirements.

validation data data_category

Data used for evaluating trained AI systems and tuning non-learnable parameters to prevent underfitting or overfitting.

validation data set data_category

A separate data set or part of the training data set, either as a fixed or variable split.

value chain obligations legal_obligation

Obligations imposed throughout the lifecycle of AI models to manage risks and ensure compliance with regulatory requirements.
  • Regulation: Value chain obligations are provided in the Regulation.
  • technical documentation: Technical documentation is complemented to comply with value chain obligations.

very large online platforms institution

Designated platforms that may embed AI systems or models and are subject to risk-management frameworks under Regulation (EU) 2022/2065.
  • AI systems: AI systems embedded into designated very large online platforms are subject to the risk-management framework provided in Regulation (EU) 2022/2065.
  • risk-management framework: Very large online platforms are obliged to assess potential systemic risks and take appropriate mitigating measures within the risk-management framework.

very large online platforms market_actor

Providers of very large online platforms subject to obligations to identify and mitigate systemic risks from artificially generated or manipulated content.

very large online search engines institution

Designated search engines that may embed AI systems or models and are subject to risk-management frameworks under Regulation (EU) 2022/2065.
  • AI systems: AI systems embedded into designated very large online search engines are subject to the risk-management framework provided in Regulation (EU) 2022/2065.
  • risk-management framework: Very large online search engines are obliged to assess potential systemic risks and take appropriate mitigating measures within the risk-management framework.

very large online search engines market_actor

Providers of very large online search engines subject to obligations to identify and mitigate systemic risks from dissemination of artificially generated or manipulated content.

Virtual reality ai_system

Technology that can facilitate AI-enabled manipulation by controlling stimuli presented to persons in ways that may materially distort behavior.

Visa Information System ai_system

An EU information system for which access conditions were established through amendments to multiple EU regulations.

Visa Information System institution

An EU information system for visa-related purposes that was reformed through Regulation (EU) 2021/1134.

voice characteristics data_category

Biometric data including characteristics of a person's voice such as raised voice or whispering.
  • Regulation: The Regulation classifies voice characteristics as a type of biometric data covered under its scope.

voluntary codes of conduct documentation

Self-regulatory frameworks that AI models and developers can voluntarily adopt to comply with ethical and technical requirements beyond mandatory regulations, particularly for high-risk AI systems.
  • AI systems: Voluntary codes of conduct are developed for and applied to AI systems to ensure effectiveness through clear objectives and key performance indicators.

voluntary codes of conduct regulation

Non-binding guidelines designed to foster application of AI system requirements, particularly for non-high-risk AI systems and environmental sustainability.
  • Chapter III, Section 2: Voluntary codes of conduct foster application of requirements set out in Chapter III, Section 2.
  • AI systems: Voluntary codes of conduct apply to AI systems other than high-risk AI systems.
  • European Commission: The Commission evaluates the impact and effectiveness of voluntary codes of conduct.

voluntary model terms documentation

Optional contractual terms developed by the AI Office for agreements between providers of high-risk AI systems and third parties supplying tools and services.
  • AI Office: The AI Office develops and recommends voluntary model terms for contracts.

vulnerability exploitation legal_obligation

Prohibited practice of exploiting vulnerabilities of persons based on age, disability, or social/economic situation.
  • Prohibited AI practices: Prohibited AI practices prohibit exploitation of vulnerabilities of natural persons.

vulnerable groups data_category

Groups of persons requiring additional safeguards and particular attention during AI system testing and risk assessment due to heightened vulnerability.
  • high-risk AI systems: Additional safeguards are required for vulnerable groups during AI system testing.
  • Article 79: Article 79 requires particular attention to be given to AI systems presenting a risk to vulnerable groups.

vulnerable groups protection legal_obligation

Requirement to appropriately protect subjects of testing who are persons belonging to vulnerable groups due to age or disability.

Vulnerable persons data_category

Persons susceptible to exploitation due to age, disability, extreme poverty, or membership in ethnic or religious minorities.
  • AI systems: AI systems are restricted in their use with vulnerable persons to prevent exploitation.

vulnerable persons or groups data_category

Individuals or groups requiring special protection from negative impacts of AI systems, including persons with disabilities.
  • codes of conduct: Codes of conduct require assessing and preventing negative impacts of AI systems on vulnerable persons or groups.

Watermarks technical_requirement

One of the techniques for marking and detecting AI-generated content, suitable for implementation in AI systems.

whistleblower protection legal_obligation

Protection for persons reporting infringements of AI regulations under Union law.

widespread infringement legal_article

Any act or omission contrary to Union law protecting the interest of individuals, which has harmed or is likely to harm the collective interests of individuals.

widespread infringement legal_obligation

Any act or omission contrary to Union law protecting individual interests, harming collective interests across multiple Member States or committed concurrently by the same operator in at least three Member States.
  • Union law: Widespread infringement is defined as acts or omissions contrary to Union law.

withdrawal of an AI system legal_obligation

Any measure aiming to prevent an AI system in the supply chain from being made available on the market.

worker information and consultation legal_obligation

Obligation to inform or consult workers and their representatives on decisions to deploy high-risk AI systems at the workplace.
  • Regulation 2024/1689: The regulation requires information of workers and their representatives on planned deployment of high-risk AI systems at the workplace.

workers' representatives market_actor

Representatives of workers who must be informed before deployment of high-risk AI systems in the workplace.
  • deployers: Deployers who are employers must inform workers' representatives before putting high-risk AI systems into service.

World Trade Organization Agreement on Technical Barriers to Trade treaty

International agreement under which the Union commits to facilitate mutual recognition of conformity assessment results.
  • Regulation: The regulation's mutual recognition provisions are based on Union commitments under the WTO Agreement.