This list of FAQs has been compiled based on queries received during the AI Pact webinars as well as submissions from stakeholders. This list will be updated regularly and as needed.
The Commission is committed to a clear, simple and innovation-friendly implementation of the AI Act, as set out in the AI Continent Action Plan and the Apply AI Strategy. The Commission's proposal brings the AI Act in line with this approach by:
Linking when rules apply to the availability of support:
- Linking the application of the rules for high-risk AI to the availability of support tools like standards. The Commission is adjusting the timeline for the application of high-risk rules to a maximum of 16 months.
Introducing simplification:
- Extending certain simplified modalities of fulfilling the legal obligations, such as simplified technical documentation, from SMEs to small mid-cap companies (SMCs);
- Requiring the Commission and Member States to foster AI literacy and ensure continuous support to companies by building on existing efforts (such as the AI Office’s repository of AI literacy practices) instead of imposing unspecified obligations on operators, while keeping training obligations for deployers of high-risk AI systems in place.
- Removing the prescription of a harmonised post-market monitoring plan, giving businesses more flexibility;
- Reducing the registration burden for AI systems used in high-risk areas for tasks that are not considered high-risk.
Improving the effectiveness of the AI Act’s governance:
- Centralising the oversight of AI systems built on general-purpose AI models with the AI Office, to reduce governance fragmentation for developers of these models and systems;
- Concentrating the oversight of AI embedded in very large online platforms and search engines at Commission level by assigning this oversight to the AI Office.
Extending measures in support of compliance:
- Allowing providers and deployers to process special categories of personal data for ensuring bias detection and correction, subject to appropriate safeguards;
- Broadening the use of AI regulatory sandboxes and real-world testing so more innovators can benefit from these tools. This includes setting up an EU-level regulatory sandbox from 2028 to support real-world testing.
Improving the AI Act’s procedures and operation:
- Clarifying the interplay between the AI Act and other EU laws. Simplifying procedures to foster the timely availability of conformity assessment bodies.
Article 16 of the AI Act gives an overview of the obligations that providers of high-risk AI systems need to follow in order to comply with the AI Act.
Prior to placing their high-risk AI system on the market or putting it into service, providers of such systems need to ensure that their high-risk AI system is compliant with Articles 8-15 of the AI Act and has undergone a conformity assessment. This conformity should be demonstrated upon a reasonable request by a competent national authority.
Following this conformity assessment, providers need to draw up an EU declaration of conformity and affix the CE marking to the system (or on its packaging or accompanying documentation), as well as indicate their contact information on the high-risk AI system so they can be contacted. Providers also need to register their high-risk AI system in the EU database.
In addition, providers need to comply with Articles 17-20 AI Act by setting up a quality management system, keeping documentation for 10 years, keeping the logs automatically generated by the high-risk AI system, and taking necessary corrective actions and providing information related to those actions.
Finally, providers of high-risk AI systems need to ensure that their system complies with relevant accessibility requirements.
Most biometric AI systems are not prohibited under the AI Act. The prohibited AI practices regarding biometric systems are limited and include in particular: emotion recognition in the workplace, certain biometric categorisation systems, and real-time remote biometric identification in publicly accessible spaces for law enforcement purposes. These are subject to narrowly defined exceptions.
There are, however, some additional rules that apply to permitted biometric systems. In particular, certain biometric applications are considered high-risk (remote biometric identification systems, certain AI systems intended for biometric categorisation, and AI systems intended for emotion recognition). The AI Act sets out specific requirements for high-risk AI systems. These requirements relate to, among others, data and data governance, documentation and record-keeping, transparency and the provision of information to users, human oversight, robustness, accuracy, and security.
The use of AI systems for biometric verification - that is, confirming that an individual is who they claim to be - is not prohibited under the AI Act and does not fall within the category of high-risk systems.
For more information, please refer to the Guidelines on prohibited artificial intelligence practices.
The AI Act introduces a number of support measures tailored to small and medium-sized enterprises (SMEs). These include the possibility to prepare simplified technical documentation and to implement simplified quality management systems for high-risk AI systems. SMEs are also granted free-of-charge access to AI regulatory sandboxes, enabling them to test and develop AI systems in a controlled environment. In addition, the European Commission and national authorities are required under Article 62 to provide specific support measures to SMEs. Their interests are further represented through a special membership category in the advisory forum, ensuring that their needs are taken into account in the regulatory process. Finally, SMEs benefit from special consideration when penalties are imposed, reflecting their particular position in the market and the potential impact of sanctions.
Providers of general-purpose AI models must (Article 53 AI Act):
- draw up and maintain technical documentation about the model, including information about the development process, to provide to the AI Office upon request; national competent authorities can also ask the AI Office to request information on their behalf when this information is necessary for them to exercise their supervisory tasks;
- provide information and documentation to downstream AI system providers to help them understand the model's capabilities and limitations and comply with their own obligations;
- implement a policy to comply with Union copyright law and related rights, including identifying and respecting rights reservations through state-of-the-art technologies;
- publish a sufficiently detailed summary of the content used for training the model;
- if they are established outside the EU, appoint an authorised representative in the Union before placing their model on the market.
Providers can demonstrate compliance through the General-Purpose AI Code of Practice which was assessed as adequate, or via alternative adequate means.
In addition to the standard obligations for providers of all general-purpose AI models, providers of general-purpose AI models with systemic risk must (Article 55 AI Act):
- perform model evaluation using standardised protocols and state-of-the-art tools, including conducting and documenting adversarial testing to identify and mitigate systemic risks;
- assess and mitigate possible systemic risks at Union level, including their sources, that may stem from the development, placing on the market, or use of these models;
- track, document, and report relevant information about serious incidents and possible corrective measures to the AI Office and, as appropriate, national authorities without undue delay;
- ensure adequate cybersecurity protection for both the model and its physical infrastructure, to prevent unauthorised access, theft, or leakage.
Providers of such models can demonstrate compliance through adherence to the General-Purpose AI Code of Practice or through alternative adequate means of compliance. If providers choose to comply via alternative means, they must present arguments for why such means are adequate, for assessment by the European Commission.
Providers of general-purpose AI models released as open-source may be exempt from certain obligations (Articles 53(2), 54(6) AI Act), specifically:
- the requirement to maintain technical documentation for authorities;
- the requirement to provide documentation to downstream AI system providers;
- the requirement to appoint an authorised representative (for non-EU providers).
These exemptions require that the general-purpose AI model:
- is released under a free and open-source license, allowing access, use, modification, and distribution without monetisation;
- has its parameters, including weights, architecture, and usage information, publicly available;
- is not a general-purpose AI model with systemic risk; providers of general-purpose AI models with systemic risk must comply with all the obligations for providers of general-purpose AI models, regardless of whether the model is released as open-source.
These exemptions recognise that open-source models contribute to research and innovation while already providing transparency through their open nature. Nevertheless, providers whose models meet the above open-source requirements are not exempt from the copyright policy obligation or the requirement to publish a training data summary (Article 53(1)(c) and (d) AI Act), since the open-source nature of a model does not necessarily make available information on the data used for training or modifying it, nor on how compliance with copyright law was ensured.
Section 4 of the European Commission Guidelines on the scope of the obligations for general-purpose AI models provides further guidance on the open-source exemption.
Pursuant to Article 53(1)(a) and Annex XI, Section 1, point 2(e), of the AI Act, providers of general-purpose AI models must document the known or estimated energy consumption of their model. Where the energy consumption is unknown, providers may estimate it based on the computational resources used. Moreover, the European Commission is empowered to adopt a delegated act to detail measurement and calculation methodologies that providers should use to measure or estimate the energy consumption of their models, to allow for comparable and verifiable documentation. For the period until such a delegated act is adopted, the Model Documentation Form in conjunction with the Transparency Chapter of the General-Purpose AI Code of Practice (Code) provides guidance on how providers may demonstrate compliance with this requirement, under ‘Energy consumption (during training and inference)’.
In particular, if a Signatory to the Code does not know the energy consumption of training their model, they commit to reporting an estimated amount unless they are lacking critical information about compute or hardware which prevents them from being able to make an estimate. In this case, the Signatory commits to documenting the information they lack. For the purpose of estimating energy consumption, the AI Office will use the information available about the computational resources used for training to derive an estimate. To do so, the AI Office will rely on knowledge of what critical information the provider is lacking, available scientific resources, as well as on preliminary results from the ongoing European Commission study on energy-efficient and low-emission AI and draw on expertise from the scientific panel.
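To illustrate how an energy estimate can be derived from computational resources, the sketch below uses one commonly applied back-of-the-envelope formula: accelerator-hours multiplied by average per-device power draw, scaled by the data centre's power usage effectiveness (PUE). This formula and all figures are illustrative assumptions, not a methodology prescribed by the AI Act, the Code, or the AI Office.

```python
def estimate_training_energy_kwh(device_count: int,
                                 training_hours: float,
                                 avg_power_draw_kw: float,
                                 pue: float = 1.2) -> float:
    """Rough training-energy estimate from compute resources.

    device-hours x average per-device power draw (kW), scaled by the
    data centre's power usage effectiveness (PUE). Every input is an
    assumption the provider would need to document alongside the result.
    """
    device_hours = device_count * training_hours
    return device_hours * avg_power_draw_kw * pue

# Hypothetical run: 1,000 accelerators for 30 days at 0.5 kW each, PUE 1.2
energy = estimate_training_energy_kwh(1000, 30 * 24, 0.5, 1.2)
print(f"{energy:,.0f} kWh")  # 432,000 kWh
```

Where some of these inputs are unknown (for example, the average power draw), that gap is precisely the "critical information" a Signatory would document as lacking under the Code.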