This list of FAQs has been compiled based on queries received during the AI Pact webinars as well as submissions from stakeholders. This list will be updated regularly and as needed.
Article 16 of the AI Act gives an overview of the obligations that providers of high-risk AI systems need to follow in order to comply with the AI Act.
Prior to placing their high-risk AI system on the market or putting it into service, providers of such systems need to ensure that it is compliant with Articles 8-15 of the AI Act and has undergone a conformity assessment. This conformity must be demonstrated upon a reasoned request by a national competent authority.
Following this conformity assessment, providers need to draw up an EU declaration of conformity and affix the CE marking to the system (or on its packaging or accompanying documentation), as well as indicate their contact information on the high-risk AI system so they can be contacted. Providers also need to register their high-risk AI system in the EU database.
In addition, providers need to comply with Articles 17-20 AI Act by setting up a quality management system, keeping documentation for 10 years, keeping the logs automatically generated by the high-risk AI system, and taking necessary corrective actions and providing information related to those actions.
Finally, providers of high-risk AI systems need to ensure that their system complies with relevant accessibility requirements.
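The obligations summarised above can be read as a checklist. Purely as an illustration, the sketch below models them as a simple Python structure with a helper that reports outstanding items; the keys, descriptions, and helper are hypothetical and not an official compliance tool.

```python
# Illustrative only: a hypothetical checklist mirroring the provider
# obligations for high-risk AI systems summarised above (Articles 16-20
# AI Act). Keys and wording are assumptions for illustration.
PROVIDER_OBLIGATIONS = {
    "conformity_articles_8_15": "System complies with Articles 8-15 of the AI Act",
    "conformity_assessment": "Conformity assessment completed before placing on the market",
    "eu_declaration_of_conformity": "EU declaration of conformity drawn up",
    "ce_marking": "CE marking affixed to the system, its packaging or documentation",
    "contact_information": "Provider contact information indicated on the system",
    "eu_database_registration": "High-risk AI system registered in the EU database",
    "quality_management_system": "Quality management system set up (Article 17)",
    "documentation_retention": "Documentation kept for 10 years (Article 18)",
    "log_keeping": "Automatically generated logs kept (Article 19)",
    "corrective_actions": "Corrective actions taken and information provided (Article 20)",
    "accessibility": "Relevant accessibility requirements met",
}

def outstanding_items(completed: set) -> list:
    """Return descriptions of obligations not yet marked complete."""
    return [desc for key, desc in PROVIDER_OBLIGATIONS.items() if key not in completed]
```

A provider-side tool could call `outstanding_items({"ce_marking", "eu_database_registration"})` to list what remains; again, this is a reading aid, not legal guidance.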
Most biometric AI systems are not prohibited under the AI Act. The prohibited AI practices regarding biometric systems are limited and include in particular: emotion recognition in the workplace, certain biometric categorisation systems, and real-time remote biometric identification in publicly accessible spaces for law enforcement purposes. These are subject to narrowly defined exceptions.
There are, however, some additional rules that apply to permitted biometric systems. In particular, certain biometric applications are considered high-risk (remote biometric identification systems, certain AI systems intended for biometric categorisation, and AI systems intended for emotion recognition). The AI Act sets out specific requirements for high-risk AI systems. These requirements relate to, among others, data and data governance, documentation and record-keeping, transparency and the provision of information to deployers, human oversight, robustness, accuracy, and security.
The use of AI systems for biometric verification - that is, confirming that an individual is who they claim to be - is not prohibited under the AI Act and does not fall within the category of high-risk systems.
For more information, please refer to the Guidelines on prohibited artificial intelligence practices.
The AI Act introduces a number of support measures tailored to small and medium-sized enterprises (SMEs). These include the possibility to prepare simplified technical documentation and to implement simplified quality management systems for high-risk AI systems. SMEs are also granted free-of-charge access to AI regulatory sandboxes, enabling them to test and develop AI systems in a controlled environment. In addition, the European Commission and national authorities are required under Article 62 to provide specific support measures to SMEs. Their interests are further represented through a special membership category in the advisory forum, ensuring that their needs are taken into account in the regulatory process. Finally, SMEs benefit from special consideration when penalties are imposed, reflecting their particular position in the market and the potential impact of sanctions.
Providers of general-purpose AI models must (Article 53 AI Act):
draw up and maintain technical documentation about the model, including information about the development process, to provide to the AI Office upon request; national competent authorities can also ask the AI Office to request information on their behalf when this information is necessary for them to exercise their supervisory tasks;
provide information and documentation to downstream AI system providers to help them understand the model's capabilities and limitations and comply with their own obligations;
implement a policy to comply with Union copyright law and related rights, including identifying and respecting rights reservations through state-of-the-art technologies;
publish a sufficiently detailed summary of the content used for training the model;
if they are established outside the EU, appoint an authorised representative in the Union before placing their model on the market.
Providers can demonstrate compliance through adherence to the General-Purpose AI Code of Practice, which has been assessed as adequate, or via alternative adequate means.
Providers of general-purpose AI models released as open-source may be exempt from certain obligations (Articles 53(2), 54(6) AI Act), specifically:
the requirement to maintain technical documentation for authorities;
the requirement to provide documentation to downstream AI system providers;
the requirement to appoint an authorised representative (for non-EU providers).
These exemptions require that the general-purpose AI model:
is released under a free and open-source license, allowing access, use, modification, and distribution without monetisation;
has its parameters, including weights, architecture, and usage information publicly available;
is not a general-purpose AI model with systemic risk — providers of general-purpose AI models with systemic risk must comply with all the obligations for providers of general-purpose AI models, regardless of whether the model is released as open-source.
These exemptions recognise that open-source models contribute to research and innovation while already providing transparency through their open nature. Nevertheless, providers whose models meet the above open-source requirements are not exempt from the copyright policy obligation or the requirement to publish a training data summary (Article 53(1), points (c) and (d), AI Act), since their open-source nature does not necessarily make available information on the data used for training or modifying the model, nor on how compliance with copyright law was ensured.
Section 4 of the European Commission Guidelines on the scope of the obligations for general-purpose AI models provides further guidance on the open-source exemption.
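The exemption conditions described above follow a simple conjunctive logic, which can be sketched as a decision function. This is an illustrative model only: the field names are hypothetical, and the actual legal assessment under Articles 53(2) and 54(6) involves more nuance than a boolean check.

```python
# Illustrative sketch of the open-source exemption conditions described
# above (Articles 53(2) and 54(6) AI Act). Field names are hypothetical.
from dataclasses import dataclass

@dataclass
class GPAIModel:
    free_open_source_license: bool  # access, use, modification, distribution allowed
    not_monetised: bool             # the model is not monetised
    parameters_public: bool         # weights, architecture, usage info publicly available
    systemic_risk: bool             # classified as a model with systemic risk

def open_source_exemption_applies(m: GPAIModel) -> bool:
    """True if the model may benefit from the documentation and
    authorised-representative exemptions. Even where this holds, the
    copyright policy and training data summary obligations
    (Article 53(1), points (c) and (d)) still apply."""
    return (m.free_open_source_license
            and m.not_monetised
            and m.parameters_public
            and not m.systemic_risk)
```

Note that a model with systemic risk fails the check regardless of how openly it is released, mirroring the rule stated above.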
Pursuant to Article 53(1)(a) and Annex XI, Section 1, point 2(e), of the AI Act, providers of general-purpose AI models must document the known or estimated energy consumption of their model. If the energy consumption is unknown, it may be estimated based on the computational resources used. Moreover, the European Commission is empowered to adopt a delegated act detailing the measurement and calculation methodologies that providers should use to measure or estimate the energy consumption of their models, to allow for comparable and verifiable documentation. Until such a delegated act is adopted, the Model Documentation Form, in conjunction with the Transparency Chapter of the General-Purpose AI Code of Practice (Code), provides guidance on how providers may demonstrate compliance with this requirement, under ‘Energy consumption (during training and inference)’.
In particular, if a Signatory to the Code does not know the energy consumption of training their model, they commit to reporting an estimated amount unless they lack critical information about compute or hardware that prevents them from making such an estimate. In that case, the Signatory commits to documenting the information they lack. For the purpose of estimating energy consumption, the AI Office will use the available information about the computational resources used for training to derive an estimate. To do so, the AI Office will rely on knowledge of which critical information the provider lacks, on available scientific resources, and on preliminary results from the ongoing European Commission study on energy-efficient and low-emission AI, and will draw on expertise from the scientific panel.
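To illustrate what an estimate based on computational resources can look like, a common back-of-the-envelope approach multiplies the number of accelerators, training time, average power draw, and a data-centre overhead factor (PUE). This is a hedged sketch of one widely used estimation method, not the methodology the AI Office or a future delegated act will prescribe; all figures are hypothetical.

```python
def estimate_training_energy_kwh(
    num_accelerators: int,
    training_hours: float,
    avg_power_draw_watts: float,
    pue: float = 1.2,  # power usage effectiveness: data-centre overhead factor
) -> float:
    """Back-of-the-envelope training energy estimate in kilowatt-hours.

    energy_kWh = accelerators x hours x average power (W) x PUE / 1000
    """
    return num_accelerators * training_hours * avg_power_draw_watts * pue / 1000.0

# Hypothetical example: 1,000 accelerators running for 30 days
# at an average draw of 400 W, with a PUE of 1.2.
energy = estimate_training_energy_kwh(1000, 30 * 24, 400.0, pue=1.2)
# 1000 * 720 * 400 * 1.2 / 1000 = 345,600 kWh
```

Real estimates vary with hardware utilisation, cooling, and measurement boundaries, which is precisely the comparability gap a delegated act on methodologies is meant to close.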