Article 54: Authorised representatives of providers of general-purpose AI models
This article sets out the requirements for providers of general-purpose AI models established outside the EU. Before placing a model on the Union market, such providers must appoint an authorised representative established in the EU to ensure compliance with the AI Act. The representative verifies documentation, keeps records, and cooperates with authorities, and must terminate the mandate if it considers that the provider is acting contrary to its obligations under the AI Act. Providers of open-source general-purpose AI models are exempt from these requirements unless their models present systemic risks.
1. Prior to placing a general-purpose AI model on the Union market, providers established in third countries shall, by written mandate, appoint an authorised representative which is established in the Union.
2. The provider shall enable its authorised representative to perform the tasks specified in the mandate received from the provider.
3. The authorised representative shall perform the tasks specified in the mandate received from the provider. It shall provide a copy of the mandate to the AI Office upon request, in one of the official languages of the institutions of the Union. For the purposes of this Regulation, the mandate shall empower the authorised representative to carry out the following tasks:
(a) verify that the technical documentation specified in Annex XI has been drawn up and all obligations referred to in Article 53 and, where applicable, Article 55 have been fulfilled by the provider;
(b) keep a copy of the technical documentation specified in Annex XI at the disposal of the AI Office and national competent authorities, for a period of 10 years after the general-purpose AI model has been placed on the market, and the contact details of the provider that appointed the authorised representative;
(c) provide the AI Office, upon a reasoned request, with all the information and documentation necessary to demonstrate compliance with the obligations in this Chapter;
(d) cooperate with the AI Office and competent authorities, upon a reasoned request, in any action they take in relation to the general-purpose AI model, including when the model is integrated into AI systems placed on the market or put into service in the Union.
4. The mandate shall empower the authorised representative to be addressed, in addition to or instead of the provider, by the AI Office or the competent authorities, on all issues related to ensuring compliance with this Regulation.
5. The authorised representative shall terminate the mandate if it considers or has reason to consider the provider to be acting contrary to its obligations pursuant to this Regulation. In such a case, it shall also immediately inform the AI Office about the termination of the mandate and the reasons therefor.
6. The obligation set out in this Article shall not apply to providers of general-purpose AI models that are released under a free and open-source licence that allows for the access, usage, modification, and distribution of the model, and whose parameters, including the weights, the information on the model architecture, and the information on model usage, are made publicly available, unless the general-purpose AI models present systemic risks.