Find legal and policy documents as well as communication materials such as fact sheets related to the EU AI Act.
The Commission has launched a stakeholder consultation to support the implementation of the AI Act’s obligation for providers of general-purpose AI models to identify and comply with reservation of rights expressed by rightsholders.
The consultation is open until 9 January 2026.
Access the Stakeholder consultation and call for expression of interest – Measure 1.3 of the GPAI CoP.
You can also find further information about this consultation in the Questions & Answers.
The AI Act Whistleblower Tool empowers individuals to securely submit a report and contribute directly to making AI in Europe safe, transparent, and trustworthy.
Whistleblowers play a vital role in identifying potential violations of the law that could endanger fundamental rights, health, or public trust, and which might otherwise go undetected. By reporting potential violations, whistleblowers can support the AI Office in detecting them early on, thereby contributing to the safe and transparent development of AI technologies.
For more information:
Commission launches whistleblower tool for AI Act | Shaping Europe’s digital future
AI Act Whistleblower Tool | Shaping Europe’s digital future
Also check our FAQs:
European Commission | FAQs
As part of the digital omnibus package presented on 19 November 2025, the Commission has proposed to simplify existing rules on Artificial Intelligence, cybersecurity, and data.
Regarding the AI Act, the Commission proposes linking the entry into application of the rules governing high-risk AI systems to the availability of support tools, including the necessary standards.
The timeline for applying the high-risk rules is adjusted by a maximum of 16 months: the rules would start applying once the Commission confirms that the necessary standards and support tools are available, so that companies have the support they need to comply.
See all details in the press materials:
Press release: Simpler EU digital rules and new digital wallets to save billions for businesses
Q&A: Digital Package | Shaping Europe’s digital future
Factsheet: Digital Package
AI Act webpage: AI Act | Shaping Europe’s digital future
The publication of this template promotes consistent and transparent reporting and helps providers demonstrate compliance with Commitment 9 of the GPAI Code of Practice.
This Code of Practice will aim to support compliance with the AI Act transparency obligations related to marking and labelling of AI-generated content.
The obligations under Article 50 of the AI Act (transparency obligations for providers and deployers of generative AI systems) aim to ensure transparency of AI-generated or manipulated content, such as deep fakes. The article addresses risks of deception and manipulation, fostering the integrity of the information ecosystem. These transparency obligations will complement other rules like those for high-risk AI systems or general-purpose AI models.
Here is technical guidance on how to securely provide documents to the AI Office in line with the obligations for providers of general-purpose AI models under the AI Act. EU SEND is the designated secure channel for submitting these documents to the European Commission, including notifications, reassessments, incident reports, and submissions related to the General-Purpose AI Code of Practice. All transmissions via EU SEND are encrypted and protected to ensure the confidentiality, integrity, and authenticity of the information shared.
The Commission has issued guidelines to clarify the scope of the obligations for providers of general-purpose AI models under the AI Act. These obligations enter into application on 2 August 2025. The guidelines on the scope of obligations for providers of general-purpose AI (GPAI) models help actors in the AI ecosystem understand whether the obligations apply to them and what is expected of them, ensuring they can innovate with confidence.