05-11-2026 Article

AI Compliance 2026: What the Digital Omnibus Act Really Means for Businesses

Update Data Protection No. 250

With the political agreement reached in the trilogue on May 7, 2026, regarding the so-called AI Omnibus, the European Union is responding to growing criticism of the practical feasibility of the European AI Regulation. The aim of the proposed amendments is, in particular, to simplify regulatory requirements, reduce overlaps with sector-specific laws, and give companies more time to implement key provisions. At the same time, the agreement provides for new safeguards and additional prohibitions, particularly in connection with abusive AI-generated content. The planned changes thus address not only issues arising in the practical application of the AI Regulation but also set important priorities for the future balance between promoting innovation, ensuring competitiveness, and protecting fundamental rights in European digital law. Below, we provide an overview of the key elements of the trilogue agreement and their practical implications for businesses.

I. Background and Objectives of the AI Omnibus

The European Commission presented the so-called AI Omnibus in November 2025 as part of its European “Simplification Agenda.” It was prompted in particular by concerns expressed early on by industry, Member States, and practitioners about the practical feasibility of the AI Regulation. Criticism focused primarily on the high administrative burden, the short implementation deadlines for high-risk AI systems, and overlaps with existing sector-specific regulations, for example in the area of product safety law. It was further argued that certain requirements of the AI Regulation entail considerable legal uncertainty and additional compliance costs, particularly for small and medium-sized enterprises.

Against this backdrop, the Commission’s proposal aimed in particular to better align the application of the AI Regulation with existing sectoral regulatory frameworks and to avoid double regulation. Among other things, the proposal included longer transition periods for high-risk AI systems, selective relief from documentation and compliance obligations, and closer integration with sector-specific harmonization legislation. At the same time, the division of responsibilities between national authorities and the European AI Office was to be clarified, and regulatory fragmentation within the EU reduced.

However, the proposals proved controversial even during the legislative process. While parts of industry and individual Member States welcomed the planned simplifications as a necessary step toward ensuring European competitiveness, other stakeholders warned of a potential watering-down of the AI Regulation’s risk-based approach and of increasing legal fragmentation.

II. The Key Points of Agreement

The political agreement of May 7, 2026, essentially adheres to the objectives of the original Commission proposal but includes important refinements and compromise solutions in several areas.

1. New Deadlines

A central component of the trilogue agreement is the adjustment of the application deadlines for high-risk AI systems. This is based in particular on the assessment that the harmonized standards and technical tools required for practical implementation are unlikely to be available in time. The Council and Parliament have therefore agreed on a phased postponement of the relevant application dates.

For so-called stand-alone high-risk AI systems, the relevant provisions of the AI Regulation will now apply only from December 2, 2027. For high-risk AI systems that are part of regulated products, such as in the fields of machinery, elevators, or toys, a later start date of August 2, 2028, is planned.

In contrast, the deadlines for transparency obligations related to AI-generated content have been shortened. Providers are to implement the necessary technical solutions for labeling artificially generated content, such as watermarks or machine-readable markers, as early as December 2, 2026. This is intended, in particular, to curb the misuse of generative AI systems more quickly.

2. New Prohibitions

The trilogue agreement also provides for an expansion of the prohibitions on certain AI practices previously set forth in the AI Regulation. In particular, an explicit ban on AI systems used to create non-consensual sexual or intimate content, as well as depictions of sexualized violence against children (CSAM), has been newly included.

This addition underscores that, despite the simplifications sought, European lawmakers are maintaining the AI Regulation’s protection framework rooted in fundamental rights. At the same time, the agreement responds to the increasing prevalence of abusive deepfake applications and growing political pressure in the area of child and personal data protection.

3. Sector-Specific Solutions

Another key focus of the agreement concerns the relationship between the AI Regulation and existing sector-specific regulatory frameworks. Particularly for regulated product areas such as medical devices, machinery, toys, watercraft, or elevators, there was concern that parallel requirements from the AI Regulation and sectoral harmonization acts could lead to double regulation and additional conformity assessment procedures.

Against this backdrop, a mechanism was agreed upon that is intended to allow for the targeted resolution of overlaps between the AI Regulation and sectoral regulations through subsequent implementing acts. In addition, the Machinery Regulation is to be partially exempted from the direct application of certain AI Regulation requirements. At the same time, the Commission is given the option to adopt supplementary health and safety requirements specifically for AI systems within the scope of the Machinery Regulation.

III. Recommendations for Action

While the trilogue agreement provides companies with additional time and greater regulatory flexibility in key areas, there is no reason to suspend ongoing AI compliance projects. Rather, companies should make targeted use of the transition periods granted to consolidate existing governance and risk structures and to integrate the foreseeable adjustments into their compliance strategy at an early stage.

1. Do not pause existing AI compliance projects

Despite the extended transition periods, the fundamental requirements of the AI Regulation remain in place. Companies should therefore continue ongoing implementation projects, particularly in the areas of risk classification, documentation, governance, and internal responsibilities. The additional deadlines primarily provide greater planning certainty, but do not alter the fact that the regulatory requirements must be fully implemented in the medium term.

2. Prioritize transparency obligations for generative AI

There is an immediate need for action regarding AI-generated content. Companies that use generative AI systems or publish such content should implement technical solutions to meet the labeling requirements – such as watermarks or machine-readable tags – at an early stage. Since the transition period is significantly shorter here, this area is likely to become relevant before many traditional high-risk requirements.

3. Assess the relationship to sector-specific law early on

For companies in regulated industries – such as medical devices, mechanical engineering, mobility, or consumer products – the intersection between the AI Regulation and sector-specific product law is becoming increasingly important. Affected companies should therefore analyze early on which regulatory requirements will apply in parallel in the future and in which areas potential exemptions or special provisions might apply. Particularly in light of the announced further implementing acts, additional adjustments are to be expected in this area in the future.

4. Integrate prohibited AI practices and deepfake risks into existing governance

The new prohibitions make it clear that regulatory attention is increasingly focused on abusive applications of generative AI. Companies should therefore supplement existing AI governance structures with clear internal guidelines for handling synthetic media, deepfakes, and sensitive content. This applies not only to their own AI developments but also to the use of external AI tools by employees, service providers, or marketing departments.

IV. Conclusion and Outlook

The trilogue agreement on the AI Omnibus makes it clear that the European Union is placing greater emphasis on the practical implementability of the AI Regulation while simultaneously responding to increasing economic and regulatory pressure to adapt. In particular, the extended transition periods and the greater consideration of sector-specific characteristics are likely to provide many companies with additional planning certainty. At the same time, however, the agreement also shows that the EU is sticking to the AI Regulation’s fundamental risk-based regulatory approach and is even expanding it further in certain areas.

The political agreement must now be formally confirmed by the Council and the European Parliament and undergo a final legal-linguistic review. The AI Omnibus is expected to be finally adopted in the coming weeks. Companies should therefore closely monitor further developments and use the additional transition periods to adapt existing AI compliance structures to the foreseeable changes at an early stage.

This article was created in collaboration with our student employee Emily Bernklau.
