02-23-2026 Article

German Implementation Act for the AI Regulation (AI-MIG)

Update Data Protection No. 235, Update IP, Media & Technology No. 136

With Regulation (EU) 2024/1689 on artificial intelligence (AI Regulation), the European Union created, in the summer of 2024, the first comprehensive legal framework for the development, marketing, and use of AI systems. While the substantive requirements for AI systems follow directly from the regulation (we reported on this in Data Protection Updates Nos. 185 and 208), member states are required to establish national supervisory and enforcement structures. To this end, following a draft bill dated September 12, 2025 (we reported), the German federal government presented a government draft of a law on the market surveillance and innovation promotion of artificial intelligence (AI-MIG) on February 11, 2026.

The draft regulates in particular the responsibilities of the market surveillance authorities, their cooperation, and the national structure of the fine procedure. At the same time, it places emphasis on promoting innovation, for example through the establishment of an AI real-world laboratory (regulatory sandbox) and of coordination and competence structures. Even though the implementing law does not establish any new material obligations for companies, the proposed regulatory architecture will significantly shape future supervisory practice.

In the following, we present the key contents of the government draft and highlight the structural changes that companies should already be keeping an eye on.

I. Background and objectives

The AI Regulation applies directly in all member states as an EU regulation, creating a uniform legal framework with a risk-based approach that provides for graduated requirements ranging from transparency obligations to comprehensive specifications for high-risk AI systems.

Despite its direct applicability, the AI Regulation requires national support. In particular, member states must determine which authorities are responsible for application and enforcement, how market surveillance and notification are organized, which bodies conduct fine proceedings and handle complaints, and how existing administrative structures, including those shaped by Germany's federal system, are integrated into the new supervisory architecture.

This is precisely where the government draft of the AI-MIG, which has now been adopted, comes in. It specifies the distribution of responsibilities among authorities, regulates cooperation mechanisms, and creates the procedural basis for sanctions. At the same time, the draft aims to ensure that the AI Regulation is implemented in a way that is conducive to innovation and conserves resources, and to systematically integrate existing sector-specific expertise.

II. Key content

1. Federal Network Agency as central market surveillance authority

The core of the draft is the designation of the Federal Network Agency as the central market surveillance authority for compliance with the AI Regulation, unless special legal responsibilities apply (Section 2 (1) AI-MIG). The legislator has thus opted for a largely centralized model with a clear nationwide point of contact for AI-related supervisory issues.

The explanatory memorandum to the draft explicitly refers to efficiency and coherence considerations: the aim is to avoid fragmentation of responsibilities, prevent divergent interpretations of the AI Regulation, and pool scarce AI expertise. At the same time, the choice of the Federal Network Agency ties in with its growing role as a digital supervisory authority.

For companies, this means that in all constellations not explicitly related to a specific sector, the Federal Network Agency will in future be the primary point of contact for market surveillance issues, supervisory measures, and the enforcement of obligations under the AI Regulation.

To support this central role, the draft provides for the establishment of a coordination and competence center (KoKIVO) at the Federal Network Agency (Section 5 AI-MIG). The KoKIVO is intended to structure coordination between the authorities involved, pool expertise, and ensure uniform application of the AI Regulation.

2. Sectoral responsibilities

Despite the central role of the Federal Network Agency, existing supervisory structures will remain in place in certain areas. Authorities that already act as market surveillance authorities under EU harmonization legislation will also assume this function for AI systems related to the respective products (Section 2 (2) AI-MIG).

This applies in particular to regulated product sectors such as machinery, medical devices, motor vehicles, or other areas covered by Annex I of the AI Regulation. The aim is to leverage existing sector-specific expertise and not to impose completely new supervisory structures on companies.

A special regulation also applies to the financial sector: for AI systems directly related to regulated financial services – such as creditworthiness checks, credit ratings, or actuarial risk assessments – the Federal Financial Supervisory Authority (BaFin) is to be responsible as the market surveillance authority. This integrates AI supervision into existing financial market supervision.

Germany's federal structure is also taken into account: insofar as AI systems are placed on the market or used by public authorities of the federal states, market surveillance is the responsibility of the authorities designated under state law. The draft thus preserves the federal division of powers and integrates the federal states into the new supervisory architecture. At this point, however, fragmentation of competences and divergent interpretations of individual provisions of the AI Regulation could arise.

3. Promoting innovation: AI real-world laboratory and testing opportunities

In addition to market surveillance, the draft explicitly focuses on instruments that promote innovation. The Federal Network Agency is to be responsible in particular for the establishment and operation of a national AI real-world laboratory. This instrument ties in with the provisions of the AI Regulation, which require member states to establish at least one national regulatory real-world laboratory (regulatory sandbox).

The AI real-world laboratory is intended to give companies – especially start-ups and SMEs – the opportunity to develop and test AI systems under official supervision. The aim is to clarify regulatory requirements at an early stage and not to hamper innovation through legal uncertainties.

In addition, the draft provides for testing opportunities for high-risk AI systems. These are intended to allow certain systems to be tested under controlled conditions and regulatory issues to be addressed before widespread market deployment. Unfortunately, the guidelines planned for the treatment and classification of such systems are still pending, even though the AI Regulation requires them to be in place by February 2, 2026.

III. What does this mean for companies?

Even though the AI-MIG does not create any new material obligations, it marks an important transition from the normative level of the AI Regulation to practical enforcement in Germany. Companies should take the now concretized supervisory architecture as an opportunity to strategically review their internal structures and processes.

1. Conduct a responsibility analysis at an early stage

First, it is advisable to carefully analyze future regulatory responsibilities. Depending on the business model, either the Federal Network Agency as the central market surveillance authority, an existing sector-specific market surveillance authority, or, in the financial sector, BaFin may be responsible. Parallel responsibilities may arise, particularly in the case of technology-agnostic platform models or corporate structures with different product lines. Early clarification makes subsequent coordination with the authorities much easier and reduces the risk of delays in approval or review processes.

2. Adapt AI governance to the new supervisory structure

In addition, companies should adapt their AI governance to the new enforcement reality. The AI Regulation already requires providers of high-risk AI systems to implement structured risk management, comprehensive documentation, and a system for monitoring such systems after they have been placed on the market. Operators (deployers) of such systems are also subject to certain obligations, such as monitoring and information duties.

With the supervisory structure now clearly defined, the likelihood of coordinated audits and cross-sector coordination between authorities is increasing. It is therefore advisable to clearly define internal responsibilities, systematically map interfaces between data protection, product safety law, IT security, and regulatory compliance, and review existing control mechanisms for their resilience.

3. Examine real-world laboratory and testing options

At the same time, the planned instruments for promoting innovation should be strategically evaluated. The planned AI real-world laboratory opens up the possibility of testing novel AI applications, or those raising complex regulatory questions, under official supervision. For companies with innovative high-risk systems, this can be a suitable instrument for obtaining legal certainty at an early stage and integrating regulatory requirements into product development. Consciously embedding such test phases in the development strategy can not only minimize risks but also safeguard investment decisions.

4. Prepare for supervisory and fine proceedings

Finally, structured preparation for possible supervisory and fine proceedings is recommended. Clear internal processes for dealing with regulatory inquiries, defined escalation mechanisms, and coordinated communication strategies are central components of a robust compliance system. In view of the planned evaluations of the regulatory structure at the national and European level, ongoing monitoring of further legal developments is also advisable. Fines can be expected to be lower where a company can document that it at least made a genuine effort to meet the existing requirements.

IV. Conclusion and outlook

With the AI-MIG, the legislator is creating the organizational conditions for the effective enforcement of the AI Regulation in Germany. The substantive obligations continue to arise directly from the European regulation; however, the now planned authority architecture with the Federal Network Agency as the central market surveillance authority and clearly defined sectoral responsibilities will be decisive for practice.

The government draft of February 11, 2026, will now be introduced into the parliamentary legislative process. Following deliberation in the Bundestag and referral to the Bundesrat, a swift conclusion is expected in view of the deadlines under EU law. Major structural changes appear rather unlikely at present.

Companies should closely monitor further developments and already align their AI compliance with the foreseeable supervisory structure. With the AI-MIG coming into force, the enforcement of the AI Regulation in Germany will be specified in more detail, and supervisory practice will thus also gain noticeable momentum.
