Use of artificial intelligence – What are the current legal considerations?
Data Protection Update No. 238
With the rapid advancement of artificial intelligence, the pace of regulation is also picking up. Even before the central obligations of the European AI Regulation for high-risk AI systems are due to take effect in August 2026, discussions are already underway in Brussels about postponing the deadlines. At the same time, initial rulings by German courts on copyright law relating to AI-generated content are setting precedents that are directly relevant to business practice. In Germany, the AI Market Surveillance and Innovation Act (AI-MIG) is under debate as the institutional framework for AI supervision, and outside Europe – for example, in the US with the Colorado AI Act – separate regulatory approaches are also emerging. For companies, this raises the question of which legal developments are currently particularly relevant and what requirements can already be derived from them today. The following overview highlights key regulatory and case law developments in AI law and outlines what companies should be prepared for.
I. Digital Omnibus: New deadlines for the AI Regulation
The European AI Regulation (AI Regulation) currently stipulates that key requirements for high-risk AI systems will become applicable from August 2, 2026. Particularly relevant are Sections 1 to 3 of Chapter III of the Regulation, which define the classification of AI systems as high-risk AI, the technical requirements, and the obligations of providers and operators. These provisions are of practical importance for many companies, for example when AI systems are used in human resources, credit decisions, or safety-related products.
Against this background, adjustments to the AI Regulation are currently being discussed at European level as part of the so-called Digital Omnibus Package. In addition to a general Digital Omnibus (we reported on this in Data Protection Update No. 221, No. 223, and No. 236), the Digital Omnibus on AI contains proposed amendments specifically relating to the AI Regulation. The central consideration is to postpone the application of the rules for high-risk AI.
The background to the discussion is primarily the currently limited practical feasibility of the requirements. Many of the requirements of the AI Regulation presuppose technical standards, certification procedures, and regulatory guidelines that are only partially available at present. Without these specifications, companies face considerable legal uncertainty in implementing the extensive compliance obligations for high-risk AI. The omnibus proposal therefore links the applicability of the relevant regulations to the availability of supporting guidelines and standards from the European Commission.
According to the current proposal, the rules for high-risk AI would only become applicable in the future after a corresponding decision by the European Commission, with staggered transition periods. For AI systems under Article 6(2) and Annex III of the AI Regulation (e.g., systems in the personnel or education sector), the obligations would take effect six months after such a decision, while for AI systems under Article 6(1) and Annex I (e.g., safety-related components of certain products), a twelve-month transition period is provided for. Irrespective of this, the proposal provides for absolute maximum deadlines: December 2, 2027, and August 2, 2028, respectively.
The legislative process is not yet complete. The omnibus package is on the agenda of the Committee of Permanent Representatives in the Council of the European Union for March 13, 2026, with a view to laying the groundwork for negotiations with the European Parliament. Formally, therefore, August 2, 2026, remains the date of application for the time being. However, in the political debate, a postponement of the deadlines is now considered likely.
II. AI and copyright
1. New case law from German courts
Parallel to regulatory developments, the copyright classification of AI applications is also becoming increasingly important. German courts are increasingly dealing with the question of how classic copyright protection mechanisms should be applied to content created or used in connection with AI systems. Initial rulings show that the use of AI does not negate the copyright protection of human works.
A recent example is a ruling by the Frankfurt Regional Court on December 17, 2025 (Ref. 2-06 O 301/25). The subject of the proceedings was a piece of music that had been created using generative AI from lyrics previously written by a natural person. The lyrics were written by a private author and set to music by a third party using an AI music generator. The song was then released through a music distributor and promoted on social media. The author of the lyrics subsequently asserted claims for injunctive relief.
The court clarified that the copyright protection of a text created by a human being does not lapse when the text is incorporated into, or adapted for, a new work such as a piece of music with the help of AI. The decisive factor is whether the original text continues to constitute a personal intellectual creation within the meaning of copyright law. The court affirmed this. Even though individual passages had been revised or reworded, the author's individual expression remained recognizable. The use of the text in the AI-generated song therefore constituted use of the protected work.
The court considered the distribution of the song to be an infringement of the lyricist's copyright, in particular the right of reproduction under Section 16 of the German Copyright Act (UrhG). Although the lyrics in the song had been partially altered, the basic structure and central passages had been retained. The distribution of the AI-generated piece of music could therefore be prohibited.
The decision is one of a series of recent cases in which German courts are beginning to apply copyright principles to AI constellations. For example, the Regional Court of Munich I had already ruled on copyright issues in connection with AI-generated content in its judgment of November 11, 2025 (Ref. 42 O 14139/24). There, too, it became clear that the decisive factor for copyright protection remains whether a human creative achievement is involved.
2. New EU initiatives for AI and copyright
At the European level, too, the copyright dimension of generative AI is increasingly coming into focus. On March 10, 2026, the European Parliament adopted a resolution on "Copyright and generative artificial intelligence – opportunities and challenges", which was largely based on an initiative by CDU MEP Axel Voss.
The resolution addresses the current uncertainties surrounding the interaction between generative AI and European copyright law. In the Parliament's view, the training of AI models with copyright-protected content, transparency regarding the training data used, and the remuneration of rights holders raise key legal questions. At the same time, it emphasizes that innovation in the field of AI and the protection of creative works should not be seen as opposites, but that both areas must be developed together.
In terms of content, Parliament advocates, among other things, greater transparency in the use of copyright-protected content for training AI systems. In the future, providers of generative AI should disclose which protected content has been used in training data sets. In addition, Parliament calls for the development of functioning licensing mechanisms to ensure fair remuneration for authors while enabling access to high-quality training data.
Another focus is on strengthening the position of rights holders, particularly those in the cultural and media industries. They should be given effective means of objecting to the use of their works for AI training purposes or of licensing such uses. In this context, the role of the European Union Intellectual Property Office (EUIPO) as a possible mediator for transparency and licensing mechanisms is also being discussed.
The resolution does not yet result in legally binding changes. Nevertheless, the initiative clarifies the political direction of further European regulation. The European Parliament expressly calls on the Commission to examine whether the existing copyright legal framework, in particular the rules on text and data mining, should be adapted or supplemented in view of the development of generative AI. It is therefore foreseeable that the copyright regulation of AI systems is likely to be further specified at EU level in the coming years.
III. National implementation of the AI Regulation: AI-MIG
In addition to developments at the European level, the national implementation of the AI Regulation is also progressing. Although the AI Regulation applies directly in all member states, supplementary national regulations are required, in particular to determine the competent authorities and to organize market surveillance and supervision. In Germany, this implementation is to be carried out by the planned AI Market Surveillance and Innovation Act (AI-MIG) (we reported).
The draft law essentially provides for the existing market surveillance structures for regulated products to be transferred to AI systems. In particular, it envisages a coordinating role for the Federal Network Agency, which is to act as a central point of contact for AI supervision issues and coordinate cooperation between the various specialist authorities. In addition, the respective competent supervisory authorities will remain responsible for certain areas of application, such as the financial sector. The aim of the law is to create an efficient supervisory structure while making the most of existing responsibilities.
The current draft of the law is on the agenda of the Bundesrat on March 11, 2026. In political and economic discussions, the project has so far been largely viewed as a necessary step toward the organizational implementation of European requirements. At the same time, isolated concerns have been expressed, for example regarding the complexity of the planned supervisory structure and the practical coordination between the authorities involved.
IV. International regulation: The Colorado AI Act
In addition to European regulation, specific legal frameworks for the use of AI systems are also increasingly emerging outside the European Union. One particularly noteworthy example is the Colorado AI Act, which was passed in May 2024 and is considered one of the first comprehensive AI regulations in the United States. The law will come into force gradually from June 30, 2026, and is aimed in particular at companies that develop or use AI systems that can have a significant impact on individuals.
Similar to the EU, the law focuses on so-called "high-risk artificial intelligence systems." These include, in particular, AI applications that are used in sensitive areas such as employment, lending, housing, healthcare, or education and can make automated decisions with potentially significant consequences for the individuals affected. The Colorado AI Act imposes a number of risk management and transparency obligations on developers and operators of such systems.
Among other things, companies must conduct risk assessments, analyze potential discriminatory effects, and implement appropriate risk mitigation measures. In addition, there are transparency obligations toward users and, in some cases, toward affected individuals, for example, when automated systems are used for decision support. The aim of the law is, in particular, to prevent algorithmic discrimination and ensure the responsible use of AI systems.
Compared to the European AI Regulation, the Colorado AI Act takes a more sector- and risk-oriented approach, but focuses primarily on avoiding discrimination risks and less on comprehensive technical requirements for AI systems. Nevertheless, the law shows that regulatory approaches to AI are also increasingly developing in the United States, albeit primarily at the individual state level so far.
V. Recommendations for action
The developments described above show that the legal framework for the use of AI is currently taking shape in several areas in parallel. Even though individual regulations are still in flux, companies can already take concrete measures to reduce their legal risks when using AI.
1. Check AI systems early on for possible high-risk classifications
Against the backdrop of the AI Regulation, companies should analyze their existing or planned AI applications to determine whether they could potentially be classified as high-risk AI systems within the meaning of Art. 6 AI Regulation. This applies in particular to applications in the area of human resources, automated decision-making processes, or safety-related product contexts. Even if the application of the relevant obligations may be delayed as a result of the Digital Omnibus procedure currently under discussion, extensive requirements for risk management, documentation, and governance are already foreseeable. Companies should therefore establish internal processes for classifying and evaluating AI systems at an early stage.
2. Review the use of copyright-protected content in AI applications
Recent case law from German courts and current initiatives at the EU level show that the handling of copyright-protected content in connection with AI systems is increasingly coming into focus. Companies should therefore review whether and to what extent protected content is processed when using generative AI for marketing, content creation, or software development, for example. This applies in particular to cases where texts, images, or music are fed into AI systems or processed by them. If necessary, appropriate rights of use or licenses should be obtained to avoid copyright risks.
3. Monitor the development of national and international AI regulation
In addition to the European AI Regulation, national supervisory structures and international regulatory models are also emerging. In Germany, the planned AI-MIG is currently creating the institutional framework for the supervision of AI systems. At the same time, other jurisdictions are developing their own regulatory approaches, such as the Colorado AI Act, particularly with regard to the risks of discrimination in AI-supported decisions. Companies that use or develop AI systems internationally should therefore align their compliance structures in such a way that the different regulatory requirements of various jurisdictions can be taken into account.
VI. Conclusion and outlook
The legal framework for the use of artificial intelligence is currently undergoing intensive development. In addition to the gradual implementation of the AI Regulation and possible adjustments within the framework of the Digital Omnibus procedure, initial guidelines are taking shape, particularly in copyright law, through case law and political initiatives at the European level. At the same time, national implementing laws such as the planned AI-MIG are creating the institutional basis for supervision and enforcement, while separate regulatory approaches are also increasingly emerging outside Europe.
For companies, this means that the legal framework for AI applications will continue to consolidate in the coming years. It is to be expected that, in addition to new guidelines and technical standards, court decisions and further European legislative initiatives in particular will play a significant role in giving the framework concrete shape. Companies are therefore well advised to monitor regulatory developments continuously and to align their internal processes with the foreseeable requirements at an early stage.
This article was created in collaboration with our student employee Emily Bernklau.