AI Act: EU Commission's New Code of Practice for General-Purpose AI Models Under Discussion
Update IP, Media & Technology No. 124
On August 2, 2025, the AI Act's provisions on general-purpose AI models become applicable. Shortly beforehand, on July 10, 2025, the European Commission published the Code of Practice for General-Purpose AI (GPAI-CoP), a voluntary code of conduct for companies in the field of artificial intelligence. It aims to implement the AI Act's requirements for general-purpose AI (GPAI) in a practical and legally compliant manner.
Purpose and Structure of the Code of Practice
The GPAI-CoP supports the implementation of Articles 53 and 55 of the AI Act and is divided into three chapters:
- Transparency
- Copyright
- Safety and Security
These three chapters contain specific provisions on documentation, legal compliance, and risk mitigation in the development and operation of GPAI models. The chapters on Transparency and Copyright are intended to enable providers of general-purpose AI models to demonstrate compliance with their obligations under Article 53 of the AI Act.
The chapter on Safety and Security is relevant only for providers of advanced models with systemic risk that fall under Article 55 of the AI Act.
Transparency
The central element of this chapter is a standardized documentation form that providers must keep up to date and submit to regulatory authorities. It requires information on the model architecture, energy consumption, the origin and selection of training data, approved use cases, and assessed risks. Beyond its regulatory purpose, the transparency requirement is also intended to build trust among end users and partner companies.
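To make the scope of these disclosures more concrete, the following is a minimal sketch of how a provider might represent such documentation internally as structured data. It is written in Python purely for illustration; all field names are assumptions mirroring the categories named above, not the official documentation form defined by the GPAI-CoP.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: the field names below are assumptions based on
# the categories named in the transparency chapter (architecture, energy
# consumption, training data, use cases, risks), not the official form.
@dataclass
class ModelDocumentation:
    model_name: str
    architecture: str                     # e.g. "decoder-only transformer"
    training_energy_kwh: float            # estimated training energy consumption
    training_data_sources: list[str] = field(default_factory=list)
    data_selection_criteria: str = ""     # how training data was selected
    approved_use_cases: list[str] = field(default_factory=list)
    assessed_risks: list[str] = field(default_factory=list)
    last_updated: str = ""                # ISO date; the form must be kept current
```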
Copyright
The code also obliges providers to treat copyright-sensitive content with particular care. This includes refraining from using illegal sources, honoring machine-readable opt-out signals, and establishing effective control mechanisms to prevent infringing outputs. In addition, complaint channels for rights holders are to be established.
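As a purely illustrative example of what honoring a machine-readable opt-out signal can look like in practice, the following Python sketch checks a site's robots.txt, one widely used opt-out mechanism, before a crawler collects a page as training data. The user agent name is a hypothetical placeholder, and robots.txt is only one of several conceivable opt-out signals.

```python
from urllib.robotparser import RobotFileParser

# Minimal sketch: honor a robots.txt opt-out before fetching a page for
# training data. "example-ai-crawler" is a hypothetical user agent name.
def may_collect(page_url: str, robots_url: str,
                user_agent: str = "example-ai-crawler") -> bool:
    parser = RobotFileParser()
    parser.set_url(robots_url)
    parser.read()  # download and parse the site's robots.txt
    return parser.can_fetch(user_agent, page_url)

if __name__ == "__main__":
    allowed = may_collect("https://example.com/articles/1",
                          "https://example.com/robots.txt")
    print("Collection permitted:", allowed)
```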
Safety and Security
The GPAI-CoP sets out specific obligations for AI models with a high degree of dissemination or particular risk potential, and thus for a small group of models. In addition to safety reports with scenario analyses and risk assessments covering the entire model life cycle, these include reporting obligations for security-relevant incidents. Open-ended risk identification also obliges providers to continuously identify and document potential, previously unknown risks.
The risk assessment is to be communicated to the EU Commission in a report at least every six months.
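Purely as a sketch of the reporting cycle described above, the following Python snippet models a recurring risk report; the record fields and the six-month helper are illustrative assumptions, not an official reporting schema from the AI Act or the GPAI-CoP.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Illustrative sketch only: field names are assumptions, not an official schema.
@dataclass
class RiskReport:
    model_name: str
    period_start: date
    period_end: date
    identified_risks: list[str] = field(default_factory=list)   # incl. newly discovered risks
    security_incidents: list[str] = field(default_factory=list) # reportable incidents
    mitigations: list[str] = field(default_factory=list)

def next_report_due(last_report: date) -> date:
    """Reports are due at least every six months (approximated here as 182 days)."""
    return last_report + timedelta(days=182)
```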
Legal Effect and Practical Relevance
Providers can submit to the GPAI-CoP voluntarily. The code has no direct legal effect but can serve as evidence of compliance with the AI Act's requirements. Providers who sign it benefit from regulatory simplifications and reduced testing effort. Those who decide not to sign can demonstrate compliance with the AI Act by other means; however, they must then establish their own structures for legal compliance and demonstrate their effectiveness transparently, which can be particularly challenging for smaller providers with limited resources. Non-signatories must also be prepared for increased inquiries from the Commission. Compliance with the AI Act will be monitored by the newly established "AI Office" within the EU Commission.
Reactions and Controversies
Reactions to the new code have been mixed. While some industry associations generally welcomed it, others criticized its lack of technical clarity and its extensive documentation requirements. It was suggested that both the AI Act and the GPAI-CoP, in their current form, take insufficient account of technological realities. Smaller companies in particular were said to face challenges, as the requirements appeared ambitious and only partially practicable. Moreover, the provisions were reportedly too abstract and difficult to integrate into existing processes, with clear operational guidance still lacking.
Another point of criticism concerned the insufficient differentiation between various types of AI systems. It was argued that the regulations treated generative AI, traditional machine learning models, and other technologies in largely the same way, despite differing risk profiles and use cases. This, some claimed, could lead to implementation uncertainty and a sense of overregulation.
The role of major U.S. tech corporations was also viewed critically. Some civil society organizations reportedly feared that these corporations' influence on the design of the code might come at the expense of European interests and standards. They consequently called for stronger safeguards for affected individuals, clearer regulation of generative AI systems, and an expansion of whistleblower protections.
Regarding the chapter on Copyright, which addresses the protection of creative works, doubts were raised about whether the proposed measures suffice to ensure meaningful protection of individual creative output. The obligations related to labeling and traceability of AI-generated content were considered technically difficult to implement and legally complex. Many companies were said to find the corresponding requirements overly intricate and impractical, pointing to a lack of clear standards and implementation tools.
On the other hand, industry representatives warned that excessive regulatory demands placed on U.S.-based AI partners might impair the innovative capacity essential to European companies. Regulation, it was argued, should support Europe's competitiveness in industrial AI, not hinder it. Many of the proposed measures were seen as going beyond the original objectives of the AI Act.
In addition, the timeline for implementing the AI Act was called into question. Although the provisions for general-purpose AI models take effect in August 2025, most of the Act's rules are scheduled to become fully applicable by August 2026. Industry stakeholders reportedly deemed this timeline unrealistic, and some even advocated suspending the entire AI Act, arguing that it is not yet practically viable. Furthermore, 44 European business leaders publicly recommended a two-year postponement to allow for a realistic and workable implementation. The current German federal government under Friedrich Merz also appeared to support a delay in the EU regulatory agenda.
Conclusion
With the GPAI-CoP, the European Commission seeks to provide practical guidance that supports companies in implementing the AI Act's requirements for general-purpose AI models. In practice, however, this poses major challenges. Concerns about technical ambiguities, extensive documentation requirements, and a lack of differentiation between AI systems, as well as worries about the influence of international technology companies, make clear that industry's response to the rules is still marked by considerable uncertainty. It remains to be seen whether the EU Commission will need to make further adjustments.