AI ACT: How do companies need to label AI-generated content?
Update Data Protection No. 225
With the transparency requirements of Article 50 of the AI Regulation, European legislators are responding to the growing prevalence of AI-generated and AI-manipulated content and the associated risks to trust, opinion-forming, and democratic processes. The provisions require providers and operators of certain AI systems to identify the artificial origin of content and make it recognizable to users. However, since Article 50 of the AI Regulation sets out the technical and organizational requirements only in basic terms, the European Commission has published a first draft of a voluntary code of conduct on the transparency of AI-generated content. The draft is intended to flesh out the legal requirements of Article 50 of the AI Regulation and at the same time serve as a practical benchmark for the implementation of the transparency obligations.
I. Legal framework: Transparency requirements of Article 50 of the AI Regulation
Article 50 of the AI Regulation contains a tiered system of transparency obligations that differentiates between providers and operators depending on the type of AI system and the role of the respective actor. The central idea is to ensure that natural persons can recognize whether they are interacting with an AI system or are being confronted with AI-generated or AI-manipulated content. At the same time, the provision takes into account the fact that transparency requirements are context-dependent and cannot, or should not, apply with the same intensity in every case.
The regulation focuses on transparency requirements for generative AI systems. Pursuant to Art. 50 (2), providers of such systems must ensure that synthetic audio, image, video, or text content is labeled in a machine-readable format and is recognizable as artificially generated or manipulated. Article 50 of the AI Regulation deliberately refrains from specifying concrete technical procedures, but formulates qualitative requirements for the effectiveness, interoperability, resilience, and reliability of the solutions used. At the same time, technical feasibility, content specifics, economic reasonableness, and the state of the art must be taken into account.
On the operator side, Art. 50 (4) AI Regulation contains specific disclosure requirements for particularly sensitive use cases (we reported). For example, deepfakes must be disclosed as artificially generated or manipulated in order to limit the risk of deception. However, for obviously artistic, creative, satirical, or fictional content, this obligation is mitigated and limited to appropriate disclosure that does not compromise the character of the work. Comparable transparency requirements apply to AI-generated or manipulated texts that are published to inform the public about matters of public interest (typically news), provided that there is no human review or editorial responsibility.
These obligations are flanked by Art. 50(5) of the AI Regulation, which stipulates that information must be provided in a clear, unambiguous, and accessible manner at the latest when the AI system or AI content is first encountered.
Against this background, Article 50 of the AI Regulation expressly provides for the development of practical guidelines and codes of conduct at Union level to specify the open terms and technical requirements, thereby creating the legal basis for the draft code of conduct for AI-generated content that has now been presented.
II. Key content of the draft
The draft code of conduct on the transparency of AI-generated content published by the European Commission is explicitly intended as a concrete implementation of the transparency obligations under Article 50 of the AI Regulation. It was developed by two multidisciplinary working groups and is based on extensive consultations with industry, academia, civil society, and Member States. The code is deliberately designed as a soft law instrument: it is not legally binding, but is intended to serve as a reference framework for the implementation of the legal obligations and, at the same time, to provide supervisory authorities with a uniform basis for assessment.
1. Section 1: Obligations for providers of generative AI systems
Section 1 of the draft addresses providers of generative AI systems and aims to specify, in technical and organizational terms, the transparency obligations set out in Article 50(2) and (5) of the AI Regulation for the technically reliable labeling of audio, image, video, or text content. The code takes a clearly functional approach. It does not specify a particular technical procedure, but rather formulates binding basic principles, graduated obligations, and concrete measures that providers should use to ensure the labeling and recognizability of AI-generated or AI-manipulated content.
The central approach of the draft is a multi-layered labeling approach. The draft explicitly assumes that there is currently no single technology that can meet the legal requirements on its own. Providers therefore undertake to use a combination of several active labeling techniques that complement and reinforce each other. These include, in particular, machine-readable metadata, imperceptible watermarks and, where necessary, supplementary fingerprinting or logging mechanisms.
For content that allows metadata embedding, the code stipulates that information on the origin and creation process of the content must be included in the metadata and digitally signed. In addition, AI-generated or manipulated content must be marked with an invisible watermark that is as robust as possible against typical processing steps such as compression, cropping, or format changes. The draft leaves open whether these watermarks are set during training, inference, or in the output layer, but explicitly calls for the "best possible technically and economically viable" implementation. For particularly challenging content types – such as short texts – additional methods such as logging or hashing can be used to enable later attribution.
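To make the interplay of signed metadata and hash-based attribution more tangible, the following minimal Python sketch shows how a provider might build and sign a provenance record for a piece of generated content. All field names, the model identifier, and the HMAC-based signature are illustrative assumptions: the draft does not prescribe a specific format, key scheme, or library, and a production system would rely on asymmetric signatures and established provenance standards rather than a shared secret.

```python
import hashlib
import hmac
import json
from datetime import datetime, timezone

# Hypothetical provider signing key; in practice an asymmetric key
# managed in a dedicated signing infrastructure would be used.
SIGNING_KEY = b"provider-demo-key"

def build_provenance_record(content: bytes, model_id: str) -> dict:
    """Build a machine-readable provenance record for AI-generated
    content and sign it (HMAC used here only as a stand-in for a
    proper digital signature)."""
    record = {
        "generator": model_id,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "origin": "ai-generated",
        # A content hash enables later attribution even for short texts
        # where metadata or watermarks cannot be embedded directly.
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode("utf-8")
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

if __name__ == "__main__":
    text = "This paragraph was produced by a generative AI system.".encode("utf-8")
    print(json.dumps(build_provenance_record(text, "example-model-v1"), indent=2))
```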
The code pays particular attention to responsibility along the value chain. Providers of base models, especially general-purpose generative AI models or models with open weights, should implement labeling techniques at the model level to make it easier for downstream providers to comply with transparency requirements. At the same time, overall responsibility for proper labeling remains with the respective provider of the AI system, especially in the case of multimodal outputs or the combination of several models.
In addition to labeling, Section 1 also requires that AI-generated content be traceably detectable. To this end, providers should offer free interfaces, APIs, or publicly accessible detectors that allow users and third parties to check whether content has been generated or manipulated by a particular AI system. The results must be explained in an understandable way and be accessible without barriers. In addition, forensic detection methods are required that work even when labels have been removed or damaged, in order to counteract attempts at manipulation.
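As a purely illustrative counterpart to the signing sketch above, the following Python fragment shows what a simple verification step behind such a public detection interface could look like: recompute the content hash and check the signature. The key handling and field names are assumptions carried over from the previous sketch, not requirements taken from the draft.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"provider-demo-key"  # same illustrative key as in the signing sketch

def verify_provenance(content: bytes, record: dict) -> bool:
    """Check whether a provenance record matches a piece of content:
    the content hash must match and the signature must verify."""
    if hashlib.sha256(content).hexdigest() != record.get("content_sha256"):
        return False
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode("utf-8")
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record.get("signature", ""))
```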
These technical obligations are accompanied by requirements for interoperability, standardization, and governance. Providers should adhere to open standards, support common verification infrastructures, and regularly test, monitor, and further develop their solutions. To this end, the code provides for adaptive threat models, documented compliance frameworks, and close cooperation with market surveillance authorities, among other things. Overall, Section 1 thus outlines a demanding but deliberately flexible implementation model that is intended to enable technological innovation while at the same time placing high expectations on the organizational and technical maturity of generative AI providers.
2. Section 2: Obligations for operators
Section 2 of the draft specifies the transparency obligations under Article 50(4) and (5) of the AI Regulation for operators of AI systems that use and distribute deepfakes or certain AI-generated or AI-manipulated texts. Unlike Section 1, which primarily focuses on technical labeling and machine-readable recognizability, the focus here is on the disclosure of the AI origin in a manner that is immediately perceptible to humans. The code expressly understands these obligations as a supplement to the technical measures taken by providers and places the responsibility for the specific design of transparency with those actors who actually publish or distribute content.
A central element of the operator section is the introduction of a uniform disclosure logic that should be recognizable and contextually appropriate throughout the EU. To this end, the draft initially provides for a common taxonomy that distinguishes between fully AI-generated content and AI-assisted content. This differentiation is intended to enable users to better assess the extent of AI involvement, particularly with regard to the potential for deception and the depth of content intervention. The taxonomy also serves as the basis for all further labeling and disclosure measures by operators.
On this basis, the code obliges operators to label deepfakes and AI-generated or manipulated texts on topics of public interest using a common icon. Until a uniform EU-wide symbol is developed, the use of a transitional icon is envisaged, which will generally consist of a two-letter abbreviation for artificial intelligence (e.g., "AI"). The icon must be clearly visible at first glance, unambiguously attributable to the content concerned, and placed in a position suitable for the medium in question. In the future, an interactive EU symbol is to be developed that not only indicates the AI origin, but also provides further information on the type and scope of AI processing, for example by linking to machine-readable provenance data under Section 1 of the Code.
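The following short Python sketch illustrates, under stated assumptions, how the draft's taxonomy and the transitional "AI" label might be represented in an operator's content pipeline. The enum values and label wording are hypothetical; the draft itself only fixes the distinction between fully AI-generated and AI-assisted content and the two-letter abbreviation.

```python
from enum import Enum

class AIInvolvement(Enum):
    """Illustrative rendering of the draft's taxonomy distinguishing
    fully AI-generated from AI-assisted content."""
    FULLY_AI_GENERATED = "fully_ai_generated"
    AI_ASSISTED = "ai_assisted"

def transitional_label(involvement: AIInvolvement) -> str:
    """Return an example transitional disclosure text built around the
    two-letter abbreviation foreseen by the draft ("AI")."""
    if involvement is AIInvolvement.FULLY_AI_GENERATED:
        return "AI: fully AI-generated content"
    return "AI: AI-assisted content"
```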
The draft attaches great importance to context-specific disclosure. Adapted forms of disclosure are described for different formats such as real-time videos, recorded videos, images, audio content, or multimodal content. For example, videos may require the icon to be displayed permanently, while audio formats may require additional or alternative acoustic cues. For particularly intrusive content such as deepfakes, the code requires clear, timely labeling that is perceptible to the audience without additional interaction. At the same time, Section 2 takes into account the fundamental rights tensions that can arise in connection with transparency obligations. For obviously artistic, creative, satirical, or fictional works, a less stringent disclosure is provided for, which must not impair the enjoyment and expressiveness of the work. Nevertheless, even in these cases, appropriate references to the use of AI should be made in order to avoid deception and protect the rights of third parties.
Finally, the code contains special provisions for AI-generated or manipulated texts on matters of public interest. Operators must disclose such texts as a matter of principle, unless they have been subject to human review or editorial control and a natural or legal person bears editorial responsibility. In order to invoke this exception, the draft requires traceable internal processes and a documented assignment of editorial responsibility.
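Reduced to a simple decision rule, the disclosure logic for such texts could be sketched as follows. The parameter names are illustrative assumptions, and the sketch deliberately leaves aside the procedural requirements (traceable internal processes, documentation) that the draft attaches to the exception.

```python
def requires_text_disclosure(ai_generated_or_manipulated: bool,
                             human_review_or_editorial_control: bool,
                             editorial_responsibility_assigned: bool) -> bool:
    """Illustrative rule for AI texts on matters of public interest:
    disclosure is required unless the text underwent human review or
    editorial control AND a natural or legal person bears editorial
    responsibility."""
    if not ai_generated_or_manipulated:
        return False
    return not (human_review_or_editorial_control
                and editorial_responsibility_assigned)
```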
The material disclosure requirements are accompanied by organizational requirements. Operators should maintain internal compliance documentation, train employees, and establish mechanisms for reporting and correcting incorrect or omitted labels. The code also emphasizes the importance of accessibility: disclosures must also be perceptible to people with disabilities, for example through alternative text descriptions, audio cues, or sufficient visual contrasts. Overall, Section 2 positions operators as the central interface between technical labeling and public perception of AI content and assigns them an active role in protecting the information space.
III. Practical implications and recommendations for action
The draft code of conduct makes it clear that the transparency requirements of Article 50 of the AI Regulation should be implemented in a timely manner, even though they will not become binding until August 2026. Although the code itself is not legally binding, it is likely to serve as a central reference framework for supervisory authorities and thus effectively set the standard for proper implementation. Providers and operators should therefore already take it into account when designing their compliance measures.
For providers of generative AI systems, this means in particular that the labeling and detectability of AI content must be understood as an integral part of system design. Subsequent implementation is only possible to a limited extent, both technically and organizationally. It is therefore recommended to evaluate suitable labeling and detection methods at an early stage and to establish documentation and testing processes for providing evidence to supervisory authorities.
Operators of AI systems are faced with the task of establishing clear internal processes for classifying, labeling, and disclosing AI content. This applies in particular to the handling of deepfakes, AI texts on topics of public interest, and the clear demarcation of editorially responsible content. Internal guidelines, training, and reporting processes can help to implement labeling requirements consistently and in a context-appropriate manner.
IV. Outlook and conclusion
The draft code of conduct provides an important clarification of the transparency requirements of Article 50 of the AI Regulation and sets a clear direction for future practice. Once the current consultation phase has been completed, a revised draft is to be presented in spring 2026, before the final code of conduct is expected to be published by mid-2026. The transparency obligations under Article 50 of the AI Regulation will become binding on August 2, 2026. Against this backdrop, it is already clear that transparency of AI-generated content will become a permanent compliance issue. Companies and public authorities would therefore be well advised to closely follow the further development of the code and to integrate transparency requirements into their AI strategies at an early stage.
This article was created in collaboration with our student employee Emily Bernklau.