05-07-2025 Article

Artificial Intelligence: These Transparency Obligations Must Be Observed

Update Data Protection No. 208

The adoption of Regulation (EU) 2024/1689 on artificial intelligence ("AI Regulation") created a Europe-wide legal framework for the use of AI systems. The Regulation entered into force on August 1, 2024. Initial obligations, such as the introduction of measures to ensure basic AI competence within companies, have therefore already had to be implemented (we reported). Numerous other obligations – in particular for providers and operators of AI systems – must be implemented gradually by August 2026.

The mandatory measures also include compliance with specific transparency obligations when dealing with AI-generated or AI-manipulated content. The following overview sets out the requirements of Article 50(4) of the AI Regulation that are particularly relevant for companies that publicly disseminate AI content.

I. Scope

Article 50(4) of the AI Regulation sets out transparency requirements for certain AI systems that create or modify synthetic content. Its scope of application covers two main categories:

1. Deepfakes

The first category covers what are known as deepfakes. The AI Regulation defines a deepfake as "AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful" (Art. 3 No. 60 AI Regulation).

This covers content that appears to be a representation of reality; recognizably fictional representations are not covered. The decisive factor is not whether actual persons or events are depicted, but whether the content as a whole would lead an average viewer to assume it is authentic. Even minor manipulations that are not immediately recognizable may be sufficient to establish the character of a deepfake.

Article 50(4) of the AI Regulation does not distinguish between full generation and mere editing. Standard editing, such as applying filters, does not trigger a transparency obligation as long as it does not create a false impression that the depicted content is authentic. If, on the other hand, a supposedly real situation is artificially created or altered, disclosure is required.

2. Text content generated or manipulated by AI

The second category covers text content generated or manipulated by AI, which is likewise subject to the transparency obligations of Art. 50(4) AI Regulation. The inclusion of such content was not provided for in the Commission's original draft and was added only at the initiative of the European Parliament. This suggests that the legislator initially considered AI-generated text to be less in need of regulation than audio or visual content.

Furthermore, the disclosure obligation does not extend to all text content, but only to content that is published "to inform the public about matters of public interest" (Article 50(4) subparagraph 2 sentence 1 AI Regulation). This covers opinion-forming content of a political, social, economic, cultural, or scientific nature, regardless of whether it is of global or merely local or group-specific significance.

II. Addressees

The transparency obligations apply to the operators of AI systems (termed "deployers" in the official English version of the Regulation). This covers all natural or legal persons who use AI systems under their own responsibility, unless the AI system is used in the course of a personal, non-professional activity (Art. 3 No. 4 AI Regulation). Even making content available to a broad or undefined public can qualify as operation.

For companies, marketing departments, or providers of e-learning platforms, this means that when they use AI-generated content, they will as a rule be classified as operators within the meaning of the AI Regulation.

III. Implementation of transparency obligations

The implementation of the transparency obligations under Art. 50(4) AI Regulation requires operators to disclose that the content in question has been artificially generated or manipulated. Further information, such as the identity of the person responsible, is not required; a corresponding proposal by the European Parliament to name the creator was not adopted in the legislative process. The notice must therefore merely make clear that the content in question is not real but has been technically generated or altered.

According to Art. 50(5) AI Regulation, the disclosure must be clear and unambiguous and must be provided at the latest at the time of the first interaction or exposure. For audiovisual content, this can be done, for example, by integrating the notice directly into the image or audio material; for longer content, repetition may be necessary in order to reach less media-savvy users. For text-based content, a clearly visible and understandable notice in the immediate vicinity of the publication is sufficient.
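By way of illustration only: for content published on a website, such a notice could be placed immediately adjacent to the content element. The following TypeScript sketch is a minimal example under our own assumptions – the AI Regulation is technology-neutral and prescribes neither specific markup nor a particular API, and the function name, styling, and element ID shown here are hypothetical:

```typescript
// Illustrative sketch only: the AI Regulation prescribes no specific markup or API.
// The function name, styling, and notice text are assumptions, not requirements.

/** Inserts a clearly visible AI-disclosure notice immediately before a content element. */
function addAiDisclosure(content: HTMLElement, notice: string): void {
  const label = document.createElement("p");
  label.textContent = notice;
  label.setAttribute("role", "note"); // also exposes the notice to assistive technologies
  label.style.fontWeight = "bold";    // keeps the notice clearly distinguishable from the content
  content.insertAdjacentElement("beforebegin", label);
}

// Usage: place the notice in the immediate vicinity of the publication.
const article = document.querySelector<HTMLElement>("#ai-generated-article"); // hypothetical element ID
if (article) {
  addAiDisclosure(
    article,
    "This article was created in whole or in part with the help of artificial intelligence.",
  );
}
```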

However, the disclosure requirement is not without exceptions. In particular, it does not apply where use of the content is authorized by law for the detection, prevention, or prosecution of criminal offenses. A further qualification applies to deepfakes that form part of an obviously artistic, satirical, or fictional work. In these cases, the transparency obligation continues to apply in principle, but its fulfillment must not unreasonably impair the display or enjoyment of the work. It may be sufficient, for example, to show a corresponding notice at the beginning or end of a video if continuous labeling would disrupt the overall creative impression.

For text content, Art. 50(4) subpara. 2 also provides for an exception if the content has undergone human review or editorial control and, at the same time, a natural or legal person assumes editorial responsibility. In line with the protective purpose of the provision, these two conditions must be understood cumulatively. The exception is intended to cover in particular cases where AI-generated texts are reviewed, placed in context, and published under editorial responsibility – for example, in a journalistic or internal company context. In such cases, the labeling requirement does not apply, as human intervention significantly reduces the risk of misleading content being disseminated unchecked.

IV. Sanctions

Violations of the transparency obligations under Art. 50 AI Regulation can be penalized with substantial fines. According to Art. 99(4) AI Regulation, fines of up to EUR 15 million or, in the case of companies, up to 3% of the total worldwide annual turnover of the preceding financial year may be imposed, whichever is higher. For a company with an annual turnover of EUR 1 billion, for example, the upper limit would thus be EUR 30 million. The sanctions cover both the complete failure to label AI-generated content and its inadequate labeling.

In addition to financial risks, potential complaints to supervisory authorities and reputational damage resulting from publicly disclosed violations must also be taken into account.

V. Implementation steps in marketing

Compliance with the transparency obligations under Art. 50(4) AI Regulation requires a structured approach. Marketing departments are well advised to establish processes at an early stage for labeling AI-generated content in a legally compliant manner or, where applicable, for establishing that it is exempt from the labeling requirement.

The following implementation steps provide guidance; a code sketch of the resulting decision logic follows the list:

  • Inventory: Companies should record whether and to what extent AI-supported tools are used in marketing communications to create or edit text, image, audio, or video content.
  • Relevance check: It should be checked whether the content in question falls within the scope of Article 50(4) of the AI Regulation—in particular, whether it gives the impression of being a representation of reality or conveys opinion-forming information on public issues.
  • Check for exceptions: Companies should clarify whether editorial control has been exercised and whether a natural or legal person assumes responsibility for the content. In these cases, the labeling requirement may not apply. The same applies to artistic, satirical, or fictional works, provided that adequate disclosure is made.
  • Labeling: If no exception applies, a clear notice that is understandable to all users must be provided. The labeling should appear directly next to the content in question and, in the case of longer formats, be repeated as necessary.
  • Standardization: Companies should develop standardized wording for notices that can be used consistently. An example would be: "This article was created in whole or in part with the help of artificial intelligence."
  • Documentation and training: The procedures and responsibilities should be documented internally and employees should receive specific training, particularly with regard to the distinction between content that requires labeling and content that is exempt.
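The checklist above can be condensed into a simple decision rule. The following TypeScript sketch is illustrative only: the data structure, flags, and notice text are our own assumptions, and the helper does not replace a legal assessment of the individual case:

```typescript
// Illustrative decision helper for the checklist above. The type, flags, and
// notice text are assumptions for this sketch, not requirements of the Regulation.

interface ContentItem {
  kind: "text" | "image" | "audio" | "video";
  aiGeneratedOrManipulated: boolean; // inventory: was AI used to create or edit the content?
  appearsAuthentic: boolean;         // gives the impression of a real representation (deepfake test)
  informsPublicInterest: boolean;    // text on matters of public interest (Art. 50(4) subpara. 2)
  editoriallyReviewed: boolean;      // human review or editorial control was exercised
  editorialResponsibility: boolean;  // a natural or legal person assumes editorial responsibility
}

const STANDARD_NOTICE =
  "This article was created in whole or in part with the help of artificial intelligence.";

/** Returns the notice to attach, or null if no labeling obligation was identified. */
function requiredNotice(item: ContentItem): string | null {
  if (!item.aiGeneratedOrManipulated) return null; // outside Art. 50(4) entirely

  if (item.kind === "text") {
    // Relevance check: only text informing the public on matters of public interest is in scope.
    if (!item.informsPublicInterest) return null;
    // Exception: human review AND editorial responsibility must be fulfilled cumulatively.
    if (item.editoriallyReviewed && item.editorialResponsibility) return null;
    return STANDARD_NOTICE;
  }

  // Image, audio, and video content is in scope only as a deepfake. Obviously artistic,
  // satirical, or fictional works still require an adapted, non-intrusive disclosure,
  // which this simplified helper does not model.
  return item.appearsAuthentic ? STANDARD_NOTICE : null;
}
```

A result of null means only that this simplified logic identified no labeling obligation; borderline cases should still be reviewed individually.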