Legally Compliant Use of AI Transcription Tools in Businesses
Update Data Protection No. 248
The automated transcription of online meetings using AI-powered tools is increasingly becoming part of the daily work routine at many companies. It promises significant efficiency gains, such as reducing the burden of minute-taking tasks and simplifying the follow-up work after meetings. At the same time, however, its use involves the processing of personal data, which must be carefully evaluated from a legal perspective. In addition to data protection issues, criminal law aspects and new regulatory requirements – particularly those arising from the AI Regulation – also come into focus. Against this backdrop, the following article examines the key legal framework and highlights what companies should consider when using such tools.
I. Data Protection Classification
The use of AI-powered transcription tools in online meetings constitutes the processing of personal data within the meaning of the General Data Protection Regulation (GDPR). This regularly involves the capture of both spoken words and additional contextual information from participants, thereby bringing the activity within the scope of the GDPR. In its 40th Annual Report, the LfDI Baden-Württemberg clarifies in this regard that companies that decide to use such a tool are generally to be regarded as data controllers themselves and bear overall responsibility under data protection law.
In practice, the provider of the transcription service will often be classified as a processor, meaning that a data processing agreement pursuant to Article 28 of the GDPR must be concluded. This agreement must be critically reviewed and its restrictions technically enforced, particularly with regard to any potential use of the data for the provider's own purposes, such as training AI systems.
Furthermore, the processing requires a sound legal basis. In particular, the consent of the data subjects or the protection of legitimate interests may be considered, although a careful balancing of interests must be carried out on a case-by-case basis. In this context, the BayLDA emphasizes that consent, particularly in an employment context, often cannot be regarded as a valid basis, and instead the legitimate interest under Article 6(1)(f) of the GDPR takes on particular significance, provided that the necessity of the transcription can be justified.
In addition, the information obligations under Article 13 of the GDPR must be observed. Participants must be informed in a timely and transparent manner about the use of the transcription tool, the purpose of the processing, and any storage of data. Furthermore, it must be ensured that data subjects’ rights, such as the right to object, can be exercised effectively in practice.
II. Protection of the Spoken Word (Section 201 StGB)
In addition to data protection requirements, the criminal law protection of the spoken word must also be observed when using transcription tools. Under Section 201 of the German Criminal Code (StGB), anyone who records another person's non-public spoken word, or makes such a recording accessible to third parties, without that person's consent may be liable to prosecution. This is regularly relevant for automated transcription, since an audio recording is, at a minimum, created at the technical level.
In practice, this means that prior consent from the participants is generally required. While this consent may also be implied – for example, by remaining in a meeting that has been clearly identified as being transcribed – it always presupposes clear and transparent information about the transcription. Companies should therefore ensure that all participants are explicitly informed about the transcription before the recording begins and that they consent to it.
III. Requirements of the AI Regulation
The AI Regulation introduces an additional legal framework that must be taken into account when using transcription tools. The risk-based approach of the Regulation is particularly relevant here. Pure transcription solutions will generally not be classified as high-risk AI systems as long as they are limited to converting speech into text and do not make any further assessments or decisions.
However, this classification may change if transcription functions are combined with additional analysis or evaluation elements, such as for monitoring employee performance or analyzing the content of communications. In such cases, a connection to high-risk applications – particularly in an employment context – may be considered.
Regardless of a high-risk classification, companies are required to systematically document and manage the use of AI systems. This includes, in particular, clearly defining areas of application, assessing risks, and embedding the use of such tools into internal policies and compliance frameworks.
IV. Recommendations for Companies
In practice, the following measures can be derived from the requirements outlined above:
- Ensure transparency and participant involvement: The meeting invitation should already clearly inform participants about the planned transcription and its purpose. An additional notice at the start of the meeting (e.g., a verbal announcement or a pop-up) is recommended. Furthermore, it must be ensured that participants can object to the transcription or switch to alternative communication channels.
- Configure and review tool settings carefully: Features such as AI training, advanced content analysis, or unnecessary data storage should be deactivated wherever possible. Additionally, it should be assessed whether transcription without permanent storage (e.g., live captions) is sufficient. The blanket or permanent activation of such features should be avoided.
- Implement contractual and organizational safeguards: Entering into a robust data processing agreement is essential, and its terms should be carefully reviewed, particularly with regard to the provider's access to data. Additionally, it is advisable to introduce internal policies for the use of AI tools that set out clear rules on permissible use cases and the handling of sensitive content.
V. Conclusion and Outlook
The transcription of online meetings using AI-powered tools offers significant efficiency potential but is subject to complex legal requirements. In addition to data protection regulations and criminal law restrictions, the framework of the AI Regulation must increasingly be taken into account. It is therefore crucial for companies to manage the use of such tools in a structured manner and to ensure legal compliance. Given ongoing regulatory developments and the growing practical significance of this field, it is to be expected that supervisory authorities and legislators will devote even greater attention to this area in the future.
This article was created in collaboration with our student employee Emily Bernklau.