AI ACT: Should ChatGPT be classified as High-Risk AI?
Update Data Protection No. 220
Since the widespread introduction of ChatGPT and similar language models, companies have increasingly been using generative AI systems in business-critical processes such as text creation, customer support, and personnel pre-selection. With the European AI Regulation (AI Act) entering into force in August 2024, the question arises under which conditions the use of such systems legally qualifies as high-risk AI. While ChatGPT, as a general-purpose language model, is not initially assigned to any specific risk category, its use in sensitive areas of application, particularly in the employment context, may exceed the threshold for high-risk classification. The following article explains the system of the AI Regulation and shows when the use of ChatGPT is considered high-risk AI within the meaning of the regulation and what legal consequences this entails.
I. Risk-based approach of the AI Regulation
The AI Regulation follows a tiered, risk-based system in which obligations and prohibitions are graduated according to the potential risk of an application. The aim is to enable innovation on the one hand and to ensure the protection of safety and fundamental rights on the other. Accordingly, the regulation distinguishes between four main categories of AI systems.
AI practices with unacceptable risk are completely prohibited. This includes, in particular, applications that manipulate people or predict criminal behavior based on personal characteristics.
High-risk AI systems, on the other hand, are subject to strict requirements regarding data quality, transparency, documentation, and monitoring. They concern areas in which algorithmic decisions have a significant impact on the lives or rights of natural persons – for example, in healthcare, education, or the assessment and selection of employees.
Systems with limited risk must primarily meet transparency requirements. Users must be informed that they are interacting with AI and be able to understand how it works. Examples include chatbots in customer support or systems for automated content creation.
There are no specific regulatory requirements for AI applications with minimal risk, such as spam detection or computer games.
In addition, the AI Regulation explicitly covers general-purpose AI (GPAI) models, which can be used for a wide variety of applications and, depending on their intended use, may fall into any of the above categories. This group includes generative language models such as ChatGPT. Their legal status therefore depends crucially on the specific context in which they are used.
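To make this tiered structure more tangible, the following minimal Python sketch maps the categories described above to a few illustrative use cases. The tier names and example mappings are simplified assumptions drawn from the text; they are not a legal classification test.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers of the AI Regulation as summarized above (simplified)."""
    UNACCEPTABLE = "prohibited practice"
    HIGH = "high-risk system (Annex III)"
    LIMITED = "transparency obligations only"
    MINIMAL = "no specific requirements"

# Illustrative mapping of example use cases to tiers (assumption, not a legal test).
EXAMPLE_USE_CASES = {
    "predicting criminal behavior from personal traits": RiskTier.UNACCEPTABLE,
    "filtering and evaluating job applications": RiskTier.HIGH,
    "customer support chatbot": RiskTier.LIMITED,
    "spam detection": RiskTier.MINIMAL,
}

def tier_for(use_case: str) -> RiskTier | None:
    """A GPAI model such as ChatGPT has no fixed tier of its own;
    the tier follows from the concrete use case it is deployed for."""
    return EXAMPLE_USE_CASES.get(use_case)
```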
II. ChatGPT as generative AI
As a generative language model, ChatGPT is not limited to a specific task area, but can be used in various operational contexts such as text creation, data processing, or supporting internal decision-making processes. For certain applications, the AI Regulation provides for specific transparency obligations, which are regulated in Art. 50 AI Regulation and apply regardless of a possible high-risk classification (we reported). They are intended to ensure that users can recognize when they are interacting with AI-generated or AI-modified content.
In a business context, Article 50 of the AI Regulation is particularly relevant when ChatGPT is used to create or edit content that is communicated or published externally. This applies, for example, to the creation of texts for websites, press releases, or marketing campaigns, but also to automated chatbots that interact with customers. In all these cases, it must be made clear that the content in question has been generated or modified in whole or in part by an AI system. In contrast, internal drafts, notes, or other purely internal applications are generally not subject to labeling requirements as long as they are not intended for a wider audience.
The disclosure must be clear and unambiguous and must be available at the latest at the time of the first interaction or publication. A brief note such as "Created with the support of AI" is sufficient, provided that it clearly indicates that the content was generated technically. The transparency requirement applies to text content in particular if it serves to inform the public about political, economic, or social issues. If such a contribution is written or significantly modified with the help of ChatGPT, it must be disclosed that it was created with the support of AI.
Content that has undergone human review and editorial control, and for which a natural or legal person assumes editorial responsibility, is exempt from this obligation. If a text created by ChatGPT is reviewed, revised, and approved prior to publication, labeling is therefore not required. In such cases, the legislator assumes that the risk of misleading or unsupervised publications is sufficiently reduced by editorial responsibility.
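As a purely illustrative sketch of this labeling logic, the following Python fragment mirrors the rules just described. The content model, field names, and disclosure wording are hypothetical assumptions for demonstration purposes; the function is not a substitute for a legal assessment under Art. 50.

```python
from dataclasses import dataclass

@dataclass
class ContentItem:
    """Simplified model of a piece of content (hypothetical fields)."""
    ai_generated: bool            # wholly or partly generated/modified by an AI system
    external_publication: bool    # communicated or published outside the company
    human_editorial_review: bool  # reviewed, revised, and approved under editorial responsibility

AI_DISCLOSURE = "Created with the support of AI."

def requires_ai_label(item: ContentItem) -> bool:
    """Mirrors the Art. 50 rules summarized above (simplified, not legal advice)."""
    if not item.ai_generated or not item.external_publication:
        return False  # purely internal drafts are generally not covered
    if item.human_editorial_review:
        return False  # editorial responsibility exempts the content
    return True

def publish(text: str, item: ContentItem) -> str:
    # Append the disclosure no later than the time of publication if labeling is required.
    return f"{text}\n\n{AI_DISCLOSURE}" if requires_ai_label(item) else text
```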
Violations of the transparency obligations can be sanctioned with fines of up to €15 million or three percent of global annual turnover, whichever is higher, under Article 99(4) of the AI Regulation. In addition to financial penalties, there is a particular risk of reputational damage if AI-generated content is distributed without labeling, giving the impression that it was created by humans.
III. ChatGPT as high-risk AI
Whether the use of ChatGPT triggers the obligations for high-risk AI systems is determined by Art. 6 (2) in conjunction with Annex III of the AI Regulation. According to this, AI systems are considered high-risk if they are used in one of the areas of application listed in Annex III. The decisive factor is therefore not the technical functioning of the model, but its specific intended use.
Article 6(2) supplements the provision in paragraph 1, which refers to AI systems that are part of a product or safety component and are subject to a separate conformity assessment. Paragraph 2 extends this scope to so-called stand-alone systems, i.e., AI applications that can be operated independently of a specific hardware environment. ChatGPT typically falls into this category, as it is a universally applicable language model that can be used in different organizational and technical contexts.
The purpose of the regulation is to also cover systems whose use poses a high risk due to their social or fundamental rights implications. The decisive factor is whether the context of use is likely to affect the health, safety, or fundamental rights of natural persons. The risk assessment is therefore not linked to the model itself, but to the intended area of application, i.e., to what the system is used for and how it is used.
1. Intended use according to Annex III
Annex III of the AI Regulation lists a total of eight overarching high-risk areas, including the use of AI in law enforcement, education, and employment. Particularly relevant to the practical use of ChatGPT is Annex III No. 4, which covers AI systems "intended to be used for the recruitment or selection of natural persons, in particular to place targeted job advertisements, review or filter applications, and evaluate applicants."
If ChatGPT is used in one of these contexts, for example to automatically review applications, evaluate aptitude tests, or prepare personnel decisions, this will, as a rule, constitute a high-risk application within the meaning of Art. 6 (2) in conjunction with Annex III. The same applies if ChatGPT is used for profiling or evaluating employee performance. The targeted display of personalized job advertisements to specific groups may also fall under the regulation if the AI-supported selection mechanism influences the opportunities of individuals.
Practice shows that such systems in particular pose considerable risks to fundamental rights. Previous cases, such as algorithmic application filters or discriminatory job postings on large platforms, have made it clear that even subtle distortions in training data or model architectures can lead to discrimination. Against this background, the inclusion of employment and selection processes in the high-risk catalog is logical.
2. Significance of Art. 6(3) AI Regulation
Article 6(3) provides for an exception to the high-risk classification if the provider can demonstrate that the system in question does not pose a significant risk of harm to the health, safety, or fundamental rights of natural persons. This exemption takes into account the fact that not every AI-supported process in the field of human resources is necessarily high-risk.
This may apply to ChatGPT in particular if the system does not make automated decisions but merely serves as a tool, for example, for formulating job applications, preparing for interviews, or supporting human evaluations. However, this is conditional on the human decision-maker retaining control over the process at all times and critically reviewing the AI-supported results.
Whether the exception applies therefore depends on the specific implementation in the company. The more ChatGPT is integrated into decision-making processes, the more likely it is to be classified as a high-risk system. If, on the other hand, the model is only used for text processing or analytical support without any immediate decisions being made as a result, its use can be considered to be of limited risk.
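The following minimal Python sketch summarizes this context-dependent assessment as a simplified heuristic. The attribute names are hypothetical assumptions, and the logic only reflects the reading of Art. 6(2), Art. 6(3), and Annex III No. 4 outlined above; it cannot replace a case-by-case legal analysis.

```python
from dataclasses import dataclass

@dataclass
class HrUseCase:
    """Hypothetical description of how ChatGPT is embedded in an HR process."""
    screens_or_ranks_applicants: bool    # reviews/filters applications, evaluates candidates
    evaluates_employee_performance: bool
    involves_profiling: bool             # profiling of natural persons
    influences_decision_outcome: bool    # output feeds directly into the personnel decision
    human_retains_full_control: bool     # humans review and can override every result

def likely_high_risk(use: HrUseCase) -> bool:
    """Simplified heuristic reflecting Art. 6(2)/(3) and Annex III No. 4 as discussed above."""
    in_annex_iii_scope = (
        use.screens_or_ranks_applicants or use.evaluates_employee_performance
    )
    if not in_annex_iii_scope:
        return False
    if use.involves_profiling:
        # Profiling of natural persons remains high-risk even under the Art. 6(3) exception.
        return True
    # Exception along the lines of Art. 6(3): purely preparatory or supportive use
    # in which humans keep full control and the output does not shape the decision.
    if use.human_retains_full_control and not use.influences_decision_outcome:
        return False
    return True
```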
3. Assessment and differentiation
The classification of generative AI systems such as ChatGPT illustrates the central importance of the context of use in the risk-based approach of the AI Regulation. ChatGPT itself is not a high-risk system, but it can take on this character through its integration into critical processes, particularly in the area of employment and human resources management. The decisive factor is whether its use is suitable for deciding on opportunities, rights, or access to employment.
This requires companies to carefully determine the purpose and analyze the risks when implementing ChatGPT. They must document whether and to what extent the system influences decisions relating to fundamental rights and, if necessary, fulfill the extensive obligations for high-risk AI systems.
IV. Legal consequences of classification as high-risk AI
If the use of ChatGPT is classified as high-risk AI in accordance with Art. 6 (2) in conjunction with Annex III, companies as operators ("deployers" in the terminology of the AI Regulation) are subject to comprehensive obligations under Art. 26 of the AI Regulation. They must ensure that the system is only used in accordance with its intended purpose and the instructions for use provided by the provider.
Human oversight is of central importance: only qualified persons may monitor ChatGPT-based applications, check system outputs, and take corrective action if necessary. In addition, operators must ensure that the input data used is fit for purpose and does not contain any discriminatory biases.
In addition, operation and system performance must be monitored continuously. If risks or serious incidents are identified, the provider and, where applicable, the competent authority must be informed, and use of the system may have to be suspended until the problem is resolved. Operators must also retain automatically generated logs for at least six months and inform employees about the use of a high-risk system at the workplace before it is put into operation.
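A deliberately simplified Python sketch of two of these operator duties follows: a human-oversight gate for AI-supported results and a retention check for automatically generated logs. The names, fields, and the rough six-month approximation are assumptions for illustration only, not a compliance implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

LOG_RETENTION = timedelta(days=183)  # rough approximation of "at least six months"

@dataclass
class ScreeningResult:
    """Hypothetical record of a ChatGPT-assisted assessment."""
    candidate_id: str
    ai_recommendation: str
    reviewed_by: str | None = None                          # qualified human reviewer
    created_at: datetime = field(default_factory=datetime.now)

def finalize(result: ScreeningResult) -> str:
    # Human oversight gate: no AI-supported result takes effect without review.
    if result.reviewed_by is None:
        raise RuntimeError("AI output must be reviewed by a qualified person first")
    return f"Decision for candidate {result.candidate_id} approved by {result.reviewed_by}"

def may_delete_log(entry_time: datetime, now: datetime) -> bool:
    # Automatically generated logs are kept for at least six months
    # (unless other retention obligations apply).
    return now - entry_time >= LOG_RETENTION
```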
Finally, the regulation requires transparency towards those affected. Individuals who are subject to decisions supported or influenced by ChatGPT when used as high-risk AI must be informed about the use of the system. Companies should therefore clearly define internal processes and responsibilities to ensure verifiable compliance with operator obligations.
V. Conclusion and outlook
As a generative language model, ChatGPT is not automatically classified as high-risk AI. The specific intended use is decisive: if the system is used in sensitive areas such as personnel pre-selection, performance evaluation, or decision support, classification under Art. 6 (2) in conjunction with Annex III of the AI Regulation may be appropriate. In these cases, the extensive operator obligations of Art. 26 AI Regulation apply, which are intended to ensure safe, transparent, and traceable use.
For companies, this means reviewing the use of ChatGPT from a legal perspective at an early stage, establishing internal control mechanisms and supervisory processes, and training employees in the use of generative AI. The practical challenge in the coming years will be to exploit the potential of such systems without exceeding the legal limits set by the AI Regulation.
This article was created in collaboration with our student employee Emily Bernklau.