The Use of AI in Employment: Legal Boundaries, Risks, and Duties
Update Data Protection No. 247
The use of artificial intelligence (AI) is also becoming increasingly important in human resources and now encompasses key processes throughout the entire employee lifecycle – from applicant selection and shift scheduling to performance evaluations. Companies expect this to lead to efficiency gains and more data-driven decision-making. However, with the European AI Regulation (AI Regulation) having entered into force in August 2024 and its obligations applying in stages, the legal classification of such systems is coming into sharper focus. In the HR sector in particular, many AI applications are classified as high-risk AI due to their connection to employment decisions and are therefore subject to comprehensive regulatory requirements. Below, we summarize the relevant legal framework for the use of AI in HR and practical implementation steps for you.
I. Areas of AI Application
Artificial intelligence is increasingly being used in human resources throughout the entire employee lifecycle, encompassing both strategic and operational HR processes. The spectrum ranges from supportive tools to systems that prepare or significantly influence decisions.
Recruiting is a key area of focus. Here, AI systems are used in particular for the automated pre-screening of applications, for matching candidate profiles with job requirements, and for communicating with applicants, for example via chatbots. The analysis of resumes and the structured evaluation of interviews are also increasingly being carried out with AI support.
Furthermore, AI is applied in onboarding and employee development, for example through the creation of individualized onboarding plans or personalized learning and training opportunities. In the areas of performance management and employee retention, AI systems are used to evaluate performance data, identify development potential, or predict turnover risks. The employer’s managerial authority is also likely to be increasingly supported by AI in the future – whether in the allocation of work or the scheduling of shifts or departmental plans.
Finally, AI is also playing a growing role in HR administration, for example in the automated processing of employee inquiries, the creation of documents, or the analysis of HR data.
II. Requirements of the AI Regulation
1. High-Risk AI
The AI Regulation follows a risk-based regulatory approach. The scope of the legal requirements is determined by the classification of an AI system into one of the risk categories provided for in the Regulation. While systems with low or minimal risk are subject only to limited transparency obligations, the Regulation imposes comprehensive requirements on high-risk AI, particularly with regard to risk management, data quality, documentation, and human oversight. Systems with unacceptable risk, on the other hand, are generally prohibited.
For the HR sector, it is of central importance that numerous typical use cases are classified as high-risk AI. Pursuant to Article 6(2) in conjunction with Annex III of the AI Regulation, this specifically covers AI systems used in connection with employment, human resources management, and access to employment. This primarily includes applications in recruitment, such as for the automated pre-selection and evaluation of applicants, as well as systems that prepare or influence decisions regarding hiring, promotion, or termination of employment. The same applies to AI-supported systems for performance evaluation or for assigning tasks based on individual behavioral or performance data. In these areas, there is a particular risk to the rights and freedoms of data subjects, which is why the Regulation provides for classification as high-risk AI.
In contrast, not all AI applications used in HR must necessarily be classified as high-risk AI. In particular, support systems in HR administration or in standardized, purely technical processing steps may fall below the high-risk threshold, provided they do not independently evaluate or make decisions regarding individuals or significantly influence such evaluations or decisions. This applies, for example, to simple automations in document creation or AI-supported tools for internal process optimization that do not involve personal data.
The exemption provided for in Article 6(3) of the AI Regulation is also of particular practical relevance. According to this provision, AI systems that generally fall within the application areas listed in Annex III are, in exceptional cases, not considered high-risk AI if they do not pose a significant risk to the health, safety, or fundamental rights of natural persons and, in particular, do not substantially influence decision-making. This may be the case, for example, with AI systems that merely perform a narrowly limited, supportive function, such as the purely formal structuring or sorting of application documents without evaluating their content – but not if the use of AI results in applicants being excluded from consideration from the outset. Systems that merely provide preparatory information without conducting an independent evaluation or significantly influencing the decision may also fall under this exception.
However, drawing this distinction is challenging and requires a careful analysis of the specific function of the respective AI system within the HR process. A key factor is whether the system merely supports the decision regarding an individual or whether it effectively dictates or significantly pre-structures it. Particularly in recruiting and in performance- and behavior-related evaluations, the threshold for high-risk AI is therefore regularly crossed.
2. Obligations for Operators of High-Risk AI
Companies that use AI systems in the HR sector are generally classified as operators ("deployers" in the English terminology of the AI Regulation) within the meaning of the Regulation. Article 26 of the AI Regulation attaches a separate, comprehensive system of obligations to this operator status.
Central to this is the obligation to implement appropriate technical and organizational measures to ensure that the high-risk AI system is used in accordance with its intended purpose and the specifications in the operating instructions. In addition, the use of the system must be subject to effective human oversight. This oversight must be carried out by sufficiently qualified and trained individuals who are capable of understanding the system’s functioning and results and intervening to correct them if necessary.
Another key focus is on ensuring data quality. To the extent that the input data is subject to the operator’s control, it must be ensured that it is suitable, relevant, and sufficiently representative for the intended purpose. This aspect is particularly important in the HR context, such as in applicant selection or performance evaluation, as erroneous or distorted data can directly lead to discriminatory or factually inaccurate results.
In addition, there are ongoing monitoring and response obligations during the operation of the AI system. Operators must monitor the system’s functioning and are obligated to immediately report any risks, malfunctions, or serious incidents to the provider and the competent authorities and, if necessary, suspend use of the system. This is accompanied by documentation obligations, in particular the obligation to retain automatically generated logs for a reasonable period of time, typically at least six months.
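Operationally, the six-month retention floor can be enforced directly in logging infrastructure. The following Python sketch is purely illustrative; the function and constant names and the 183-day approximation of six months are our assumptions, not terms of the Regulation:

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

# Operators must retain automatically generated logs for at least six
# months (approximated here as 183 days). Names are illustrative only.
MIN_RETENTION = timedelta(days=183)

def prunable(log_timestamp: datetime, now: Optional[datetime] = None) -> bool:
    """True only if deleting this log entry would not fall short of the
    six-month minimum retention period."""
    now = now or datetime.now(timezone.utc)
    return now - log_timestamp > MIN_RETENTION
```

A deletion job would call such a check before removing any AI-system log entry, so that routine log rotation cannot silently breach the retention duty.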
Finally, transparency obligations toward employees are of particular relevance to the HR sector. Employers are required to inform employee representatives and the affected employees about the use of a high-risk AI system before its introduction or use in the workplace. Furthermore, affected individuals must be informed if an AI system makes decisions about them or significantly supports such decisions.
III. Data Protection Aspects
The use of AI systems in the HR sector regularly involves the processing of personal data and is therefore subject to the provisions of the GDPR with virtually no exceptions. In this regard, the AI Regulation does not create a separate legal framework for data processing but rather supplements existing data protection requirements. Companies must therefore ensure that there is a sound legal basis for every form of AI use in HR and that the principles of purpose limitation and data minimization are upheld.
Particular challenges arise as early as the training and implementation phase of AI systems. Existing personnel data is often used for training purposes without having been originally collected or processed for this purpose, which raises questions regarding a change of purpose and permissibility under Article 6 of the GDPR. Compliance with transparency obligations toward applicants and employees can also prove difficult in practice, particularly with complex or opaque (“black box”) systems.
Article 22 of the GDPR is also of central importance, as it grants data subjects the right not to be subject to a decision based solely on automated processing that produces legal effects concerning them or similarly significantly affects them. Particularly in recruiting, in performance-based evaluations, or in connection with the employer’s managerial authority, it must therefore be ensured that AI systems, at most, prepare decisions, but that the final decision is made by a natural person. It should be noted that even a formally human decision can fall within Article 22 of the GDPR if the decision-maker merely rubber-stamps the AI’s recommendation without a substantive review of their own.
In addition, further data protection requirements must be observed, in particular the conduct of a data protection impact assessment for high-risk applications, the safeguarding of data subjects’ rights, and – when using cloud-based AI services – compliance with the requirements for transfers to third countries.
IV. Recommendations for Companies
Companies should first create a structured AI inventory in the HR sector and systematically record all AI applications currently in use and those planned. On this basis, a legal classification under the AI Regulation must be performed for each system, particularly with regard to a possible classification as high-risk AI. This inventory forms the basis for all further compliance measures and should be updated regularly.
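Such an inventory can be kept as a simple structured register. The following Python sketch illustrates one possible record layout; all field names and risk labels are illustrative assumptions for this example, not terminology defined by the AI Regulation:

```python
from dataclasses import dataclass

# Illustrative HR AI inventory record; fields and labels are assumptions.
@dataclass
class AISystemRecord:
    name: str
    hr_process: str            # e.g. "recruiting", "shift scheduling"
    vendor: str
    risk_class: str            # e.g. "high-risk", "limited", "minimal", "tbd"
    dpia_completed: bool = False
    works_council_informed: bool = False

def high_risk_gaps(inventory: list[AISystemRecord]) -> list[str]:
    """Names of high-risk systems still missing a DPIA or works council notice."""
    return [r.name for r in inventory
            if r.risk_class == "high-risk"
            and not (r.dpia_completed and r.works_council_informed)]
```

Querying such a register for open compliance gaps makes the inventory a living control rather than a one-off snapshot.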
Building on this, it is recommended to implement clear human-in-the-loop structures in HR. Specifically, this means that AI systems – particularly in recruiting or performance evaluations – must not be allowed to make autonomous decisions, but must always be subject to review and override by qualified HR staff. This requires not only organizational guidelines but also appropriate training for the relevant employees to critically assess the AI’s results.
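Technically, such a human-in-the-loop structure means that the system’s output is only ever recorded as a recommendation, and a decision record cannot come into existence without a named human reviewer. A minimal, purely illustrative Python sketch (all names are assumptions):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Recommendation:
    candidate_id: str
    ai_score: float          # AI-generated, advisory only

@dataclass(frozen=True)
class Decision:
    candidate_id: str
    outcome: str             # e.g. "advance" or "reject"
    reviewer: str            # named natural person, always required

def decide(rec: Recommendation, outcome: str, reviewer: str) -> Decision:
    """Create a decision record; refuses to do so without a human reviewer."""
    if not reviewer:
        raise ValueError("a human reviewer must be recorded for every decision")
    return Decision(rec.candidate_id, outcome, reviewer)
```

The design choice is that the type system and the constructor, not a policy document alone, prevent an AI score from being turned into a decision without human sign-off.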
Another key step is the early involvement of the works council and the creation of transparent regulations. The introduction of AI systems in human resources is almost always subject to the works council’s right of co-determination. Ideally, therefore, a framework works agreement on the use of AI should be concluded. In parallel, an internal AI policy should be established that specifically regulates the permissible use of AI in HR, for example regarding the handling of applicant data or the use of external AI tools.
Finally, companies should make targeted investments in data quality and testing. Before productive use, it must be verified whether the data used is representative and free of systematic biases. Additionally, AI systems should be regularly tested using specific HR use cases, such as through spot checks in the recruiting process, to identify and correct discriminatory or factually incorrect results at an early stage.
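One simple heuristic for such spot checks is to compare selection rates between applicant groups, as in the “four-fifths” rule known from US anti-discrimination practice. The following Python sketch is illustrative only and does not reflect a legal standard under the AI Regulation or the GDPR:

```python
# Disparate-impact spot check: flag if the lower group's selection rate
# falls below 80% of the higher group's rate. Illustrative heuristic only.
def selection_rate(selected: int, total: int) -> float:
    return selected / total if total else 0.0

def impact_ratio(rate_a: float, rate_b: float) -> float:
    hi, lo = max(rate_a, rate_b), min(rate_a, rate_b)
    return lo / hi if hi else 1.0

def flags_bias(selected_a: int, total_a: int,
               selected_b: int, total_b: int,
               threshold: float = 0.8) -> bool:
    ratio = impact_ratio(selection_rate(selected_a, total_a),
                         selection_rate(selected_b, total_b))
    return ratio < threshold
```

A flagged result is not proof of discrimination, but a trigger for closer review of the system and its training data.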
V. Conclusion and Outlook
The use of AI in HR offers significant potential for efficiency and optimization, but is subject to complex legal requirements. In particular, the classification of many HR applications as high-risk AI leads to extensive obligations under the AI Regulation, which in practice are closely intertwined with the provisions of the GDPR. Companies are therefore required to systematically document the use of AI in human resources at an early stage and ensure legal compliance.
With regard to the implementation of the AI Regulation, current regulatory developments are also gaining further momentum. As part of the so-called “AI Omnibus,” discussions at the European level are focusing in particular on adjustments and clarifications regarding implementation deadlines, which is also significant for high-risk AI systems in HR. Regardless of potential exemptions or delays, however, it is already clear that companies must establish the necessary organizational and technical prerequisites to meet future requirements. The legally compliant use of AI is thus increasingly becoming an ongoing compliance task – not only in the HR sector.
This article was created in collaboration with our student employee Emily Bernklau.