Legal framework for the use of AI agents
Update Data Protection No. 215
AI agents are considered the next stage in the development of artificial intelligence and open up new opportunities for companies to automate complex tasks. While the discussion has so far focused primarily on generative language models, the focus is now shifting to systems that can plan, decide, and act independently. With the AI Regulation coming into force and data protection requirements still in place, a complex legal framework is emerging that shapes the use of such agents. Companies must address regulatory requirements at an early stage in order to take advantage of the opportunities offered by the technology while avoiding compliance risks. The following section takes a closer look at how AI agents work, their legal classification, and the key areas of action for companies.
I. Functions of AI agents
AI agents are specialized systems, based on large language models (LLMs) or similar technologies, that can independently plan, execute, and continuously adapt tasks. Unlike traditional AI applications or rule-based automation (e.g., robotic process automation), they have a kind of "cognitive center" that translates the user's objectives into action steps, evaluates intermediate results, and derives further actions accordingly. Their functionality is characterized by three central mechanisms: (1) planning and execution loops, which break complex tasks down into subtasks in iterative cycles; (2) retrieval-augmented generation (RAG), in which external knowledge bases are consulted depending on the context; and (3) tool calling, i.e., the ability to directly control external programs, interfaces, or databases.
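To make these mechanisms more tangible, the following is a minimal sketch of such a planning and execution loop with tool calling, written in Python. All names (call_llm, TOOLS, Step) and the decision logic are simplified assumptions for illustration, not the API of any particular framework.

```python
# Minimal sketch of an agent's plan-and-execute loop with tool calling.
# All names and the decision logic are illustrative placeholders,
# not the interface of any real agent framework.
from dataclasses import dataclass


@dataclass
class Step:
    tool: str         # which external capability to invoke
    argument: str     # input passed to that capability
    result: str = ""  # filled in after execution


# Tool calling: each entry triggers a real action, not just generated text.
TOOLS = {
    "search_knowledge_base": lambda q: f"[RAG] relevant documents for '{q}'",
    "create_draft_invoice": lambda data: f"[ERP] draft invoice created from '{data}'",
}


def call_llm(objective: str, history: list[Step]) -> Step | None:
    """Placeholder for the "cognitive center": given the objective and all
    intermediate results so far, propose the next step or stop (None)."""
    if not history:
        return Step("search_knowledge_base", objective)
    if len(history) == 1:
        return Step("create_draft_invoice", history[0].result)
    return None  # objective considered fulfilled


def run_agent(objective: str) -> list[Step]:
    history: list[Step] = []
    # Planning and execution loop: plan, act, feed the result back, repeat.
    while (step := call_llm(objective, history)) is not None:
        step.result = TOOLS[step.tool](step.argument)
        history.append(step)
    return history


for step in run_agent("process supplier invoice no. 4711"):
    print(step.tool, "->", step.result)
```

The legally decisive element is the TOOLS registry: every entry represents an action with effects outside the model itself, which is why the sections below place such weight on logging, access control, and human oversight.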
These characteristics open up a wide range of possible applications in business practice. In the administrative area, for example, AI agents can automatically record and process invoices, analyze contracts in the legal department, or support the onboarding of new employees in HR processes. They are also gaining importance in operational business: chatbots can handle customer inquiries, e-commerce agents can generate personalized product recommendations, and in recruiting, intelligent systems can preselect application documents and identify suitable candidates. In all these cases, the agents are characterized by the fact that they not only provide information but can also make decisions and control processes independently.
The risk profile can therefore differ significantly from that of other AI systems: while LLMs have so far primarily generated text, AI agents can actively intervene in business processes and trigger actions with real economic consequences. The degree of autonomy largely determines how much human control remains and how high the risk of erroneous or unforeseen actions is. Companies are therefore faced with the task of balancing the efficiency potential of AI agents against careful management of possible wrong decisions.
II. Classification in the AI Regulation
The European AI Regulation distinguishes between two levels: on the one hand, the underlying "AI models" and, on the other hand, the "AI systems" built on them. An AI model forms the technical basis, for example a large language model, while the AI system is the specific application that companies use. Separate rules apply to each level, and they can overlap in individual cases.
At the model level, it is not the specific purpose of use that matters but the capabilities of the model itself. Models that are suitable for a wide range of tasks are classified as "general-purpose models." This typically applies to large language models, which form the basis of many AI agents. Providers of such models must comply with extensive transparency and documentation requirements under Art. 53 ff. AI Regulation. If a model is additionally deemed particularly powerful and far-reaching, it can be classified as a model "with systemic risk," which entails additional requirements such as model evaluations.
At the system level, however, the intended use is decisive. If an AI agent is used in very different contexts or has far-reaching control capabilities, for example because it can interact independently with browsers or operating systems, it can be considered a "general-purpose AI system." If, on the other hand, it is used in a particularly sensitive area such as human resources management, education, or critical infrastructure, it falls into the category of high-risk systems. In this case, strict requirements apply, particularly with regard to human control and data governance. The higher the degree of autonomy of an agent, the greater the monitoring requirements, which makes it difficult to strike a balance between efficiency gains and supervisory obligations.
Overall, it is clear that AI agents sit at the interface between the two regulatory levels of the AI Regulation. General-purpose AI systems are initially subject only to transparency obligations (see Art. 50 AI Regulation), yet AI agents can trigger high compliance requirements in sensitive areas of application. The interaction of both regimes increases the complexity of compliance and at the same time reveals regulatory gaps. The problem is that the risk-based approach does not adequately cover the specific risks of agent systems. For example, a seemingly uncritical use case, such as a travel booking agent, can pose significant risks if the system is capable of independently controlling a computer or browser. The AI Regulation does not yet adequately address such risks, as it focuses primarily on the intended use and less on the technical capabilities of the agent.
III. Interfaces with data protection law
The use of AI agents raises questions not only about the AI Regulation, but also about data protection law. Since the systems are typically based on large language models and are often operated in the cloud, there is a risk that personal data or confidential company data will be transferred to third parties in an uncontrolled manner or even reused for training purposes. For companies, this means that the basic principles of the GDPR, such as lawfulness, purpose limitation, and data minimization, must be strictly adhered to. Particularly problematic is the fact that AI agents often not only process information, but also actively intervene in business processes, thereby generating new data streams that must be documented and secured.
A key tool here is the data protection impact assessment pursuant to Art. 35 GDPR, which is also explicitly referenced for high-risk AI systems in Art. 26 AI Regulation. It is required whenever the use of AI agents is likely to pose a high risk to the rights and freedoms of data subjects, which will often be the case in typical areas of application such as recruiting, compliance monitoring, or, under certain circumstances, contract analysis. Companies must check not only what data is being processed, but also where it is stored, how it is encrypted, and with whom it is shared. Transfer impact assessments and safeguards such as standard contractual clauses are indispensable, especially in cloud scenarios, in order to comply with the requirements for international data transfers (Art. 44 ff. GDPR).
In addition, technical and organizational measures within the meaning of Art. 32 GDPR are required and must be adapted to the specific characteristics of AI agents. Classic protection mechanisms such as pure transport encryption are not sufficient if an agent calls external tools or independently controls interfaces. Instead, concepts such as zero-trust architectures, edge pre-processing, or strict access controls must be established to ensure the confidentiality and integrity of the data. Processors must also be selected carefully and their compliance verified regularly through certificates or audits.
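To illustrate, the following minimal sketch shows one way such measures can be reflected in an agent architecture: a default-deny policy gate that every tool call must pass, combined with an audit log. The role names, tools, and log format are hypothetical assumptions, not a reference implementation.

```python
# Illustrative zero-trust policy gate for agent tool calls: no call is
# trusted by default, every decision is logged. Roles, tools, and the
# log format are invented examples.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO, format="%(message)s")
audit = logging.getLogger("agent.audit")

# Explicit allow-list per agent role; anything not listed is denied.
POLICY = {
    "invoice_agent": {"read_erp", "create_draft_invoice"},
    "hr_agent": {"read_applicant_file"},
}


def authorize_tool_call(role: str, tool: str, payload: dict) -> bool:
    allowed = tool in POLICY.get(role, set())
    # Log metadata only, not content (data minimization), while
    # documenting each decision (accountability, Art. 32 GDPR).
    audit.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "role": role,
        "tool": tool,
        "allowed": allowed,
        "payload_keys": sorted(payload),
    }))
    return allowed


if authorize_tool_call("invoice_agent", "create_draft_invoice", {"amount": 100}):
    print("call permitted")
if not authorize_tool_call("invoice_agent", "delete_customer_record", {"id": 7}):
    print("call denied and logged")
```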
The interfaces between the GDPR and the AI Regulation in particular show that companies need an integrated approach. While the AI Regulation focuses primarily on technical security and risk classification, the GDPR sets the framework for the processing of personal data with its basic principles and data subject rights. An effective data protection and IT security concept must combine both sets of regulations and be designed in such a way that both regulatory and practical risks are addressed.
IV. Interfaces with the Data Act
In addition to the AI Regulation and the GDPR, the European Data Act, whose transition periods expired on September 12, 2025 (we reported), may also become significant for the use of AI agents. The Data Act regulates access to and sharing of data generated by connected products or services. This is a key point of reference for AI agents, which create added value precisely because of their ability to autonomously access different data sources. For example, they can use machine data from production facilities, sensor data from vehicles, or usage data from IoT devices to independently optimize processes or prepare decisions.
However, it should be noted that the Data Act refers to raw data generated directly through the use of a connected product; this data must be made available to users and, at their request, to third parties. So-called "derived data," i.e., information that is only generated through downstream processing or analysis (see Recital 15), is by contrast not covered. Data generated by an AI agent in the course of its activities, such as reports, forecasts, or recommendations, is therefore generally not subject to the access obligations.
For companies, this means, on the one hand, that they can efficiently exploit the newly created access rights under the Data Act through AI agents, thus making data that was previously tied up in data silos more usable. On the other hand, new obligations arise, as the transfer of such data to agent systems requires compliance with contractual requirements, technical standards, and the distinction from personal data.
V. Interfaces with the Cyber Resilience Act
In addition to the Data Act, the European Cyber Resilience Act (CRA) is also becoming increasingly important for the use of AI agents. The CRA has been in force since December 2024, and its obligations will apply in full from the end of 2027. For the first time, it sets binding minimum cybersecurity requirements for all products with digital elements offered on the EU market, including AI agents, provided they are connected to networks or other systems.
Manufacturers, importers, and distributors are required to consider cybersecurity aspects as early as the product development stage ("security by design") and in the default settings ("security by default"). This includes conducting systematic risk assessments, integrating vulnerability management, and providing security updates throughout the entire product lifecycle. A key element is the transparency requirement, for example by creating a Software Bill of Materials (SBOM) that documents all components used.
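For illustration, the following is a minimal sketch of what such an SBOM can look like, here in the widely used CycloneDX JSON format. The listed components are invented examples; in practice, SBOMs are usually generated automatically by build tooling rather than written by hand.

```python
# Minimal CycloneDX-style SBOM for a hypothetical AI agent product.
# The component names and versions are invented for illustration.
import json

sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "components": [
        # Every third-party building block of the agent is documented.
        {"type": "library", "name": "example-llm-client", "version": "2.3.1"},
        {"type": "library", "name": "example-vector-store", "version": "0.9.4"},
        {"type": "framework", "name": "example-agent-runtime", "version": "1.0.0"},
    ],
}

print(json.dumps(sbom, indent=2))
```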
For AI agents, this means that they must not only be designed and operated in compliance with data protection regulations, but also be demonstrably cyber resilient. Compliance with CRA requirements will be a prerequisite for CE marking and thus for market access in the EU in the future. Violations can result in significant penalties, including fines and product recalls.
VI. Interfaces with the eIDAS 2.0 Regulation (EUDI Wallet)
The amendment of the eIDAS Regulation and the introduction of the European Digital Identity Wallet (EUDI Wallet) will create a uniform Europe-wide framework for digital identities and trust services. By the beginning of 2027 at the latest, all EU member states must provide their citizens and businesses with at least one certified EUDI Wallet, which will serve as a digital ID, proof, and authentication tool.
This opens up new opportunities for AI agents, particularly in the areas of secure and automated identity verification, onboarding, and interaction with customers and business partners. In the future, AI agents could directly access verified identity data and attributes that are controlled and approved by the user. This increases security and compliance in transactions, reduces the effort required for manual checks, and enables new, automated business models – for example, in the financial sector, healthcare, or digital contract conclusion.
However, companies that take advantage of these new opportunities must ensure that AI agents process wallet data in a data protection-compliant and trustworthy manner. From 2027, the obligation to accept the EUDI Wallet will apply to regulated industries in which strong authentication or KYC processes are mandatory.
VII. Interfaces with the BFSG (Barrier-Free Accessibility Act)
With the BFSG, Germany is implementing the European Accessibility Act (Directive (EU) 2019/882); since June 28, 2025, certain products and consumer-oriented services must be designed to be accessible.
This is relevant for AI agents as soon as they are part of such offerings – for example, chat or voice agents in online shops, banking apps, ticket and booking systems, or communication services.
The requirements address, among other things, websites/apps in electronic commerce (§ 19 BFSGV) and other covered services (telecommunications, banking, transportation), including user interfaces and functionality. The technical reference points are EN 301 549 (with references to WCAG criteria) and harmonized standards; they establish a presumption of conformity. There is a (sector-specific) micro-enterprise exemption for service providers, but not for affected products. Market surveillance is carried out by the federal states; violations are subject to official measures and fines (in some cases up to €100,000).
In practice, this means that AI agents must not create new barriers: they must be compatible with assistive technologies, offer multimodal interaction (alternatives to pure voice input/output), and be integrated into accessible user journeys.
VIII. Recommendations for action for companies
The following is a preliminary summary of recommended actions for the legally compliant use of AI agents.
1. Systematically evaluate fields of application, autonomy, and risks
- Analyze the business areas in which AI agents are to be used and the degree of autonomy these systems have.
- Conduct a comprehensive risk assessment, simulate scenarios, and determine (especially for high-risk AI) whether and how human control ("human in/on/out of the loop") will be maintained; see the sketch following this list.
- Consider not only classic error risks, but also regulatory, ethical, and technical risks (e.g., accessibility, cybersecurity, data integrity).
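The following is a minimal sketch of the human-in-the-loop control point referenced above: actions above a risk threshold are held for explicit human approval instead of being executed autonomously. The threshold, action names, and approval mechanism are assumptions for illustration.

```python
# Illustrative "human in the loop" gate: high-risk agent actions require
# explicit approval. Threshold and approval mechanism are assumptions.
from dataclasses import dataclass


@dataclass
class ProposedAction:
    description: str
    risk_score: float  # e.g., derived from the prior risk assessment (0.0-1.0)


APPROVAL_THRESHOLD = 0.5  # hypothetical value set by internal governance


def human_review(action: ProposedAction) -> bool:
    # Placeholder: in practice a ticket, four-eyes check, or UI prompt.
    answer = input(f"Approve '{action.description}'? [y/N] ")
    return answer.strip().lower() == "y"


def run_with_oversight(action: ProposedAction) -> str:
    if action.risk_score >= APPROVAL_THRESHOLD and not human_review(action):
        return f"blocked pending review: {action.description}"
    return f"executed: {action.description}"


print(run_with_oversight(ProposedAction("answer a standard FAQ", 0.1)))
print(run_with_oversight(ProposedAction("terminate supplier contract", 0.9)))
```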
2. Ensure regulatory classification and compliance
- Check whether your AI agent is considered a high-risk system, a general-purpose system, or part of a regulated industry (e.g., finance, healthcare, critical infrastructure).
- Consider the requirements of the AI Regulation, the Data Act, the Cyber Resilience Act, the eIDAS 2.0 Regulation, and the BFSG.
- Document the legal classification, define internal responsibilities, and prepare for regulatory audits and certifications (e.g., CE marking, ISO 42001/27001).
3. Address data protection and data security holistically
- When processing personal data, carry out a data protection impact assessment in accordance with Art. 35 GDPR, especially for high-risk AI.
- Choose secure operating models (cloud, on-premise, hybrid), implement encryption, access controls, logging, and zero-trust architectures.
- Check international data transfers (transfer impact assessment, standard contractual clauses) and ensure the compliance of processors.
4. Ensure cybersecurity and resilience in accordance with CRA
- When developing your own products, integrate cybersecurity requirements into the product development process ("security by design" and "security by default").
- Perform systematic risk assessments and vulnerability management, provide regular security updates, and document all components (Software Bill of Materials – SBOM).
- Please note that compliance with CRA requirements is a prerequisite for market access and CE marking.
5. Plan digital identities and wallet integration in accordance with eIDAS 2.0
- Prepare your AI agents for EUDI Wallet integration to enable secure and automated identity verification and authentication.
- Ensure that wallet data is processed in a privacy-compliant and trustworthy manner and that the requirements for regulated industries (KYC, strong authentication) are met.
6. Ensure accessibility in accordance with the BFSG and European Accessibility Act
- Check whether your AI agents are considered part of consumer-oriented products or services under the BFSG and must therefore be designed to be accessible.
- Ensure multimodal usability, compatibility with assistive technologies, and compliance with EN 301 549/WCAG criteria.
- Document accessibility features, keep declarations of conformity on hand, and prepare for market surveillance and possible regulatory audits.
7. Establish governance, monitoring, and continuous improvement
- Set up an interdisciplinary governance model (IT, legal, data protection, specialist departments) that continuously monitors and controls the use of AI agents.
- Conduct regular audits, output monitoring, and employee training.
- Use established management systems (e.g., ISO 27001 for information security, ISO 42001 for AI management).
8. Secure contract design and liability
- Conclude clear contracts with external providers regarding liability, security, audit rights, and access to agent evaluations.
- Avoid lock-in risks and regulate the use and evaluation of agent data in the contract.
9. Ensure communication and transparency with stakeholders
- Inform users, business partners, and authorities transparently about the use, functionality, and protective measures of your AI agents.
- Ensure that all information requirements (e.g., regarding accessibility, data protection, security) are met.
IX. Conclusion, opportunities, and outlook
AI agents mark the next major step in the development of artificial intelligence and promise significant efficiency gains in numerous areas of business. They also offer a significant opportunity for the companies that use them: in the future, customers are likely to choose contractual partners that have automated their own business processes with AI agents as effectively as possible, since this underpins strong performance in contract fulfillment. At the same time, these systems operate in a complex regulatory environment of AI and data law, which places high demands on transparency, security, and accountability. For companies, this means that opportunities and risks are closely intertwined. In the coming years, standardization, certification, and privacy-enhancing technologies are expected to provide greater legal certainty. Until then, actively shaping compliance and IT security remains the key to the responsible and competitive use of AI agents.
This article was created in collaboration with our student employee Emily Bernklau.