Article, November 2nd, 2022

Update Datenschutz No. 121

The New EU Regulation on Artificial Intelligence – Entry into Force Possible in Early 2023

The European Union is currently working at full speed on a comprehensive legal framework for the digital space. This necessarily includes a law on artificial intelligence (AI), and so, on April 21st, 2021, the Commission presented the first draft of an AI Act (which we reported on). Since then, many companies have been watching closely to see what the EU's next steps will be, as it cannot be ruled out that the new regulation will already apply to AI currently under development.

On October 25th, 2022, a new draft was discussed in the Council of the European Union, which could lead to an agreement this year (provided that no further major changes are made). In addition to transparency requirements for the use of AI, manufacturers and users of high-risk AI in particular will face comprehensive new obligations.

A. Background

The planned AI Act is intended to be a regulation that – once it has entered into force – applies directly in the Member States without having to be transposed into national law. The aim of the AI Act is to establish a legal framework for the deployment of medium- and high-risk artificial intelligence (AI) in the EU without compromising the competitiveness of European technologies.

Artificial intelligence is defined in the AI Act as software that has been developed using certain techniques listed in an annex, that captures and interprets data, and that uses these data to generate clearly defined results that affect physical or digital environments. Despite the details in the annex, the definition is very broad and could cover a large share of more complex software.

B. General provisions

I. Protection requirement classes

The obligations that the AI Act imposes on providers and users depend on the risk posed by the respective system. For this purpose, three protection requirement classes can be derived from the AI Act.

1. Minimal or low risk

The use of AI with minimal risk remains almost unrestricted, i.e. despite the broad definition, the AI Act does not entail disproportionate hardship for, e.g., search algorithms, spam filters or AI-based video games. For low-risk AI, certain minimum transparency requirements must already be met. In particular, systems that interact with natural persons, such as chatbots or deepfakes, must point out that AI is being used (cf. Art. 52 AI Act).

2. High risk (high-risk AI)

For AI that involves a high level of risk, the AI Act provides for extensive requirements, such as comprehensive transparency obligations, an authorization requirement for the European market, and a risk and quality management system that must, inter alia, make human oversight technically possible (information on this can already be found here). The strict requirements for high-risk AI have changed little since the first draft of the AI Act, so only minor adjustments are expected in the further course of the legislative procedure.

High-risk AI includes, in particular, AI systems for biometric identification and categorization, critical infrastructure, access to essential private and public services and benefits, worker management and access to self-employment, education, law enforcement, migration, asylum and border control, justice and democratic processes, and medical products, as well as – according to the latest compromise draft – AI for insurance risk calculation and pricing.

3. Unacceptable risk

AI with an unacceptable risk is banned outright, subject to a few exceptions. This includes systems for social scoring, for the subliminal manipulation of human behavior, for the exploitation of vulnerabilities due to age or physical or mental disability, and for real-time remote biometric identification in publicly accessible spaces.

II. Whom the AI Act is aimed at

The AI Act covers not only manufacturers but also providers, importers, distributors and users who place AI systems on the European Union market, put them into operation, or make them available there. It applies even if these providers and users are not based in the EU, as long as the output generated by the AI system is used in the EU.

III. Liability risks

The AI Act allows fines of up to EUR 30 million or 6% of worldwide annual revenue, whichever is higher, for violations. The proposed level of the fines varies between the individual drafts. Further sanctions, orders or warnings may also be possible.
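
The fine ceiling follows the "whichever is higher" mechanism familiar from the GDPR. As a purely illustrative sketch – the function name and the example figures are our own and not part of the draft – the calculation can be expressed as follows:

```python
def max_fine_eur(worldwide_annual_revenue_eur: float) -> float:
    """Upper bound of a fine under the draft AI Act:
    EUR 30 million or 6% of worldwide annual revenue,
    whichever is higher."""
    return max(30_000_000, 0.06 * worldwide_annual_revenue_eur)

# Example: a company with EUR 1 billion in worldwide annual revenue
# faces a ceiling of EUR 60 million, since 6% exceeds EUR 30 million.
print(max_fine_eur(1_000_000_000))  # 60000000.0
```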

In addition, aggravating and mitigating factors are taken into account when determining sanctions, such as the intentional or negligent nature of the infringement, attempts to mitigate the damage, or previous sanctions for similar infringements.

C. New content in the latest draft

The latest compromise proposal contains some clarifications. For example, the exemption for military, defense and national security purposes provided for in the AI Act is to apply not only to the placing on the market, but also to the use of high-risk AI.

The definition of biometric identification systems, which as high-risk applications are subject to strict requirements, has also been adjusted: since widely used fingerprint sensors would otherwise also qualify as biometric identification systems, the prohibition of biometric remote identification ("remote") is to apply only where identification takes place without the active participation of the person concerned.

In addition, the registration requirement has been narrowed: authorities using high-risk AI for law enforcement, migration, asylum and border control, or for critical infrastructure, are exempted from the requirement to register in the EU database.

I. Special rules for general-purpose AI

When the first draft was published, the handling of general-purpose AI was still unclear. General-purpose AI can be adapted to different tasks depending on the intended use and cannot be clearly assigned to a single purpose.

The current draft now follows the proposal of the Czech EU Council Presidency to task the European Commission with adopting an implementing act that adapts the obligations for general-purpose AI. In order to create legal certainty for providers of general-purpose AI, they should be able to participate in regulatory sandboxes and apply codes of conduct even before the implementing act is adopted.

II. Additional requirements for high-risk AI

Compared to the Commission's first draft, new requirements have also been added for providers of high-risk AI. For example, providers of systems that can cause significant damage must state the result to be expected from correct use in the instructions for use, so that users can react better to nonconforming or incorrect results. Moreover, pollution control systems will not be classified as high-risk AI, whereas AI used for insurance risk calculation and pricing will be, unless it is offered by an SME.

III. A small victory for freedom of the arts

The new compromise proposal even limits the transparency requirements for AI applications such as deepfakes: transparency requirements must not lead to restrictions on the right to freedom of the arts. This is to apply in particular to parts of an "obviously creative, satirical, artistic or fictional work or program".

D. Outlook and recommendation for action

A decision by the European Parliament on the draft is expected later this year, with the final adoption as early as the beginning of 2023. The AI Act will then enter into force 20 days after its publication in the Official Journal of the European Union and is expected to become fully effective after a two-year transition period.

Even though the AI Act provides for a sufficient transition period, manufacturers and users of AI can already begin preparing for the new requirements for high-risk AI and can take those requirements into account for AI currently under development. Operators and users of AI can likewise already implement the transparency requirements for the labeling of AI systems.

In the case of high-risk AI, the following points, inter alia, are expected to be implemented when the AI Act comes into force:

  • instructions for use containing all information in a precise, complete, correct and unambiguous form;
  • in particular, indication in the instructions for use of the expected result when used correctly (in the event of possible major damage);
  • establishment of a post-market surveillance system for AI;
  • establishment of a risk management system, in particular: a) identification and analysis of known and foreseeable risks arising from the system; b) assessment and evaluation of potential risks when the system is used in accordance with its intended purpose; c) evaluation of other risks that may arise on the basis of post-market surveillance; and d) implementation of appropriate risk management measures;
  • establishment of a quality management system;
  • the possibility of oversight by natural persons;
  • compliance with certain quality criteria for training, validation and test data sets;
  • integration of functional features that enable automatic recording of processes and events during the operation of the systems (see the sketch after this list);
  • ensuring an adequate level of accuracy, robustness and cybersecurity throughout the entire life cycle;
  • undergoing a conformity assessment procedure and affixing the corresponding CE marking;
  • information to the national competent authorities and registration in the EU high-risk AI database.
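
To make the record-keeping point above more tangible: the following minimal Python sketch shows one conceivable way to automatically record events during the operation of a high-risk system. The AI Act does not prescribe any particular format or library; all names and fields here are hypothetical.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical audit log: one JSON line per inference event, so that
# the operation of the system can be reconstructed later.
logging.basicConfig(filename="ai_audit.log", level=logging.INFO,
                    format="%(message)s")

def log_inference(model_version: str, input_summary: str,
                  output_summary: str) -> None:
    """Record a single inference event with a UTC timestamp."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input": input_summary,
        "output": output_summary,
    }
    logging.info(json.dumps(event))

# Example call after each prediction of the (hypothetical) system:
log_inference("credit-scoring-1.4.2", "applicant features hash=ab12",
              "score=0.73")
```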

Update (November 15th, 2022)

The Council of the EU has now agreed on a final draft of the AI Act. Compared to the compromise discussed on October 25th, the current draft contains some further changes, most of which are based on proposals by the Czech Council Presidency:

It remains the case that, in the insurance industry, only AI for risk calculation and pricing is classified as high-risk AI. What is new, however, is that this classification will be limited to use for health and life insurance. In addition, there are exceptions for micro and small enterprises, but only if they use such systems to sell their own insurance products.

In the area of critical infrastructure, it was clarified that only those security components necessary to ensure the integrity of the system are classified as high-risk AI, and not components needed merely for its functioning.

Furthermore, it is clarified once again that the responsibilities, tasks, powers and independence of national authorities that monitor the safeguarding of fundamental rights remain unaffected. These include, for example, equality bodies and data protection authorities.

The text has now been sent to the EU Member States for approval by the permanent representatives in the Council on Friday, November 18th, 2022. Adoption by the representatives in the EU Transport, Telecommunications and Energy Council is scheduled for December 6th, 2022.
