Article, 18 December 2023

Update Data Protection No. 162

AI Act (update): Agreement on the new AI regulation

A few days ago in Brussels, negotiators from the EU Parliament and EU Council agreed on the basic content of the new AI Regulation (AI Act), which will now be finalized in the coming weeks. We have already reported on the legislative process to date (Data Protection Update No. 146, No. 121 and No. 94). This article provides an overview of the most important contents of the regulation that have now been agreed as well as the different positions of the legislative bodies involved.

Key obligations for companies

The upcoming AI Regulation will not only impose obligations on manufacturers (providers) of AI systems, but also on importers, distributors and even operators (users), especially where so-called high-risk AI systems are used. In accordance with the market location principle, the Regulation also applies to companies that are based in third countries but place AI systems on the market or use them within the EU.

For small providers and small users, the EU member states can provide special facilitations in accordance with Art. 55, such as access to regulatory "sandboxes" in which AI systems can be developed and tested without the full set of legal requirements applying.

The individual obligations for companies follow from the risk classification of the AI systems used.

Low-risk systems (such as chatbots, deepfakes, spam filters or AI-supported video games) only trigger transparency obligations; providers must therefore ensure that their users are informed about the use of AI (Art. 52).

In line with the EU Parliament's demands, AI base models are now regulated somewhat more strictly as medium-risk systems. New provisions have been included in Art. 28b to this effect, in particular transparency obligations, the obligation to provide technical documentation and instructions for use, and the obligation to comply with copyright requirements. Particularly powerful AI base models, however, are classified as posing "systemic" risk and are therefore subject to significantly stricter requirements (similar to the following category), including an obligation to report on their energy efficiency.

The largest number of obligations for companies relates to the use of so-called high-risk AI systems, such as systems for biometric identification, autonomous driving, use in critical infrastructure, the assessment of the educational level of employees, the automatic selection of applicants, the assessment of creditworthiness or use as a lie detector by law enforcement authorities. According to Art. 29, operators of such high-risk AI systems must implement enhanced technical and organizational measures (in particular for cybersecurity), set up a monitoring system under human supervision, carry out a detailed risk and data protection impact assessment, prepare technical documentation and instructions for use, retain all automatically generated logs, undergo an EU conformity assessment procedure and enter the necessary information in a publicly accessible database of the EU Commission. They must also notify the EU Commission of cyber incidents (Art. 62).

Finally, there is the classification as an AI system posing an unacceptable risk (Art. 5). Such prohibited systems include social scoring systems, the untargeted scraping of facial images from the internet, biometric categorization systems using sensitive characteristics, systems for manipulating human behavior or exploiting human weaknesses, and automated emotion recognition in the workplace or at educational institutions.

AI systems that are provided under free or open source licenses are not to be subject to the Regulation unless they are integrated into a high-risk AI system or are classified as "systemically relevant" (Art. 2). Such an exemption was not yet provided for in the EU Commission's original draft.

Further information on the new obligations can be found on the Q&A information page that has since been published online by the EU Commission.

Fines

Breaches of the obligations under the new AI Regulation can be fined in accordance with Art. 71. Depending on the type of infringement, fines range from 1.5 % of global annual turnover (or EUR 7.5 million) up to 7 % (or EUR 35 million), with the higher of the two amounts applying in each case. More proportionate upper limits are provided for fines imposed on small providers. Citizens will have the right to lodge complaints about AI systems and, with regard to high-risk AI systems, to request explanations of decisions based on such systems.
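The "whichever is higher" mechanism can be illustrated with a short calculation. The following is a minimal sketch in Python, assuming the tier figures reported for the provisional agreement (7 % / EUR 35 million for prohibited practices, 3 % / EUR 15 million for other obligations, 1.5 % / EUR 7.5 million for the supply of incorrect information); the middle tier is our assumption from press reports, and the final text may deviate:

    # Sketch of the fine-cap mechanism: the higher of a fixed amount and a
    # percentage of global annual turnover applies. Tier figures are taken
    # from press reports on the provisional agreement and may still change.
    TIERS = {
        "prohibited_practices":  (35_000_000, 0.07),   # EUR 35m or 7 %
        "other_obligations":     (15_000_000, 0.03),   # EUR 15m or 3 % (assumption)
        "incorrect_information": (7_500_000, 0.015),   # EUR 7.5m or 1.5 %
    }

    def max_fine(tier: str, global_turnover_eur: float) -> float:
        fixed, pct = TIERS[tier]
        return max(fixed, pct * global_turnover_eur)

    # Example: EUR 2 bn turnover, prohibited practice:
    # 7 % = EUR 140m, which exceeds the EUR 35m floor.
    print(max_fine("prohibited_practices", 2_000_000_000))  # 140000000.0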

Trilogue

The EU Commission had already published its first draft of an AI regulation in April 2021. It was not until June 2023 that the EU Parliament cleared the way and agreed internally on an amended version (as reported). Since then, negotiations have taken place between the EU Parliament and the EU Council, which were successfully concluded a few days ago, meaning that the draft regulation is currently (as of 14 December 2023) being finalized in Brussels.

Recent discussions have focused in particular on the requirements for government surveillance measures and on the inclusion of AI base models (such as ChatGPT).

With regard to surveillance measures, the EU Parliament called for biometric recognition via AI to be banned altogether unless the person concerned consents. The EU Council, on the other hand, insisted on allowing the extensive use of such systems, including for the automatic recognition of sensitive characteristics such as religious beliefs or sexual orientation. A compromise has now been reached to the effect that biometric identification in publicly accessible spaces is only permitted for the prosecution of certain serious criminal offenses and only with prior judicial approval. The same applies to the retrospective analysis of video footage using AI systems.

Predictive policing, i.e. the use of AI systems to predict the commission of crimes, was also a contentious issue. The precise content of the agreement on this point is not yet clear, but misuse is likely to be prevented by limiting its applicability to cases of concrete suspicion; in addition, independent authorities may have to authorize such measures in future.

The EU Council wanted AI base models to be excluded from the scope of the AI Regulation entirely (a voluntary commitment by the industry was suggested instead), but the EU Parliament opposed this. As a compromise, these base models are now regulated as medium-risk systems, in principle with only reduced requirements for operators, unless they are particularly powerful systems trained with very large amounts of computing power (more than 10^25 floating-point operations, FLOPs, of cumulative training compute), such as GPT-4, Gemini or Inflection-2.
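Whether a model crosses the 10^25 FLOPs threshold can be roughly estimated from its size and training data. The following sketch uses the common "C ≈ 6 · N · D" rule of thumb from the scaling-law literature (training compute ≈ 6 × parameters × training tokens); this approximation and the example figures are illustrative assumptions on our part and are not part of the Regulation:

    # Rough training-compute estimate using the common rule of thumb
    # C ≈ 6 * N * D (N = parameters, D = training tokens). The heuristic
    # and the example figures below are illustrative assumptions only.
    THRESHOLD_FLOPS = 1e25  # systemic-risk threshold under the agreement

    def training_flops(params: float, tokens: float) -> float:
        return 6 * params * tokens

    # Hypothetical model: 175 billion parameters, 2 trillion tokens
    c = training_flops(175e9, 2e12)   # = 2.1e24 FLOPs
    print(c, c > THRESHOLD_FLOPS)     # 2.1e+24 False -> below the threshold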

Transition period, AI Office

It is expected that the final version of the draft regulation will be available and adopted within a few weeks. The AI Regulation will then enter into force 20 days after publication in the Official Journal. A transitional period of 24 months will then probably apply, meaning that the Regulation is expected to become fully applicable in the course of 2026. However, it is envisaged that certain bans will already apply after six months, and the specific provisions for general-purpose AI after 12 months. In order to implement the new requirements more effectively, an office called the "AI Office" will be set up within the EU Commission; it will be responsible for coordination at European level and will also monitor providers of large AI base models.
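The staggered application dates can be derived mechanically from the (still unknown) publication date. A minimal sketch, assuming a hypothetical publication on 1 June 2024 and using the python-dateutil library for month arithmetic; the actual dates depend on the adoption and publication schedule:

    # Derive the milestone dates of the AI Act from a publication date.
    # The publication date below is a hypothetical placeholder.
    from datetime import date, timedelta
    from dateutil.relativedelta import relativedelta  # pip install python-dateutil

    publication = date(2024, 6, 1)                     # assumption
    entry_into_force = publication + timedelta(days=20)
    bans_apply = entry_into_force + relativedelta(months=6)
    gpai_rules_apply = entry_into_force + relativedelta(months=12)
    fully_applicable = entry_into_force + relativedelta(months=24)

    for label, d in [("Entry into force", entry_into_force),
                     ("Prohibitions apply", bans_apply),
                     ("General-purpose AI provisions apply", gpai_rules_apply),
                     ("Fully applicable", fully_applicable)]:
        print(f"{label}: {d.isoformat()}")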

Necessary implementation measures

Even after the above agreement between the EU Parliament and the EU Council, our previous recommendations on the implementation of the necessary preparatory measures remain valid (see our article, chapter "Obligations for companies").

The following measures are recommended as a first step:

  • Check whether internally deployed AI systems fall under one of the above risk classes (see the sketch after this list)
  • Carry out and document risk assessments
  • Implement the transparency obligations by creating instructions for use, technical documentation and employee guidelines (in particular a "Code of AI Practice")
  • Establish internal control and governance systems where high-risk AI is used
  • Train employees on the use of AI systems.
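For the first step, a simple, structured inventory of the AI systems in use can serve as a starting point. The following is a minimal sketch with an illustrative data model of our own design; the tier names mirror the risk classes described above, while assigning a concrete system to a tier remains a legal assessment, not a programming task:

    # Minimal AI-system inventory for a first compliance screening.
    # Tier names mirror the risk classes of the AI Act as described above;
    # assigning a system to a tier remains a legal assessment.
    from dataclasses import dataclass
    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "prohibited (Art. 5)"
        HIGH = "high-risk (Art. 29 obligations)"
        MEDIUM = "base model (Art. 28b obligations)"
        LOW = "transparency only (Art. 52)"

    @dataclass
    class AISystem:
        name: str
        purpose: str
        tier: RiskTier
        notes: str = ""

    inventory = [
        AISystem("support-chatbot", "customer service", RiskTier.LOW),
        AISystem("cv-screening", "automatic selection of applicants", RiskTier.HIGH,
                 notes="requires risk assessment, logging, conformity procedure"),
    ]

    for s in inventory:
        print(f"{s.name}: {s.tier.value}")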

Outlook

It is to be welcomed that the use of artificial intelligence is now being regulated by law for the first time anywhere in the world. It is also welcome that the material scope of application has been narrowed from "software" in general to machine-based AI systems, that the term "high-risk AI" has been significantly clarified, that companies in the IT sector remain largely unaffected in the area of open-source software development, and that developers of their own AI base models are generally subject to only light regulation. It now remains to be seen what the final version of the Regulation will look like. We will publish a further update should it deviate from the content of this article.
