04-29-2021  | Update Data Protection No. 94

Draft European Regulation on the Supervision of Artificial Intelligence





Even if it remains to be seen whether and in what form the regulation will be adopted and applied, extensive requirements are emerging which the EU intends to impose on so-called "high-risk AI systems" in the future. All companies that plan to use such AI in the long term or that are developing such AI should familiarize themselves with the broad outlines of the new requirements now. After all, engaging in certain prohibited uses of AI can be sanctioned with a fine of up to EUR 20 million or 4 % of worldwide annual turnover. For medium- and long-term development work, it is therefore certainly worthwhile to take the topics of the new regulation into account at a fundamental level already.

Background

The EU Commission had already announced in its White Paper of February 19, 2020 (WHITE PAPER On Artificial Intelligence – A European approach to excellence and trust) that it would regulate uses of artificial intelligence that entail a high risk. Among other things, requirements for training data, transparency, robustness and accuracy, documentation and human oversight were envisaged. The corresponding draft regulation was published on April 21, 2021, after a version had been leaked one week earlier that differed only slightly.

As a regulation, the set of rules will be immediately and directly applicable in every Member State once it comes into force. In contrast to the General Data Protection Regulation, the draft is not a "regulation with the character of a directive", characterized by numerous opening clauses for the Member States. The Member States retain autonomy only for very few – but significant – issues: in particular, they can determine which authorities are responsible for national enforcement and which sanctions apply in the event of a violation of the regulation (only for a few violations has a range of fines parallel to the GDPR already been set).

Prohibition of the use of AI systems

Regardless of whether a use of artificial intelligence falls under the definition of a "high-risk AI system", the regulation contains the following prohibitions – albeit in currently still very vague terms (see Article 4). Accordingly, artificial intelligence may not be used for the following purposes:

  • the manipulation of human behavior, the formation of opinions and the freedom of choice, to the extent that this results in an opinion or a decision to the disadvantage of the data subject;
  • evaluation of information about individuals or groups in order to exploit their weaknesses or special circumstances and to promote decisions to the detriment of these individuals;
  • social scoring (which is to be prohibited entirely, although this does not cover credit checks and other previously customary scoring procedures);
  • using AI to monitor individuals.

Depending on how broadly the above prohibitions are interpreted, all forms of using AI for "nudging" could also be prohibited. The use of artificial intelligence to select suitable advertising media could likewise be covered. Ultimately, even all major social media offerings could be banned, as they all influence opinions and use AI. It is therefore to be hoped that the legislature will tighten up the wording here.

Complex definition of "high-risk AI systems"

While the prohibitions listed above apply to all AI systems, the requirements described below apply only to high-risk AI systems. It is therefore essential to know which AI systems are classified as such. For this purpose, the regulation refers, firstly, to existing European product safety law. It stipulates, for example, that all products requiring a conformity assessment by a third party are considered high-risk AI systems. In addition, entire product categories are viewed as high-risk AI systems as a matter of principle. These include the use of AI in motor vehicles, aircraft, railways and ships.

Secondly, a list specifies uses of AI – irrespective of product safety law – that are per se "high-risk". This includes the following applications:

  • selection of applicants;
  • credit checks;
  • access to studies and training, as well as the assessment of examination results in training and studies;
  • assessment of persons in connection with law enforcement and similar measures restricting freedom;
  • auxiliary AI for judges;
  • asylum and visa checks;
  • biometric recognition when monitoring in publicly accessible places;
  • use of AI in the operation of critical infrastructures such as water, gas and electricity supply.

Requirements for "high-risk AI systems"

The detailed formulation of the requirements for high-risk AI systems is extensive, and it is questionable whether every detail will survive the legislative process. In principle, however, the following requirements can be expected:

  • requirements for the quality of test and training data (Article 8);
  • traceability of the results (Article 9);
  • documentation (Article 9) and catalog with minimum requirements for documentation in Annex IV;
  • transparency and information obligations towards users;
  • human oversight of the AI (Article 11);
  • robustness, accuracy and security (Article 12);
  • after market introduction: monitoring the activities of the AI in the market (Article 54);
  • the obligation to report serious incidents or malfunctions (Article 55).

Who is subject to the obligations?

The above obligations do not apply only to "providers", i.e. those who supply their own AI system to others in their own name or who use it themselves. Importers, distributors and, ultimately, even the simple user of an AI system are also subject to the regulation. Through a very vague catch-all formulation, other third parties involved in the value chain for "high-risk AI systems" are likewise covered.

Liability and sanctions

Parallel to the GDPR, a maximum fine of EUR 20,000,000 or 4 % of worldwide annual turnover has so far been envisaged only for the use of the "prohibited" AI systems. The Member States are to decide for themselves to what extent, and at what level, further violations can be sanctioned. There is therefore a risk of a European regulatory patchwork.
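Purely as an illustration of the order of magnitude involved, the fine cap can be sketched in a few lines of Python. This assumes, as under the GDPR's parallel fine mechanism, that the higher of the two amounts (EUR 20 million or 4 % of worldwide annual turnover) applies; the function name and figures used in the example are illustrative only.

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Upper limit of the fine for prohibited AI uses under the draft:
    EUR 20 million or 4 % of worldwide annual turnover, assuming
    (as under the GDPR) that the higher of the two amounts applies."""
    return max(20_000_000.0, 0.04 * worldwide_annual_turnover_eur)

# For a company with EUR 1 billion turnover, 4 % (EUR 40 million)
# exceeds the EUR 20 million floor:
print(max_fine_eur(1_000_000_000))  # 40000000.0
```

For smaller companies, the fixed EUR 20 million amount will typically be the relevant ceiling, since 4 % of turnover only exceeds it above EUR 500 million in annual turnover.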

In all other respects, however, the regulation will affect existing civil liability and tighten it accordingly. If damage is (at least in part) attributable to the failure to implement an obligation under the regulation, it may become much easier for the injured party to prove that a "protective law" has been violated or what "due diligence" requires. It is also conceivable that the provisions of the regulation will be classified as "market conduct rules" within the meaning of the UWG [German Act Against Unfair Competition] – parallel to the similarly structured product safety rules.

What does this now mean for those obligated?

In view of the explosive nature of the subject – and given the pandemic-related "paralysis" of some bodies – the legislative act is not expected to be finalized very quickly. Nevertheless, with this draft the EU is showing that it takes the input of various ethics committees seriously. In recent years, various national and international advisory commissions have repeatedly called for precisely the points listed above as obligations.

Anyone who is planning, leading, financing or otherwise involved in medium- and long-term developments in the field of AI must, sooner or later, expect the aforementioned obligations to become applicable law; the details of how they will be drafted and the sanction mechanisms attached to them, however, remain to be seen. It is therefore generally advisable to keep track of current developments and to check whether the above-mentioned requirements can be taken into account, in a general and cost-effective manner, at the start of a project. One may recall the problem of erasing data: a GDPR requirement that at first simply could not be technically implemented because IT systems and databases had not been designed for it. The draft regulation offers an opportunity to do better, potentially saving extensive costs later and even providing an edge over competitors.
