The European Commission’s new proposal for an AI Regulation
On 21 April 2021, the European Commission presented its long-awaited proposal for a regulation on artificial intelligence (the AI Regulation). The proposal builds on a series of earlier Commission recommendations and initiatives, and its aim is, not least, to safeguard citizens' trust in AI systems. The AI Regulation is the first targeted legal regulation of artificial intelligence, and it will have a significant impact on the development and use of artificial intelligence in Europe and the rest of the world.
The AI Regulation will apply in parallel with the GDPR, which must still be fully complied with, for instance when personal data is used to train algorithms or when AI systems are used for automated decisions with legal consequences for the data subjects. There are also concurrent plans to update the safety requirements for machinery in relation to AI through a new regulation (the Machinery Regulation), which will replace the existing Machinery Directive.
There is no set schedule for the readings in the European Parliament and the Council or for the subsequent trilogue negotiations, but a lengthy and thorough process is to be expected. This Plesner Insight gives you an overview of the proposed AI Regulation as well as a closer look at a number of particularly interesting aspects of the proposal.
General description of the AI Regulation
Overall, the AI Regulation contains four types of regulation:
- A ban on the use of certain AI systems ("Banned AI systems") (Article 5);
- Strict requirements for AI systems considered high-risk ("High-risk systems") (Articles 6-51);
- Transparency requirements for AI systems that interact with humans (Article 52);
- A framework for voluntary codes of conduct for non-high-risk AI systems (Article 69).
The Banned AI systems are AI systems that (a) threaten human beings physically or mentally via subliminal techniques or by exploiting vulnerabilities, (b) are used for social scoring based on surveillance of citizens, or (c) perform certain types of facial or other individual recognition.
The focus of the AI Regulation is to regulate High-risk systems, defined as AI systems within eight fields: biometric identification, control of critical infrastructure, education, HR, essential services (welfare services, credit scoring, alarm services), law enforcement, migration/asylum, and application of the law by the courts. The development/operation, sale and use of High-risk systems are strictly regulated. These systems require, for instance, the establishment of risk management and quality assurance systems as well as human oversight, transparency, robustness, cyber security and accuracy.
It is a particular requirement that approval of High-risk systems is subject to a conformity assessment covering the technical documentation, the quality assurance system applied, and the system's compliance with the AI Regulation.
All AI systems, including High-risk systems, that are intended for interaction with natural persons must be designed so that those persons are made aware that they are interacting with an AI system. A heightened duty of disclosure will also apply to AI systems that generate "deep fake" manipulated images, sound and video.
The European Commission and the Member States must promote the drafting of codes of conduct for AI systems, including on matters such as environmental sustainability and accessibility for persons with disabilities.
The AI Regulation will employ a system of fines comparable to those that apply under competition law and the GDPR. However, the maximum fine is raised to 6% of the business's global turnover for breaches of the rules on Banned AI systems.
Which technologies are covered (Article 3 and Annex I)
In the AI Regulation, an AI system is defined as "software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with".
Annex I lists a series of technologies in open and broad terms; they are nevertheless the technologies commonly referred to in connection with AI, e.g. machine learning. There is therefore some uncertainty as to whether a given system will be covered, in particular where a software solution only to a very limited degree involves techniques that may be characterised as AI. The intention is that the European Commission will be able to update Annex I in line with developments in the market (Article 4).
AI systems that will not be regulated
Most experts consider autonomous weapon systems based on AI to be one of the greatest threats to mankind (see, for instance, the pledge of the Future of Life Institute). However, the AI Regulation does not apply to AI systems developed or used solely for military purposes (Article 2(3)), on the grounds that this issue must be addressed as part of the Union's common security and defence policy under Title V of the Treaty on European Union. Throughout the AI Regulation, there are also a significant number of exceptions for the use of AI systems by law enforcement authorities and organisations.
Another area that will not be directly regulated is autonomous vehicles. The EU is expected to introduce separate rules for their approval and use, taking the principles of the AI Regulation into account. The European Commission presented its vision for autonomous vehicles in 2018 and, among other things, issued guidelines for provisional approvals in 2019.
Geographical scope of application (Article 2)
The AI Regulation applies to providers that market, sell or put AI systems into operation in the EU, whether or not these providers are established in the EU. Further, users of AI systems established in the EU are also covered. Finally, in order to prevent circumvention, providers and users outside the EU will also be covered where the output of their AI systems is used within the EU. This extra-territorial effect means that recipients of data pertaining to the areas covered by the AI Regulation will need to know the origin of such data, e.g. whether the data was generated by Banned AI systems or High-risk systems.
Who is covered by the rules
A defining feature of the AI Regulation is its aim to regulate all links in the chain from provider, importer and distributor to user - together defined as "operators". Users are defined as private enterprises and public institutions; use of AI systems for private, non-professional purposes is not covered. Chapter 2 of the AI Regulation requires providers of High-risk systems to comply with a series of substantive requirements concerning risk management, transparency, human oversight, security, etc. The remaining links in the chain are not subject to the same requirements but have a duty to verify the providers' compliance with these obligations. Importers, distributors and users will also be considered providers if they market a High-risk system in their own name, change its intended purpose or make a material change to the system. It will therefore be of key importance for importers, distributors and users to comply with the rules in order to avoid such a "change of role". Finally, users of High-risk systems are subject to strict obligations of their own, including in relation to monitoring and the use of input data.
Development of AI systems
The AI Regulation governs the marketing, sale and use of AI systems; the key concepts are "placing on the market", "making available on the market" and "putting into service". Although the regulation frequently mentions the development of AI systems, it only governs their marketing, sale and use. In principle, nothing prevents providers from developing AI systems within the EU without observing the rules, as long as these systems are not marketed, sold or used in the EU. This allows European technology providers to compete in other markets (e.g. the US market) without being limited by the requirements of the AI Regulation, where such requirements do not apply abroad. However, if personal data is used in the development of such AI systems, the GDPR will still apply.
Issues in common with other EU legislation
One of the challenges of the AI Regulation is that it touches upon a vast number of other legislative acts, in particular within product safety, and considerable effort has been put into identifying and coordinating these points of overlap (see Article 2 and Annex II). There are also significant points in common with the GDPR, which applies fully to the use of AI systems. In this regard, it is important to note that user organisations must carry out a data protection impact assessment (DPIA), see Article 29(6), using the information supplied by the provider as part of its transparency obligations.
Legal tech in particular
AI systems used to assist the courts in their application of the law are considered High-risk systems if they provide assistance in "researching and interpreting facts and the law and in applying the law to a concrete set of facts". The very same AI systems may, however, also be used by prosecuting authorities, lawyers and other legal advisers, in which case they will not be considered High-risk. Furthermore, the use of software for arbitration will not be regulated. The fact that only the courts' use of AI systems will be regulated may create an imbalance between the courts on one side and prosecuting authorities and lawyers on the other. It is not unlikely that legal tech providers will refrain from entering the market for AI systems for the courts and instead prioritise the rest of the legal sector, which will thereby gain access to sophisticated AI systems that will not be available to the courts.
The issues surrounding the courts' use of AI systems are described in "Robots Entering the Legal Profession" (in particular chapter 16).
Read the proposed AI Regulation here.