EU policymakers reached a provisional agreement on Friday, December 8, on the terms of the EU Artificial Intelligence Act, after an impasse between the European Parliament and the Council of the EU over how to regulate foundation models. The proposed legislation still needs to be voted on by the European Parliament in early 2024 and is subject to formal approval by the Council. This will be followed by a transitional period of two years before the regulation can be enforced. During this time the Commission will launch an AI Pact, convening AI developers from Europe and around the world to voluntarily implement key obligations of the AI Act ahead of the legal deadlines.
Introduced in April 2021, the EU AI Act was the first comprehensive legislative proposal to regulate Artificial Intelligence. It could set the pace for global regulatory standards, much as the GDPR, agreed in 2016 and enforceable from 2018, did for data protection.
Overall, the agreement aims to regulate AI use while balancing innovation, risk management, and fundamental rights protection.
The "Definitions and Scope" section of the agreement aligns the definition of AI systems to the OECD's criteria. It also specifies that the regulation is confined to the realm of EU law and does not extend to national security matters of member states, military, or defense purposes, and does not apply to AI systems used solely for research, innovation, or non-professional uses.
The "Classification of AI Systems as High-Risk and Prohibited AI Practices" section of the agreement outlines the following key points:
Introduction of a horizontal layer of protection that categorizes AI systems as high-risk or limited-risk. High-risk AI systems are those with the potential to cause serious violations of fundamental rights or other significant harms, and they are subject to stricter requirements. AI systems that present only limited risk are subject to light transparency obligations, such as disclosing that their content is AI-generated. This approach aims to regulate AI systems in proportion to their risk while ensuring users know when AI is used in the content they interact with.
Permission for a wide range of high-risk AI systems to be placed on the EU market, subject to specific requirements and obligations designed to ensure compliance and safety. These requirements have been refined to make them more technically feasible and less burdensome for stakeholders, for example with respect to data quality and the technical documentation that must be prepared. Special attention has been given to making these requirements workable for small and medium-sized enterprises. The aim is to balance the need for innovation and technological advancement with the necessity of safe and responsible AI deployment.
The agreement clarifies responsibilities within AI development and distribution value chains, specifying roles for providers and users, and aligns these with existing EU legislation, including data protection and sector-specific laws.
A ban on certain practices deemed too risky, including cognitive behavioral manipulation, untargeted scraping of facial images, emotion recognition in workplaces and schools, social scoring, biometric categorization to infer sensitive attributes (such as sexual orientation or religious beliefs), and certain applications of predictive policing.
The agreement includes “Law Enforcement Exceptions” that allow law enforcement agencies to use AI systems while ensuring confidentiality and protecting rights. These include an emergency procedure for deploying, in urgent situations, high-risk AI tools that have not yet passed the conformity assessment, and the use of real-time biometric identification in public spaces, strictly for the prevention of serious crimes, the mitigation of genuine threats, or the location of suspects of the most serious crimes, such as terrorist attacks. These exceptions come with additional safeguards to balance law enforcement needs with the protection of fundamental rights.
There is an increased focus on “Transparency and Protection of Fundamental Rights”: the provisional agreement mandates a fundamental rights impact assessment before a high-risk AI system is deployed. It also increases transparency around the use of such systems, requiring public entities that use high-risk AI to register in the EU database for high-risk AI systems, and introduces an obligation for users of emotion recognition systems to inform people when they are exposed to such technology.
A new provision has been added for “General-Purpose AI Systems and Foundation Models”, i.e., systems that can be adapted to a variety of uses and can perform diverse tasks such as generating text, video, images, or code, or interacting in natural language. These must comply with transparency obligations before they are placed on the market. A stricter regime was introduced for ‘high-impact’ foundation models, defined as those trained on large amounts of data and with advanced complexity, capabilities, and performance well above the average, which can propagate systemic risks along the value chain.
A “New Governance Architecture” is introduced, including an AI Office within the Commission to oversee advanced AI models, enforce EU-wide rules, and foster standards and testing practices. A scientific panel of independent experts will advise the AI Office on GPAI models, contribute to the evaluation of foundation models, and help identify high-impact models and potential safety risks. The AI Board, with representatives from the member states, will coordinate and advise on the regulation's implementation and develop codes of practice for foundation models. Additionally, an advisory forum will be established for stakeholders such as industry representatives, academia, civil society, SMEs, and startups to provide technical expertise to the AI Board.
“Penalties” for violating the AI Act are set as a percentage of a company's global annual turnover or a fixed amount, whichever is higher: €35 million or 7% for violations involving banned AI applications, €15 million or 3% for breaches of other obligations, and €7.5 million or 1.5% for supplying incorrect information. The agreement provides for more proportionate caps on fines for SMEs and startups. Moreover, individuals or entities can file complaints about non-compliance with the relevant market surveillance authority, which will follow specific procedures to address them.
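As a rough illustration of how the “whichever is higher” rule plays out, here is a minimal sketch. The tier names, the `maximum_fine` helper, and the €2 billion turnover figure are hypothetical examples for illustration, not terms taken from the Act itself.

```python
# Minimal sketch of the fine structure described above: each violation tier
# carries a fixed amount and a share of global annual turnover, and the
# applicable ceiling is whichever of the two is higher. The tier figures mirror
# the provisional agreement; the example turnover below is hypothetical.
FINE_TIERS = {
    "banned_practice": (35_000_000, 0.07),        # €35M or 7% of turnover
    "other_obligation": (15_000_000, 0.03),       # €15M or 3% of turnover
    "incorrect_information": (7_500_000, 0.015),  # €7.5M or 1.5% of turnover
}

def maximum_fine(tier: str, global_annual_turnover_eur: float) -> float:
    """Return the fine ceiling for a tier: the fixed amount or the
    turnover-based amount, whichever is higher."""
    fixed_amount, turnover_share = FINE_TIERS[tier]
    return max(fixed_amount, turnover_share * global_annual_turnover_eur)

# A company with €2 billion in global annual turnover deploying a banned AI
# practice: max(€35M, 7% of €2B) = €140M.
print(maximum_fine("banned_practice", 2_000_000_000))  # 140000000.0
```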
The new compromise agreement includes “Measures in Support of Innovation” intended to create an innovation-friendly legal framework that promotes evidence-based regulatory learning. It provides for AI regulatory sandboxes for the controlled development, testing, and validation of innovative AI systems, with provisions for real-world testing under specific conditions and safeguards. To reduce the administrative burden on smaller companies, the provisional agreement lists actions to be undertaken in support of such operators and provides for some limited, clearly specified exemptions.
Together with the White House Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, issued on October 30, this gives us two substantial regulatory frameworks establishing safeguards for the use of Artificial Intelligence. As I mentioned in my recent article about the race for AI innovation, the industry is proceeding full steam ahead, and it looks like the regulators are also making progress.
This is good news since we want progress, fairness, and safety.