European Parliament

The final version of the EU Artificial Intelligence Act: agreement reached

On December 9, 2023, negotiators from the European Parliament and the Council presidency reached an agreement on the final version of the European Union Artificial Intelligence Act (EU AI Act).[1] This landmark legislation is claimed to be the world’s first comprehensive legal framework for regulating artificial intelligence.

The EU AI Act is a significant regulatory framework designed to ensure responsible AI development and use within the European Union. It aims to balance innovation with the protection of individuals’ rights and safety. The Act was proposed by the European Commission in April 2021; the European Parliament adopted its negotiating position in June 2023; and the final text has now been agreed in negotiations between the Parliament, the Council of the European Union, and the Commission.

“The EU AI Act is a global first,” said European Commission president Ursula von der Leyen on X.[2] “[It is] a unique legal framework for the development of AI you can trust. And for the safety and fundamental rights of people and businesses.”

The EU AI Act represents one of the most comprehensive regulatory frameworks for artificial intelligence technology, particularly generative AI, but it is not the world’s first: China had already implemented its interim rules for generative AI in August 2023.[3] The EU AI Act, however, is considered more expansive and detailed in its coverage.

The way generative AI influenced the debate

The development of AI regulations has faced challenges due to the rapid evolution and expansion of AI technologies, including systems such as OpenAI’s ChatGPT. When the first draft of the EU AI Act was written in early 2021, the landscape of AI applications looked very different from today’s.

Lawmakers initially focused on regulating AI by specific use case, with each use categorized by risk level. This approach was intended to address the potential risks of AI in various domains, including aviation, education, and biometric surveillance. However, the emergence of “General Purpose AI Systems” (GPAIS) like ChatGPT, which are not designed for a single specific task but can perform a wide range of tasks, made it difficult to fit such systems into the existing regulatory framework.

The adaptability and versatility of GPAIS tools like ChatGPT raised questions about how to categorize and regulate them effectively.[4] This has led to ongoing discussions and debates among policymakers about how to address the unique characteristics and potential risks associated with these types of AI systems.

The fast-paced development of AI technologies continues to pose challenges for regulators, who must strike a balance between fostering innovation and ensuring the responsible and ethical use of AI. It highlights the need for flexible and adaptable regulatory frameworks that can keep up with the evolving AI landscape.

Key features of the approved EU AI Act

Key provisions of the EU AI Act include:

  • Prohibitions: The Act prohibits AI systems considered to pose an “unacceptable risk” from being deployed within the EU. This includes systems engaged in cognitive behavioral manipulation, social scoring, biometric categorization based on sensitive characteristics, and real-time remote biometric identification in publicly accessible spaces, with limited exceptions for law enforcement.
  • High-Risk AI: AI systems categorized as “high risk” will be subject to specific obligations. This category includes AI used in areas such as product safety (e.g., toys, aviation, medical devices), critical infrastructure management, education, law enforcement, migration and border control, and more. These systems must undergo assessment before deployment and throughout their lifecycle.
  • Foundation Models: The EU AI Act addresses foundation models, including measures to ensure compliance with European copyright law, requirements for publishing detailed summaries about the content used for training these models, and the preparation of technical documentation related to their use.

Enforcement and penalties

Member States are required to designate one or more authorities to oversee and enforce the legislation. The authorities that supervise the application of the GDPR will likewise supervise the parts of this legislation concerning personal data. Although the Act is directly applicable as an EU regulation, it still requires considerable adaptation by Member States, and concerns have been raised that sanctioning practice may diverge, possibly markedly, across countries. As under the European General Data Protection Regulation, fines for breaches of the Act will be set as either a percentage of the infringing entity’s worldwide annual revenue for the last fiscal year or a fixed amount, whichever is greater:

  • €35 million or 7% for the use of prohibited AI applications;
  • €15 million or 3% for breaches of the Act’s requirements;
  • €7.5 million or 1.5% for providing false information.

However, the administrative fines will be subject to more proportionate caps for small and medium-sized enterprises and startups. Moreover, citizens will have the right to file complaints about AI systems that negatively affect them.
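The “whichever is greater” rule behind the fine tiers above can be sketched in a few lines of code. This is purely illustrative: the tier names and the mapping below are assumptions for the sake of the example, and the Act itself defines the infringement categories, SME caps, and exact calculation in legal terms.

```python
def max_fine(worldwide_annual_revenue: float, tier: str) -> float:
    """Illustrative sketch of the Act's fine rule: the greater of a
    fixed amount and a percentage of worldwide annual revenue.

    The tier keys below are made up for this sketch; the Act does not
    use these identifiers. SME/startup caps are not modeled here.
    """
    tiers = {
        "prohibited_ai": (35_000_000, 0.07),   # prohibited AI applications
        "obligations":   (15_000_000, 0.03),   # breaches of the Act's requirements
        "false_info":    (7_500_000, 0.015),   # providing false information
    }
    fixed_amount, pct = tiers[tier]
    # Whichever is greater: the fixed amount or the revenue percentage.
    return max(fixed_amount, pct * worldwide_annual_revenue)

# A company with €1 billion in worldwide revenue deploying a prohibited
# application: 7% of €1B (€70M) exceeds the €35M fixed amount.
print(max_fine(1_000_000_000, "prohibited_ai"))  # 70000000.0
```

Note how the fixed amount acts as a floor for smaller entities: a firm with €100 million in revenue would still face the full €35 million for a prohibited application, since 7% of its revenue (€7 million) is the lesser figure.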

Recent trends indicate increasing global attention to AI governance, with various countries developing frameworks to ensure ethical and responsible AI use. The European Union’s legislation represents a significant step in this direction, setting a precedent for others to follow. As AI continues to evolve, monitoring and enforcement mechanisms, as well as sanctions for non-compliance, will likely become more sophisticated and internationally coordinated. This approach seeks to balance innovation with accountability, ensuring that AI benefits society while minimizing potential harms.
