
EU reaches historic agreement on world’s first AI Act

13 Dec 2023

On 9 December 2023, the Council of the EU (the Council) and the European Parliament reached a provisional agreement on the world's first artificial intelligence (AI) act. The AI Act aims to establish harmonised rules for AI systems in the European market, ensuring safety, respect for fundamental rights, and adherence to EU values. The agreement has been hailed as a “historical achievement”, striking a delicate balance between fostering innovation and respecting citizens' rights.

The AI Act introduces specific regulations for general-purpose AI models, emphasising transparency throughout the value chain. It also adopts a risk-based approach, categorising AI systems according to their potential for societal harm, from minimal risk through specific transparency risk and high risk to unacceptable risk, with stricter rules applying to higher-risk systems:

  • Minimal risk: The majority of AI systems fall into this category and face no new obligations. Providers of minimal-risk systems, such as recommender systems or spam filters, may voluntarily commit to additional codes of conduct.

  • High risk: Stringent requirements apply to high-risk AI systems, including risk mitigation, high-quality data sets, detailed documentation, human oversight, and robust cybersecurity. Regulatory sandboxes will facilitate responsible innovation in this category.

  • Unacceptable risk: AI systems posing a clear threat to fundamental rights will be banned. Examples include manipulative applications, "social scoring" systems, and certain uses of biometric systems.

  • Specific transparency risk: Users must be made aware when they are interacting with a machine, deep fakes must be labelled, and users must be informed when biometric or emotion recognition systems are in use.

The proposal, initially presented in April 2021, is a crucial element of the EU's strategy to promote safe and lawful AI across the EU single market, fostering investment, innovation, and a unified approach to AI applications. The agreement follows a risk-based framework, aligning with the EU's coordinated plan on artificial intelligence to accelerate AI investment in Europe. The Council reached a general approach in December 2022, and inter-institutional talks with the European Parliament began in June 2023.

Fines

The provisional agreement includes fines for violations of the AI Act, set as a percentage of the offending company's global annual turnover or a fixed amount, whichever is higher. Specific penalties are outlined for banned AI applications, breaches of AI Act obligations, and the supply of incorrect information, with fixed amounts ranging from €7.5 million to €35 million and more proportionate caps for SMEs and start-ups.

Governance architecture

The AI Act introduces a governance architecture with an AI Office overseeing advanced AI models and contributing to standards and testing practices, a scientific panel of independent experts, an AI Board for coordination, and an advisory forum for stakeholders.

Timeline

The AI Act is expected to apply two years after its entry into force, with specific provisions applying earlier.

During the transitional period, the European Commission will launch an AI Pact, bringing together AI developers who voluntarily commit to implementing key obligations ahead of the legal deadlines. The EU also plans to advocate for trustworthy AI rules internationally in forums such as the G7, OECD, Council of Europe, G20, and the UN.

The Council of the EU’s official press release can be found here.

The European Commission’s official press release can be found here.