The EU AI Act is an ambitious regulatory framework developed by the European Union and the most comprehensive attempt at regulating artificial intelligence anywhere in the world so far.

The Act was first proposed in April 2021 and entered into force on August 1, 2024. According to the European Commission, over 100 companies had pledged compliance with the EU AI Pact by late September 2024. However, major tech firms like Apple and Meta have expressed concerns about the regulatory landscape, with Apple delaying the rollout of certain AI features in the EU, citing regulatory uncertainties.

In this post, I will break down the key aspects of the EU AI Act, how it’s structured, and the implications it holds for both AI developers and users.

Key Objectives of the EU AI Act

The EU AI Act is part of the broader European strategy on AI and aims to make the EU a global leader in responsible AI by fostering trust, transparency, and accountability in AI systems across all member states.

The Act regulates AI products, not the technology itself. More specifically, it doesn't focus on technical specifications like the number of neural network layers, parameter counts, or development cost. Instead, it categorizes products that incorporate AI based on the impact and risk of the application, and the regulations for mitigating potential harms and supporting innovation scale with that risk level.

This risk-based approach contrasts fundamentally with California's SB 1047 bill, which I discussed in an earlier post.

The Risk-Based Approach

The EU AI Act classifies AI systems into four categories based on their potential risk and regulates them accordingly. The diagram below shows the four risk categories, their corresponding regulatory requirements, and examples.
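As a rough illustration, the four-tier taxonomy can be sketched as a simple lookup. The category names follow the Act, but the one-line summaries and examples here are my own shorthand, not a legal classification:

```python
# Illustrative sketch of the Act's four risk tiers; the "example" entries
# are commonly cited cases, not an exhaustive or authoritative list.
RISK_TIERS = {
    "unacceptable": {
        "regulation": "prohibited outright",
        "example": "social scoring by public authorities",
    },
    "high": {
        "regulation": "strict requirements (data governance, oversight, testing)",
        "example": "AI used in hiring or credit scoring",
    },
    "limited": {
        "regulation": "transparency obligations",
        "example": "chatbots that must disclose they are AI",
    },
    "minimal": {
        "regulation": "largely unregulated",
        "example": "spam filters, AI in video games",
    },
}

def obligations(tier: str) -> str:
    """Look up the regulatory treatment for a given risk tier."""
    return RISK_TIERS[tier]["regulation"]

print(obligations("unacceptable"))  # prohibited outright
```

The point of the structure is that obligations attach to the tier, not to any property of the underlying model.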

Requirements for High-Risk AI Systems

For high-risk AI systems, the EU AI Act imposes rigorous requirements to ensure that these technologies meet standards for safety, fairness, and accountability. These requirements include:

  • Data and Data Governance: High-risk AI systems must be trained on high-quality, representative datasets to reduce bias and ensure equitable outcomes. Developers are expected to regularly evaluate their data to prevent discrimination or harmful biases.
  • Transparency and Documentation: Developers must provide documentation that explains how the AI system operates and the logic behind its decision-making process. This documentation must be accessible to relevant authorities.
  • Human Oversight: Human oversight mechanisms are required for high-risk systems to ensure that a human can intervene in the AI system’s decision-making process if needed. This provision aims to prevent AI from fully replacing human judgment in sensitive areas like law enforcement and healthcare.
  • Robustness and Accuracy: High-risk AI systems must undergo regular testing to ensure accuracy and reliability. Developers must ensure that the systems perform consistently well under a range of conditions.

Enforcement and Compliance

The EU AI Act establishes a comprehensive enforcement framework. It has created a European Artificial Intelligence Board to facilitate collaboration and standardization across the Union. It also designates national authorities within each EU member state to monitor and enforce compliance. This approach is designed to ensure consistent application of the Act across the Union while allowing flexibility to address local contexts.

Though some parts of the Act are already active, most of its provisions will apply after a two-year transition period. There are a few exceptions:

  1. Prohibitions on unacceptable-risk products apply after six months*. This aggressive timeline is meant to mitigate the most severe risks as soon as possible.
  2. Governance rules and rules for general-purpose AI models apply after twelve months*. This 12-month timeline reflects the urgency of regulating rapidly advancing general-purpose AI while allowing developers time to comply.
  3. Rules for AI systems embedded in regulated products apply after thirty-six months*. Products like medical devices, vehicles, and machinery will need to comply with additional sector-specific standards under the Act, and this extended timeline accommodates the complexity of aligning AI-specific rules with existing EU regulatory frameworks in these industries.

*The six-, twelve-, and thirty-six-month timelines for the Act's phased implementation are counted from its entry into force on August 1, 2024.
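The phased deadlines above are simple date arithmetic from the entry-into-force date. The sketch below just adds calendar months to August 1, 2024 for illustration; under the Act's own counting rules the legal application dates land a day later (e.g. February 2, 2025 for the prohibitions):

```python
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 1)  # the Act entered into force on this date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole calendar months (day-of-month kept)."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, d.day)

# Phased application milestones counted from entry into force
phases = {
    "prohibitions (unacceptable risk)": add_months(ENTRY_INTO_FORCE, 6),
    "governance & general-purpose AI": add_months(ENTRY_INTO_FORCE, 12),
    "AI in regulated products": add_months(ENTRY_INTO_FORCE, 36),
}
for rule, milestone in phases.items():
    print(f"{rule}: {milestone.isoformat()}")
```

Running this prints February 2025, August 2025, and August 2027 milestones, matching the six-, twelve-, and thirty-six-month phases.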

The EU AI Act also establishes penalties and fines for non-compliance. Under the final text of the Act, companies found in violation may face fines of up to 7% of their global annual turnover or €35 million (whichever is higher) for the most serious violations, such as deploying prohibited AI practices. Less severe violations, like failing to meet high-risk system requirements, carry lower penalties. These fines reflect the EU's commitment to enforceable AI governance and underscore the importance of ethical compliance.
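The "whichever is higher" rule is just a max over a turnover percentage and an absolute floor. A minimal sketch, using the figures from the Act's final text (the 2021 proposal had cited 6% / €30 million, later raised to 7% / €35 million for the most serious tier):

```python
def max_fine_eur(turnover_eur: float, rate: float = 0.07,
                 floor_eur: float = 35_000_000) -> float:
    """'Whichever is higher' penalty cap: a percentage of worldwide
    annual turnover, subject to an absolute floor. Defaults reflect the
    most serious tier (prohibited practices)."""
    return max(rate * turnover_eur, floor_eur)

# Large company: the percentage dominates (7% of EUR 1B = EUR 70M > EUR 35M)
print(max_fine_eur(1_000_000_000))   # 70000000.0

# Smaller company: the floor dominates (7% of EUR 100M = EUR 7M < EUR 35M)
print(max_fine_eur(100_000_000))     # 35000000
```

The floor means the maximum exposure for a small firm can far exceed the percentage of its turnover, which is one reason compliance costs weigh especially on startups.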

Criticisms and Challenges

While the EU AI Act is widely considered a major step forward in AI regulation, it has faced some criticism. Some experts argue that the Act's definitions and categories, such as "high-risk," are too broad, which could unintentionally stifle innovation in certain sectors. Others are concerned about the costs of compliance for small businesses and startups, which may lack the resources to meet these rigorous standards.

There are also debates about how certain provisions will work in practice. For example, restrictions on biometric surveillance may impact law enforcement and public safety. Balancing these competing interests will likely require further adjustments and clarifications as the Act is implemented and tested in real-world scenarios.

Implications for AI Development and the Global Market

The EU AI Act is likely to have far-reaching effects on AI development globally. Companies entering or operating in the EU must meet its requirements, making compliance essential. Given the EU’s market size and significance, many non-European companies may adopt EU standards, which will broaden the Act’s global reach. 

Additionally, the Act could inspire other countries to implement similar frameworks. Nations like Canada and Brazil are already considering such regulations, and the EU AI Act could serve as a model for these efforts.

It will be essential to monitor how the Act is implemented, enforced, and potentially adapted to address emerging technologies and concerns.
