
The AI Act is here: what we know so far   

After a marathon negotiation process, the EU has finally reached a deal on the AI Act. What do we know so far about the package that is set to become the world’s first regulation on artificial intelligence?  

Artificial intelligence holds incredible potential, but its large-scale deployment comes with significant risks for businesses, people, and society at large. That’s why the European Union has been working on a landmark AI regulation over the past few years – the (in)famous EU AI Act.

The Act has been the subject of heated debates that culminated in the recent approval of a compromise agreement between the three main European institutions. We covered the basics of the Act here before, but now that the dust has settled, it’s worth exploring the newly approved package deal, since it’s almost certain that the Act will come into force in the next few years.  

The trilogue

We’re willing to bet that in the past weeks, no Eurobubble word has been Googled as much as “trilogue”. It’s the final stage of negotiation in the European Union’s usual law-making process – the so-called “ordinary legislative procedure”. In the trilogue, representatives from the European Commission, the Council of the EU, and the European Parliament sit together and hash out whatever still needs to be discussed to reach an agreement on a piece of legislation.

You might think that since the EU institutions are all looking out for the EU, there should be little to disagree on – but things are much more complicated than that. The Council generally defends the interests of the EU member states, the Parliament protects European citizens, and the Commission represents the EU itself. That means there are often quite fundamental differences between what the different institutions want.

While generally seen as an informal type of meeting, trilogues can last a while. In the case of the AI Act, the representatives started the trilogue process back in June 2023, and still needed record-breaking marathon talks in early December to reach a deal.

Why negotiations are so complicated

The complexity of creating legislation in the EU doesn’t stop there, however. The different institutions each have their wants and needs, but so do industry associations, civil society actors, multinational companies, and interest groups. Big Tech companies have been lobbying EU lawmakers like never before to influence – most likely to weaken – the AI Act. They often present regulation as a barrier to innovation and fear that the Act would reduce their opportunities to profit from artificial intelligence.

There’s another side to that coin, however: Big Tech companies also have a great deal of knowledge of and experience with artificial intelligence and its real-world consequences. That is partly because these companies are at the forefront of AI development, which requires resources that smaller companies simply don’t have. In a way, that makes it less surprising that lawmakers primarily met with Big Tech companies to discuss the Act. Either way, everyone wants to make their mark on a law that is set to influence one of the most important technologies of the twenty-first century, and it’s never easy to keep everyone happy.

The AI Act agreement

So what kind of agreement did the EU institutions come up with in the end? While the final text still needs to be approved in early 2024, there’s quite a bit that we already know. We covered the AI Act here before, and the general framework we explained there is intact.  

Negotiators were faced with a few particularly difficult points of discussion: what to do with general-purpose AI or foundation models, potential law enforcement exceptions for certain high- or unacceptable-risk AI (especially remote biometric identification), the governance structure and penalty regime of the new law, and measures to support innovation.  

General-purpose AI

Lawmakers have refined the requirements for systems like ChatGPT. If an AI system can be used for several very different purposes (also known as general-purpose AI), and especially if it is a large and capable general-purpose model like GPT-4 (also called a foundation model), its provider will have to comply with transparency obligations before placing it on the market. The rules for high-impact foundation models (essentially, the most capable ones) are a bit stricter still.

For companies deploying AI, this will not change much, as most of the obligations will fall on the shoulders of large tech companies. The idea behind this kind of regulation is to reduce risks “at the source” or “upstream”, so that smaller companies won’t be faced with disproportionate regulatory burdens and compliance costs. In the end, it’s better to regulate a few big players once (and early on) than to leave the thousands of companies building on their work to mitigate risks later in the deployment process.

Law enforcement exemptions

Another stumbling block has been law enforcement and military exemptions for unacceptable- and high-risk AI applications. Member states (via the Council of the EU) pushed for high-risk AI to be allowed across the board for military and security purposes. Law enforcement agencies would in some cases even be allowed to use AI that is deemed to pose an unacceptable risk, such as remote biometric surveillance systems that match faces detected on security camera footage against faces in a database to flag potential terrorists.

While some civil society actors have claimed that watering down the Act with exemptions for law enforcement would defeat the entire purpose of the law, lawmakers have pushed on, though they added specific rules on when exemptions are allowed. The new agreement allows for an emergency procedure that lets law enforcement agencies use high-risk tools without first performing a conformity assessment, so that deployment isn’t delayed. The exact details of the law enforcement exemptions will be hashed out in the coming weeks, as this is a topic that has already seen considerable discussion.

On the other hand, the agreement aims to protect citizens by forcing deployers to perform a fundamental rights impact assessment before putting a high-risk system into use, while public entities (i.e. any government-related organization) using a high-risk system must register that system in the central EU database.

Governance structure

As for the governance and enforcement structure of the Act, the Commission will set up an AI Office, advised by an independent expert panel, to enforce common rules as well as oversee standard-setting and testing for general-purpose AI models. In addition, an AI Board comprised of member states’ representatives, in turn advised by a forum of stakeholders, will coordinate and advise the Commission on the implementation of the Act. Relevant market surveillance authorities will of course also be allowed to enforce the Act’s requirements, and anyone in the EU can file a complaint for non-compliance.  

Penalty regime

The new agreement also changed the penalty regime slightly: violating the ban on unacceptable-risk AI now carries a fine of €35 million or 7% of a company’s global annual turnover, violating the Act’s other obligations €15 million or 3%, and supplying incorrect information €7.5 million or 1.5%. The agreement also places lower caps on fines for SMEs and start-ups, but it remains to be seen what the technical discussions will eventually deliver.
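To put those numbers in perspective, here’s a minimal sketch (in Python) of how the fine caps could be computed for a given company. The tier amounts come from the agreement as described above; the rule that the applicable cap is the higher of the fixed amount and the turnover percentage is an assumption borrowed from comparable EU regimes such as the GDPR, and the function and tier names are our own illustration.

```python
# Illustrative sketch only: tier amounts are from the provisional agreement as
# summarized above; the "whichever is higher" rule is an assumption mirroring
# how comparable EU regimes (e.g. the GDPR) cap fines.

FINE_TIERS = {
    "unacceptable_risk": (35_000_000, 0.07),     # €35M or 7% of global turnover
    "other_obligations": (15_000_000, 0.03),     # €15M or 3%
    "incorrect_information": (7_500_000, 0.015), # €7.5M or 1.5%
}

def max_fine(tier: str, global_annual_turnover_eur: float) -> float:
    """Return the maximum possible fine for a violation tier, assuming the
    cap is the higher of the fixed amount and the turnover percentage."""
    fixed_amount, turnover_share = FINE_TIERS[tier]
    return max(fixed_amount, turnover_share * global_annual_turnover_eur)

# Example: a company with €2 billion in global annual turnover violating the
# ban on unacceptable-risk AI would face a cap of 7% of €2 billion.
print(f"€{max_fine('unacceptable_risk', 2_000_000_000):,.0f}")
```

For a company with €2 billion in global annual turnover, the turnover-based cap dominates: 7% works out to €140 million, four times the fixed €35 million amount.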

Measures for innovation

Perhaps in response to critics arguing that the AI Act could downright kill AI innovation in the EU, the agreement also aims to make the Act more innovation-friendly. One example is a new approach to regulatory sandboxes, which allows AI innovators to test and validate their systems in real-world conditions; smaller companies will also benefit from a lighter administrative burden.

In the coming weeks, the institutions will do painstaking technical work to flesh out every little detail of the Act. Much remains to be done, so the triumphant claims of negotiators (even if they did an incredible amount of work) may be slightly overstated. Still, this is a monumental achievement for AI governance in the world, especially considering the vested interests that have tried to make their mark on the Act itself.  

To stay on top of any developments related to the Act, make sure to read our upcoming insights. If you’re interested in what the AI Act means for you and how we can help you get compliant in time, reach out to our EU AI Act Manager Koen Mathijs or contact us. 
