
EU AI Act draft consolidated text leaked online: a look at the few surprises  

Right after the consolidated text of the EU AI Act was shared with member state representatives, the draft was leaked online. The new text shows several details being fleshed out, but few substantial surprises.  

The European Union is set to pass the world’s first comprehensive regulation on artificial intelligence, the EU AI Act, in 2024. The three EU institutions involved in the legislative process (the European Commission, the Council of the EU, and the European Parliament) had already reached a political agreement on the contents of the Act in early December 2023. The core of the EU AI Act is that it is a risk-based regulation: the higher the risk an AI system poses, the stricter the requirements it is subject to. Its primary purpose is to safeguard EU citizens’ fundamental rights in the age of AI.  

Since the political agreement, the technical details of the legislation have been hammered out by EU policy experts. Now that those technical points have been finalized, the full text of what could very well be the final product of the negotiations has been leaked online. The full “four-column” document, comparable to a document with “track changes”, was recently shared with EU member states and was almost immediately leaked. Luca Bertuzzi, a technology journalist at Euractiv who has covered the EU AI Act from the very start, posted the entire file (almost 900 pages) on LinkedIn. In response, a senior policy advisor at the European Parliament posted the consolidated draft without the “four-column” layout, which still runs to over 250 pages.  

The entire document will be discussed in the Telecom Working Party of the Council of the EU (a preparatory body staffed by member state officials) and will be put to a vote in the so-called Committee of the Permanent Representatives of the Governments of the Member States to the European Union (COREPER) as soon as February 2. This means the document must be combed through by aides and assistants at breakneck pace, and such a short window may not be enough to truly work through the draft – although the vote in this case can only be a “yes or no”, as it is not a continuation or extension of the past negotiations. Technical details can still change, however.  

Generative AI and copyright

One of the major discussions of the past weeks on generative AI (AI systems that can produce text, images, or other content from natural language prompts) has been the New York Times’ lawsuit against OpenAI. The Times alleged that OpenAI’s main product, the chatbot ChatGPT, is trained on and can reproduce verbatim texts that are protected by copyright – hence, ChatGPT infringes the copyright of anyone whose (monetized) texts have been scraped from the internet and included in the training data.  

To ensure fairness for content producers such as writers, the new text obliges providers of general-purpose AI (GPAI) models who market their systems in the EU (regardless of where the model was developed) to share summaries of the data used to train and fine-tune their model, in sufficient detail to let copyright holders determine whether their content is included in the training data. As in the previous version, copyright holders can opt out of having their content used as training data. The same obligations apply to open-source models, although the regime is less strict for SMEs. 

GPAI systems with systemic risk

The recently added two-tiered approach, in which GPAI models with systemic risk are subject to stricter requirements, also remains. Those requirements entail assessing and mitigating the systemic risks, documenting and reporting incidents, developing solutions, and ensuring security. The open-source exemption for GPAI models comes with a few conditions as well: the models must be freely accessible; they must be allowed to be openly used, modified, and distributed; and all their technical details must be publicly available.    

Military and law enforcement

Another contentious point – especially for human rights organizations and other NGOs – is whether or not to exempt militaries and law enforcement agencies from certain restrictions on prohibited systems. A notable example is real-time biometric identification.  

The EU AI Act now also confirms that AI systems intended solely for military, defense, or national security purposes are exempt from its prohibitions. Furthermore, systems intended for scientific research and development are exempt from the restrictions as well – but companies developing technology in this area will eventually have to comply with all relevant rules if they want to bring a product to market.  

Prohibited systems

The EU AI Act prohibits certain applications of AI deemed to pose an unacceptable risk to the fundamental rights of European citizens. Over time, member states have increasingly pushed for exemptions for law enforcement agencies. Biometric categorization, where people are labeled based on their bodily features, movements, or behaviors, has long been a point of contention, for example. Rights groups and critics have argued that such categorization systems discriminate against gender non-conforming people, including transgender people, and can also be fundamentally biased against, for example, people of color.  

The new proposal confirms that police will be allowed to label and use biometric data (if lawfully acquired). It also reiterates that real-time biometric identification (using, for example, live security camera footage to identify people) remains generally banned, except in certain cases where police are looking for a specific victim of, or suspect in, a serious crime. In sum, the workarounds and loopholes installed in the EU AI Act over the course of the negotiations have largely been preserved.  

EU AI Act & High-risk AI systems

From a compliance perspective, high-risk systems are perhaps the most important category of AI systems in the entire EU AI Act. The harmonized product areas and the types of systems that the EU deems high-risk remain largely unchanged. The details of what ‘access to justice’ meant in the previous text have been fleshed out: systems for law enforcement, systems for migration (including asylum and border control), and systems used by or on behalf of a judicial authority are now all explicitly included in the high-risk category.  

Another change is that AI systems meant to influence voting behavior, election outcomes, or referendums are now high-risk systems, although tools used purely to organize political campaigns (administration and logistics) are not. This point may prove hugely important in the record election year of 2024, in which AI will undoubtedly be a major factor.  

Obligations for high-risk AI systems only apply from three years after the EU AI Act (AIA) enters into force. Companies should nevertheless start their compliance track much earlier than that: non-compliance carries massive fines, and starting early keeps the costs down.  

Penalty regime

The penalty regime has also been tweaked slightly (again): supplying incorrect information now carries a fine of 1% of global annual turnover instead of 1.5% (the €7.5 million figure remains unchanged), while fines for GPAI providers are set at €15 million or 3% of worldwide annual turnover. Taking OpenAI’s reported $1.6 billion annual revenue in 2023 as an example, that would amount to a $48 million fine (just under €44 million).  
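For readers who want to check the arithmetic, the back-of-the-envelope calculation behind that figure can be sketched in a few lines of Python. The sketch assumes that the applicable fine is the higher of the fixed amount and the turnover percentage, and it uses an indicative exchange rate of roughly 0.91 euro per dollar; neither assumption comes from the leaked text itself.

# Back-of-the-envelope sketch of the GPAI fine discussed above.
# Assumptions (not taken from the leaked text): the fine is the higher of
# the fixed amount and the turnover share, and USD/EUR is roughly 0.91.

FIXED_FINE_EUR = 15_000_000   # €15 million fixed amount for GPAI providers
TURNOVER_SHARE = 0.03         # 3% of worldwide annual turnover
USD_TO_EUR = 0.91             # indicative exchange rate (assumption)

def gpai_fine_eur(turnover_usd: float) -> float:
    """Indicative fine in euros for a given worldwide annual turnover in USD."""
    turnover_eur = turnover_usd * USD_TO_EUR
    return max(FIXED_FINE_EUR, TURNOVER_SHARE * turnover_eur)

# OpenAI's reported 2023 revenue of $1.6 billion:
print(f"€{gpai_fine_eur(1_600_000_000):,.0f}")  # ≈ €43,680,000, i.e. ~$48 million

At OpenAI’s scale the 3% turnover component dominates the €15 million fixed amount, which is why the illustrative fine lands just under €44 million.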

Interestingly, fines for Union institutions, agencies, and bodies are set at just €1.5 million for not obeying prohibitions and €750,000 for other infringements. While this seems like a somewhat understandable form of self-preservation, lowering the stakes for public bodies arguably runs counter to the very idea of preserving citizens’ fundamental rights, as the impact of public sector AI on citizens’ lives can be significant.  

AI Office

The AIA will be enforced by the EU AI Office, which will support national governments in enforcing the EU AI Act (collecting complaints, requesting and analyzing documents, asking for mitigation measures, etc.), while bearing the brunt of the work of policing GPAI models. More precisely, it will evaluate the models and determine whether they pose systemic risks. 

During the negotiations, as reported by Euractiv, the AI Office seemed to have been reduced from a quasi-autonomous agency to a mere extension of the Commission, but the new consolidated text states that its competences should not overlap with those of the Commission, and that it is planned to start operating near the end of February. It also has a budget line separate from the Commission’s, so its autonomy looks likely to be more or less preserved.  

To sum up, as many observers (and our own experts) predicted, the political agreement remains largely intact: the consolidated text elaborates on some points of contention rather than changing the course of the text as a whole. France might still try to derail the process at the last minute and weaken the requirements on GPAI models (to strengthen the position of the French GPAI developer Mistral), but it currently looks like it won’t find the support required to do so. It would seem that the soap opera of the AI Act negotiations is finally nearing its conclusion.  

Get ready for the EU AI Act without taking on risks

Do you want to jump on the AI train without taking on risks? Get in touch with our team via Koen Mathijs or your local Sparkle office.
