AI regulation

EU AI regulation: (in)famous?

Why you should start preparing now

AI holds immense promise for businesses, but it also comes with pitfalls. AI regulation in the form of the EU’s AI Act is set to create a risk-based regulatory regime for AI. Compliance with the AI Act will require considerable expertise – and the sooner you start, the better.

From chatbots and virtual assistants to smart automation or self-driving cars – the potential of artificial intelligence (AI) seems limitless. With ChatGPT, AI has had its breakthrough moment, stirring up media hype and catching the attention of businesses worldwide. IBM reports that about 25% of European companies are deploying AI, and about half say they’re exploring ways to start using AI. 

That should come as no surprise, as AI might be the most transformative technological advancement to go mainstream in the past few years. What leader wouldn’t want to make their business more efficient, offer better services, and increase sales? The possibilities seem endless. IBM’s report also lists which use cases are most common: 

[Chart: the most common AI use cases among European companies. Source: IBM, 2022]

It’s safe to say that AI is an incredibly versatile technology. At first glance, it seems like the perfect opportunity to rework how businesses do just about anything. It has a broad range of possible applications in nearly all sectors and, if used well, offers measurable quality and efficiency gains across the board. 

AI risks

Things aren’t that simple, though: AI requires significant resources, integration in a broader data strategy (after all, AI only works well if you have robust data pipelines), and buy-in across your organization. With AI’s great potential also comes great risk. If you’re automating things, you lose some degree of control over your processes. AI requires massive amounts of data to work well, and that data can entail intellectual property infringements or compromise people’s personal privacy. Its outputs can be biased or even discriminatory, which could lead to disastrous outcomes – as a case in point, just look at the Dutch benefit fraud scandal, in which an algorithm used by the government disproportionately flagged citizens with an immigrant background as potential fraudsters. 

That’s why governments are turning to AI regulation to manage the risks of AI, with the EU hoping to be in the driver’s seat in Europe and beyond. The European institutions are set to pass a landmark piece of legislation poised to transform the way we can legally use AI.

Unpacking the EU AI Act

When we talk about the EU, we mean the three main European institutions: the European Parliament, generally representing EU citizens; the European Commission, the EU executive chiefly defending the EU’s interests; and the Council of the EU, representing EU member states through meetings of Ministers. 

Most EU laws, like the upcoming AI Act, are created through a complicated procedure in which the Commission proposes a law, the Parliament and Council review it, and it goes back and forth between the institutions until a final agreement is reached or the negotiations fail. Throughout the process, companies and other organizations try to influence the outcome through lobbying and consulting strategies. The AI Act is currently nearing the final stretch of that process, with final approval expected in late 2023 or early 2024.

The main objective of the Act is to put in place rules ensuring that any AI system developed, sold, or used in the EU is safe. To do so, it takes a tiered, risk-based approach to AI, with each AI system classified as posing unacceptable, high, limited, or minimal risk. Generally, the higher the risk, the stricter the regulatory requirements. For example, limited-risk systems must adhere to transparency requirements, while high-risk systems must fulfil requirements related to risk management, data governance, documentation, transparency and disclosure, human oversight, robustness, accuracy, and security.
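As a rough illustration of that tiered structure, the obligations can be summarized in a simple lookup. The tier names come from the Act itself, but the mapping below is our own shorthand – in reality, classification is done against the Act’s legal text and annexes, not in code:

```python
# Illustrative shorthand for the AI Act's risk tiers and the kinds of
# obligations attached to each. Tier names come from the Act; the
# one-line summaries are our own simplification, not legal text.
RISK_TIERS = {
    "unacceptable": ["prohibited - may not be placed on the EU market"],
    "high": [
        "risk management", "data governance", "documentation",
        "transparency and disclosure", "human oversight",
        "robustness, accuracy, and security",
    ],
    "limited": ["transparency (e.g. disclose that users face an AI system)"],
    "minimal": ["no specific obligations under the Act"],
}

def obligations(tier: str) -> list:
    """Return the simplified obligation list for a given risk tier."""
    return RISK_TIERS[tier.lower()]
```

The key design point of the Act is visible even in this toy table: regulatory burden scales with risk, from an outright ban down to no specific obligations at all.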

For a great summary of the Act’s main technical points, see our previous post on the AI Act. 

What the AI regulation means for you

So what does that mean for EU companies using AI? First of all, the penalties for breaking the law are no joke: fines in the current proposal range up to 1% to 7% of a company’s worldwide annual turnover, or five to forty million euros, depending on the type of infringement. For example, providing false information about your AI systems can entail a fine of up to 1% of turnover or €5 million, while violating the bans on unacceptable-risk systems can cost up to 7% of turnover or €40 million. In other words: compliance is not optional, but crucial for the survival of your AI efforts.
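To get a feel for those numbers, here is a minimal sketch of how the fine ceilings could be computed, assuming the draft’s “whichever is higher” rule for companies. The exact mechanics depend on the final text, and the function and category names below are our own illustration, not part of the Act:

```python
# Illustrative sketch of the draft fine ceilings discussed above,
# assuming the "whichever is higher" rule for companies.
# Not legal advice; the amounts follow the proposal's figures.
FINE_CEILINGS = {
    "false_information": (0.01, 5_000_000),     # 1% of turnover or EUR 5M
    "prohibited_practice": (0.07, 40_000_000),  # 7% of turnover or EUR 40M
}

def max_fine(infringement: str, annual_turnover_eur: float) -> float:
    """Upper bound of the fine: the percentage of worldwide annual
    turnover or the fixed amount, whichever is higher."""
    pct, fixed = FINE_CEILINGS[infringement]
    return max(pct * annual_turnover_eur, fixed)
```

For a company with €2 billion in turnover, breaching a prohibition would expose it to a ceiling of 7% of turnover, €140 million – well above the €40 million floor.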

In practice, the AI regulation pushes businesses to mitigate risks, use AI ethically and transparently, and build internal awareness of what AI does and how it’s being used. You should, for instance, keep auditable logs, clearly disclose to end users that they’re interacting with an AI system and not a real person, and ensure there’s always a human (with sufficient expertise) keeping tabs on what an AI is doing. Get in touch with our AI strategy and compliance team at Sparkle to see what exactly you have to do to get ready.
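Two of those practices – disclosing AI use to end users and keeping auditable logs – can be sketched in a few lines. This is a toy illustration of the idea, with function and field names of our own invention, not a reference implementation of any AI Act requirement:

```python
import json
import time

# Toy sketch of two practices discussed above: telling end users they
# are interacting with an AI system, and keeping an auditable record
# of inputs and outputs. Names and fields are our own illustration.
AI_DISCLOSURE = "You are chatting with an AI system, not a human."

def answer_with_audit_trail(question: str, model_output: str, log: list) -> str:
    """Append an auditable log entry and return a disclosed answer."""
    log.append(json.dumps({
        "timestamp": time.time(),
        "input": question,
        "output": model_output,
    }))
    return f"{AI_DISCLOSURE}\n{model_output}"
```

In a real system the log would of course go to tamper-evident storage rather than an in-memory list, and the disclosure would be part of the product’s interface design.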

Some might say restricting high-risk systems limits innovation – but you shouldn’t be deterred by the fines and stringent requirements on riskier systems. The heavier demands placed on high-risk systems are counterbalanced by the fact that these systems often offer the greatest benefits to businesses. As the EU itself writes in the AI Act:

“The costs incurred by operators are proportionate to the objectives achieved and the economic and reputational benefits that operators can expect from this proposal.” (see page 7 in the 2022 proposal)

In plain language, that means: your investments in AI risk management and governance are worth it. With a strong AI risk management team that has compliance expertise, you can rest easier knowing you’ll stay on the right side of the law.

Why you should start preparing now

In the end, a team with proper expertise will take the brakes off innovation without exposing you to undue risks. However, setting up the proper AI policies and building awareness in your company takes time. You’ll have to put the right policies in place, but also build knowledge in your company of how AI works and why responsible use is vital to controlling regulatory, reputational, and financial risks.

To really be ready for this groundbreaking AI regulation, you need to get started early – the sooner the better. Acting now will give you a massive advantage over other companies and set you up to be an AI innovation leader in your field. Responsible AI deployment doesn’t just keep your AI efforts legal: it also lets you proudly put your innovation efforts on display to attract top talent looking to work at leading companies, without the risks hanging over your head.

Our strategy, risk management, and compliance experts at Sparkle can help you prepare for the AI Act’s sweeping regulation, ensuring that your systems are safe and compliant and that you’ve built internal expertise by the time the AI Act comes into force.

There’s one elephant in the room, though: the GDPR is still hugely important for your company’s data flows – and you need data to power AI. It’s a costly mistake to assume that the AI Act will replace existing rules on data governance, like the GDPR – they’ll both determine the future of AI deployment in businesses. 

Looking for guidance on the AI Act? Get in touch with our EU AI Act Manager Koen Mathijs.


Jens Meijen

PhD at KU Leuven & Consultant at Sparkle
