
EU AI Act: why it could accelerate innovation instead of slowing it down

AI’s potential to drive business value is massive – but using it responsibly is of vital importance. With the EU AI Act, the EU is taking a risk-based approach to regulating AI, and the importance of AI compliance has never been greater. If you take the right approach, the AI Act can even help drive innovation.

With more and more businesses deploying AI, attention to solid AI governance has grown in parallel. AI systems always carry some degree of risk: they can make or influence decisions within companies, (semi-)autonomously guide customers towards services or products, and even create content. Fortunately, our AI strategy & compliance team has everything you need to use AI responsibly and effectively.

EU AI Act

At the same time, EU lawmakers have jumped at the chance to regulate AI with the world-leading EU AI Act. The core philosophy of the AI Act (AIA) is its risk-based approach: higher-risk systems are subject to more stringent requirements.

Some stakeholders have argued that the Act (and really, any form of meaningful regulation) would limit innovation in the EU. The most ardent opponents of more AI regulation are, not unexpectedly, those with skin in the game: companies most likely to profit from a low-regulation environment.  

However, while regulation can sometimes impede growth, that is unlikely to be the case for the AI Act, for three reasons.

  1. Firstly, startups and SMEs would be exempt from many provisions in the EU AI Act and therefore face a much smaller regulatory burden than large companies. The AIA also contains specific provisions for startups and SMEs to freely experiment and innovate in regulatory sandboxes.  
  2. Secondly, the AIA creates a more consistent regulatory landscape by preventing fragmentation within the EU – your AI solution can be marketed all over the EU without incurring separate compliance costs for every jurisdiction, even if some countries might impose (limited) additional requirements for certain systems. Likewise, companies operating in different countries won’t have to tailor-make their systems for each of their branches. A digital single market is crucial for seamless digital offerings across the Union – if you can only offer your AI solution in one European country, it very likely won’t be worth the investment to build it in the first place.  
  3. Thirdly, unregulated (and poorly managed) AI poses significant reputational and operational risks for your organization. What if your company places a system on the market that produces biased and discriminatory outputs? If the media report on an issue like that, the damage to your reputation could be devastating. Or what if a tool isn’t technically robust, but you still deploy it as a core component of your business operations? A hacker could then paralyze your company within minutes. At the same time, the EU AI Act helps you foster user trust in the AI systems you’re developing or deploying, which in turn can lead to higher adoption and retention rates. After all, if your AI system isn’t trustworthy and causes a privacy leak or a discrimination scandal, people won’t use your product. Or even worse: they might come to see your company as irresponsible or downright evil.  

Following the AI Act’s requirements does not just make you compliant – it also reduces the potential for adverse events, letting you develop and deploy innovative AI solutions without causing harm. With the right business strategy, the EU AI Act is an innovation accelerator – not a barrier.

The EU’s upcoming requirements for AI systems are not just empty declarations – adhering to them is essential. Otherwise, you could face fines of up to €35 million or 7% of your global annual turnover, whichever is higher (in the current agreement) – eye-watering sums, in short.

However, many companies rolling out AI projects and systems throughout their organization are not sufficiently focused on (or aware of) compliance requirements. After all, in the past few years there was little incentive to focus on responsible, compliant AI use, because few rules were genuinely enforceable – aside from existing product safety and compliance regulations. With the AI Act, that’s all about to change. Compliance will become a core component of the AI strategy of any company looking to capitalize on the promise of AI.

Our strategy as a driver of innovation

Many companies see the compliance process as a brake on innovation. It’s much more difficult to get new ideas from development to deployment if you have to assess your adherence to several sets of rules at every step of the way. Some might see the EU AI Act as an additional set of rules that will only restrict innovation – but with the right approach, it can actually drive profits instead of racking up needless costs. 

But the potential gains of a robust AI compliance strategy go even further. Our EU AI Act readiness track allows you not just to easily get and remain compliant, but also to pave the way for further, safer innovation. Our tried-and-true strategy allows you to create standardized and easily transferable processes and policies for every type of system that you might be thinking (or dreaming) of. 

To learn more about our strategy, make sure to keep reading our future insights, or just get in touch with our EU AI Act Manager Koen Mathijs or your local Sparkle office.
