Managing AI risks: why you need external expertise to do it properly

AI has huge potential for businesses, but it also carries risks. Some companies try to manage those risks entirely in-house – but that’s not always a good idea: internal AI risk management can introduce unexpected complications.

Many larger companies have hired people to do exactly that: they have teams with compliance officers, bias analysts, and even people with backgrounds in sociology. But some smaller organizations are also trying to take risk management completely in-house. This is understandable for a few reasons.

For one, it’s usually cheaper: getting external consultants on board is never free. Another, perhaps more sensible, reason is that doing things in-house helps your team, from your engineers to your lawyers, understand your AI systems from the inside out. If you’ve done a bias assessment and implemented review protocols yourself, you build internal expertise and gain the insights you need to fix the system yourself.

But skimping on risk management doesn’t get you the protection you need. AI risks can strike at the heart of a business – just look at the AI Act’s enormous fines – and leaving everything to internal teams can be tricky.

First, when your AI team takes over risk management tasks, it can’t innovate. Your engineers can’t develop new solutions that drive value if they’re only double- and triple-checking a system’s outputs for bias, or constantly red-teaming the generative AI system that was supposed to make everything more efficient. AI engineers are in demand on the job market, so they’re not just pricey – they also want to work on things they find interesting. If you put them on evaluation duty instead of the innovation playground, you’re committing resources that could be better spent elsewhere and risking an exodus of valuable AI talent.

Some companies might be tempted to offload AI ethics or risk management tasks to an existing Data Protection Officer (DPO). That might sound like an easy fix, as DPOs are generally better equipped to manage AI risks than random employees. However, DPOs often already have a daunting range of tasks, and tacking on AI overloads them until they can’t manage their original or additional responsibilities properly. That’s why we offer AI Officers-as-a-service to manage these risks on an ongoing basis – the perfect solution to this conundrum.

Second, AI risk management demands not only incredibly specific technical expertise but also very broad business-level know-how. It encompasses testing methodologies that many AI engineers haven’t mastered, and tasking them with learning these techniques on top of their usual duties eats up even more of their time. Most engineers also lack the process and people management skills that proper AI risk management requires – and you can’t exactly send your entire technical team off to do an MBA just to manage your AI systems.

Third, internal evaluation can bring office politics into the equation. This is particularly problematic for smaller and mid-sized companies. If the people working on risk management tasks know everyone involved in a project (or, in fact, have worked on that project themselves), they might not test and evaluate as rigorously as they otherwise would, whether consciously or subconsciously.

Looked at this way, it might seem like there’s no way to win: external consultants are expensive and keep your expertise outside your walls, while internal evaluations hog human resources and lack methodological rigor.

The solution is to bring on board a trusted partner that knows your processes and ensures your expertise remains in-house. We focus on building internal expertise in AI and AI risk management through awareness workshops, and we make sure everyone in your organization gets involved and has the opportunity to speak their mind.

On top of that, we bring a neutral, cross-organizational perspective, free of office politics and personal interests. We combine deep technical expertise with holistic business-level insight to give you the best of both worlds.

Interested in AI risk management and compliance without losing control over what you’re doing? Or curious about our compliance-as-a-service offering? Get in touch with our team via Koen Mathijs or your local Sparkle office.

 
