Scope and general objectives of the AI Act
With the AI Act, the EU aims to regulate artificial intelligence (AI) and statistical models used for decision-making, in order to foster better conditions for the development and use of this innovative technology. Although AI can offer many benefits, its development also carries risks. For instance, misinterpretation of, or unjustified trust in, the technology can have serious consequences for users and citizens.
To ensure that AI is developed and deployed in a safe, responsible and reliable way, the European Commission proposed the first EU-wide regulatory framework for AI in April 2021: the AI Act. It is intended to ensure that any emerging risks or negative impacts on individuals or society are minimised as far as possible. To do so, the framework requires that AI systems be analyzed and categorized based on the risk they present to users. Different risk levels result in different degrees of regulation.
The European legislative procedure is a lengthy and complex decision-making process, and the AI Act is now in its final phase. The European Commission aims to enact the AI Act by February 2024. In addition, almost every European regulation has a transition period after final adoption, giving the market time to comply. Nevertheless, we can already make a good estimate of the framework of the AI Act and the obligations that are most likely to come. In fact, a lot of information is already available from Europe, and a growing number of organizations are already anticipating it.
Location and market presence
The European AI Act doesn’t only apply to businesses based in the EU. If you offer AI-related products or services in the EU market, regardless of where your business is headquartered, you need to be compliant with the EU AI Act.
Risk categorization & compliance
The AI Act categorizes AI systems based on the level of risk they pose, and depending on this categorization, different rules and regulations apply. This means that for each AI system, the risk is assessed and certain requirements are set on that basis. Here’s a breakdown to help determine when you might be subject to the EU AI Act:
- Unacceptable risk: If your AI system is deemed to pose an unacceptable risk to the safety or fundamental rights of individuals, it will be prohibited under the EU AI Act. Examples include systems that manipulate human behavior, exploit vulnerable individuals, or enable social scoring by governments.
- High risk: AI systems that have significant implications for user safety or their fundamental rights are classified as high risk. If your business operates in sectors like healthcare, transportation, or law enforcement, and you use AI systems, there’s a high likelihood you’ll fall under this category. High-risk systems require a conformity assessment before they can be placed on the market.
- Limited risk: AI systems that pose limited risks are subject to transparency obligations. For instance, if your AI system is a chatbot, users should be aware that they are interacting with a machine. While these systems don’t require a pre-market conformity assessment, transparency is crucial.
- Minimal risk: The majority of AI systems fall under this category. If your AI application is a video game or spam filter, for example, it’s likely to be classified as minimal risk. Such systems can operate freely without any specific legal obligations under the EU AI Act.
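To make the four-tier structure above concrete, here is a minimal sketch in Python. The category names follow the Act, but the lookup rules, practice and domain labels are our own illustrative assumptions: in reality, categorization is a legal assessment, not a table lookup.

```python
# Hypothetical illustration of the AI Act's four risk tiers.
# The sets below are assumed examples, not an exhaustive legal list.
PROHIBITED_PRACTICES = {"social_scoring", "behavioral_manipulation"}
HIGH_RISK_DOMAINS = {"healthcare", "transportation", "law_enforcement"}
TRANSPARENCY_ONLY = {"chatbot"}

def risk_tier(practice: str, domain: str) -> str:
    """Map an (assumed) practice/domain pair to one of the four risk tiers."""
    if practice in PROHIBITED_PRACTICES:
        return "unacceptable"   # prohibited outright
    if domain in HIGH_RISK_DOMAINS:
        return "high"           # pre-market conformity assessment required
    if practice in TRANSPARENCY_ONLY:
        return "limited"        # transparency obligations apply
    return "minimal"            # no specific obligations under the Act

print(risk_tier("chatbot", "retail"))              # limited
print(risk_tier("diagnosis_support", "healthcare"))  # high
```

Note that the same technique (e.g. a chatbot) can land in a higher tier depending on the domain it is deployed in, which is why the sketch checks the prohibited and high-risk conditions first.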
Additionally, the EU AI Act establishes obligations not just for providers but also for users and third parties. If you’re a distributor, importer, or user of an AI system, you might have specific responsibilities, especially if the system is categorized as high risk.
Continuous monitoring: It’s essential to note that the categorization of an AI system can change based on its application and potential impact. Regular assessments and staying updated with the EU AI Act’s provisions are crucial.
Biases, transparency, and ethics: the cornerstones of the European AI Act
Biases:
- Definition: Biases in AI refer to systematic and unfair discrimination based on features of individuals, often reflecting societal prejudices.
- European AI Act’s Stance: The EU AI Act emphasizes the importance of non-discrimination in AI systems. It recognizes that biased data can lead to biased outcomes, which can perpetuate societal inequalities. As a result, the EU AI Act mandates that AI systems used in the EU should be developed using data that is representative and free from undue biases.
- Implications for Businesses: Businesses must ensure that the data they use to train AI models is carefully vetted for biases. This might require investing in data auditing, bias detection tools, and diverse teams that can bring multiple perspectives to the table.
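One simple, widely used bias check that such a data audit might include is the demographic-parity gap: the difference in selection rates between groups. This is our illustrative example, not a method mandated by the Act:

```python
# Illustrative bias check (an assumption, not a metric prescribed by the Act):
# compare each group's selection rate and report the largest gap.

def selection_rates(records):
    """records: list of (group, selected) pairs; returns rate per group."""
    totals, selected = {}, {}
    for group, was_selected in records:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def demographic_parity_gap(records):
    """Difference between the highest and lowest group selection rates."""
    rates = selection_rates(records).values()
    return max(rates) - min(rates)

# Assumed toy data: group label and whether the system selected the person.
data = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
print(round(demographic_parity_gap(data), 3))  # group A: 2/3, group B: 1/3
```

A large gap does not prove unlawful discrimination on its own, but it is exactly the kind of signal a data audit should surface and investigate.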
Transparency:
- Definition: Transparency in AI means that the workings and decisions of AI systems are clear and understandable to humans.
- European AI Act’s Stance: The EU AI Act places a strong emphasis on transparency, ensuring that AI systems are traceable and their workings can be explained. It’s not enough for an AI system to make a decision; it should also be able to clarify how it arrived at that decision.
- Implications for businesses: Businesses will need to adopt AI systems that offer explainability features. This might mean prioritizing interpretable models over more complex ones or investing in explainability tools. Transparent AI practices will not only ensure compliance but also build trust with customers and stakeholders.
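As a sketch of what "prioritizing interpretable models" can mean in practice: with a linear scoring model, each feature's contribution to a decision is simply its weight times its value, so the decision can be decomposed and explained. The model, weights, and feature names below are assumptions for illustration only:

```python
# Assumed linear scoring model: the decision decomposes into per-feature
# contributions (weight * value), which is one simple form of explainability.
WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
BIAS = 0.1

def score_with_explanation(features):
    """Return the total score plus each feature's contribution to it."""
    contributions = {name: WEIGHTS[name] * features[name] for name in WEIGHTS}
    total = BIAS + sum(contributions.values())
    return total, contributions

total, why = score_with_explanation(
    {"income": 2.0, "debt": 1.0, "years_employed": 3.0}
)
print(round(total, 2))  # 1.2
for name, c in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: {c:+.2f}")  # largest contributions first
```

More complex models can be paired with post-hoc explainability tools, but the trade-off is the same: the easier a decision is to decompose, the easier the transparency obligation is to meet.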
Ethics:
- Definition: Ethics in AI refers to ensuring that AI systems are developed and used in ways that are morally right and respect human rights.
- European AI Act’s stance: The EU AI Act underscores the ethical use of AI, ensuring that AI systems respect users’ privacy, dignity, and rights. It prohibits AI practices that are deemed harmful or manipulative, such as social scoring by governments.
- Implications for businesses: Ethical considerations should be at the forefront of AI adoption in businesses. This means ensuring that AI systems are used in ways that align with societal values and norms. Businesses should also be prepared for ethical audits and should consider setting up ethics committees or consulting with external ethics experts.
Penalties for non-compliance
Roles & responsibilities
The European AI Act establishes a clear framework for the roles and responsibilities of various stakeholders involved in the AI ecosystem. Two of the most crucial roles in this context are that of the producer and the deployer. Understanding these roles is vital for businesses to ensure compliance and responsible AI practices.
1. Producer:
The producer, often referred to as the developer or provider, is responsible for creating and supplying the AI system. Their responsibilities under the European AI Act include:
- Risk assessment: Before introducing an AI system to the market, the producer must conduct a thorough risk assessment to categorize the system based on its potential impact on users and society.
- Quality control: Producers are required to implement robust quality management systems, ensuring that the AI system’s development and updates adhere to the standards set by the EU AI Act.
- Transparency: Producers must provide clear documentation about the AI system, including its purpose, capabilities, limitations, and potential risks. This documentation should be easily accessible to users and regulators.
- Continuous monitoring: Even after deployment, producers are expected to monitor the performance of their AI systems, ensuring they remain compliant and addressing any issues that arise.
2. Deployer:
The deployer, or user, is the entity that integrates and uses the AI system within a specific context or application. Their responsibilities include:
- Proper use: Deployers must ensure that they use AI systems as intended by the producer and in accordance with the provided documentation.
- Feedback loop: Deployers should provide feedback to producers about the AI system’s performance, especially if any discrepancies or issues arise during its operation.
- User awareness: If the AI system interacts directly with end-users, deployers are responsible for informing those users that they are interacting with an AI, ensuring transparency.
- Compliance with local regulations: Deployers must be aware of local regulations and ensure that the AI system’s deployment aligns with these rules, especially if they differ from the European AI Act’s stipulations.
In conclusion, the European AI Act emphasizes collaboration between producers and deployers, ensuring that AI systems are developed, deployed, and operated responsibly. Both roles come with their set of obligations, and understanding these is crucial for businesses to remain compliant and foster trust in their AI-driven solutions.
Wondering how the AI Act will impact your business? Now is the time to act!
Sparkle offers an EU AI Act (AIA) readiness track. Going through this track will prepare your organization and AI systems for compliance. Get in touch to find out more.
An overview of Sparkle's EU AI Act Readiness Track:
- Awareness workshop: develop a common understanding about what the EU AI Act is all about.
- Assessment of AI projects and tracks: get an overview of all your AI solutions and statistical approaches that fall under the EU AI Act.
- Model risk framework: gain a good understanding about how to apply the model risk framework.
- Conformity assessment: get an overview of the risk categories for all your AI solutions and statistical approaches.
- Final report and recommendations: get clear recommendations and steps you need to take to become compliant with the EU AI Act.
Read “The EU AI Act FAQ guide” for more information about the act.
ABOUT THE AUTHOR
Jarne Swennen
Junior Business Analyst