Decision tree

Classifying your AI system according to the EU AI Act

On August 1, 2024, the EU AI Act entered into force. Many companies are wondering whether this new regulation applies to their AI systems and, if so, how they can become compliant.

The Act uses a risk-based framework, categorising AI systems into different risk levels according to their potential impact on individuals and society. The first step towards compliance is to classify your AI systems into these risk levels, such as prohibited or high-risk.

To simplify this process, we have created a decision tree based on the official text of the AI Act. This tool will guide you through a series of questions to classify your AI systems correctly, so you can understand which obligations you need to meet and what actions you should take.

What's inside?

In the decision tree, you will find a series of questions designed to help you determine whether the AI Act applies to your AI system and, if so, which category it falls into based on its risk level: prohibited, high-risk, or transparency risk. We have also included references to the official text of the AI Act. Note, however, that the decision tree simplifies the process by omitting some exceptions and details, so be sure to review the relevant articles for your AI system to ensure you don't miss anything.
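To give a feel for how such a question-driven classification works, here is a minimal, illustrative sketch in Python. The questions, their wording, and the function names are simplified placeholders of our own, not the actual decision tree from the whitepaper, which asks far more questions and covers the Act's exceptions.

```python
def ask(question: str) -> bool:
    """Ask a yes/no question on the command line."""
    return input(f"{question} [y/n] ").strip().lower().startswith("y")

def classify_ai_system() -> str:
    """Walk through a heavily simplified series of yes/no questions and
    return an indicative risk category under the EU AI Act. Illustrative
    only; consult the official text for a real classification."""
    if not ask("Does your system meet the AI Act's definition of an AI system (Art. 3)?"):
        return "out of scope"
    if ask("Does it use a practice prohibited under Article 5 (e.g. social scoring)?"):
        return "prohibited"
    if ask("Is it a safety component or used in an Annex III area (e.g. recruitment)?"):
        return "high-risk"
    if ask("Does it interact with people or generate content (Article 50)?"):
        return "transparency risk"
    return "minimal risk"

if __name__ == "__main__":
    print(f"Indicative classification: {classify_ai_system()}")
```

Each question maps to a branch of the tree: a "yes" on an earlier, stricter category short-circuits the rest, mirroring how the Act's prohibited practices take precedence over high-risk and transparency obligations.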

If you still have questions about classifying your AI system or if you're interested in how ML6 can assist you with compliance under the AI Act, please feel free to reach out.

Get the decision tree

Complete the form below to access the whitepaper containing the decision tree.