Navigating the EU AI Act

The EU AI Act represents a pivotal development in AI governance, setting new standards for businesses. This page highlights the Act's key requirements and how ML6 is guiding clients towards compliance.

OVERVIEW

The EU AI Act at a glance

The development and use of AI are already embedded in a vast landscape of rules, such as GDPR or copyright regulations. This regulatory environment is now joined by the EU AI Act, which will be the first horizontal regulation presenting a set of requirements applicable to AI systems. It is aimed at enhancing transparency and minimising the risks of AI.

Timeline

The EU published the first draft of the AI Act in 2021, and the final text is expected to be published in May 2024. The AI Act will become applicable two years after it enters into force, except for some specific provisions: the prohibitions will apply after six months, while the rules on General Purpose AI will apply after 12 months.

Risk categories

The AI Act classifies AI systems into different risk categories: unacceptable risk (prohibited practices), high risk, limited risk, and minimal risk.

Each risk category has different requirements and obligations. Companies will need to identify in which category each of their AI systems will fall and which role (provider or deployer) they are taking on.

At ML6, we have been closely following the developments of the AI Act and other relevant regulations. We are eager to design & develop compliant and trustworthy AI systems and guide our clients through the upcoming changes.

HIGH-RISK USE CASES

Tackling high-risk use cases

The AI Act will consider specific use cases high-risk. These are use cases that usually have a high impact on individual lives and will require significant investments and processes. High-risk, as classified by the AI Act, does not mean a use case should not be implemented. It does mean, however, that it needs to be implemented with more care and additional compliance measures.

Part of developing high-risk use cases is a risk management framework, a systematic way to identify, mitigate and monitor risks before, during and after development.

Download the framework

Risk management

Providers of high-risk AI systems will need to develop a Quality and Risk Management system to ensure that risks are continuously identified, mitigated, and monitored, at both an organisational and a use-case level. This requires a systematic approach, embedding risk management and Trustworthy AI in all processes.

IMPLICATIONS

Implementing the EU AI Act in practice

If you are in an industry or business function where high-risk use cases may arise, it is important to know the AI Act and its requirements, and to embed them in a systematic set of risk management processes.

Even if you are not in an industry where high-risk use cases are common, it makes sense to have a basic understanding of the Act and to follow specific requirements voluntarily: building legally compliant and ethically sound AI systems builds trust, fosters adoption among users, and reduces regulatory and reputational risks.

AI Act at ML6

How ML6 helps clients

At ML6, we have been following regulatory developments for a long time and are proactively implementing measures to ensure we can comply with the AI Act and other relevant regulations as soon as they come into force. Our longstanding commitment to Ethical and Trustworthy AI has given us strong expertise and experience, which significantly aids our efforts to meet regulatory requirements.

With our combined legal, ethical, and technical knowledge, we are uniquely positioned to deliver compliant AI solutions, reducing the regulatory burden on our clients. Our past emphasis on ethics reinforces our readiness and capability to adapt to these regulations effectively. We also advise our clients on implementing the necessary measures on their side, ensuring they are well prepared to navigate this evolving landscape alongside us.

Delivering compliant AI solutions

We are proactively incorporating the AI Act's compliance requirements into our operational workflows. We build AI solutions for our clients that are both legally compliant and ethically sound. We embed best practices in our development processes and, based on our extensive experience in building trustworthy AI solutions, are confident in delivering even high-risk projects.

Advising clients on compliance & risks

Our experience in conducting detailed risk assessments enables us to identify and mitigate potential high-risk scenarios effectively. We have streamlined our sales process to systematically identify ethical and legal risks, so we can advise our clients and ensure that the necessary risk mitigation measures are taken into account from the start.

AI Act & ML6’s focus on ethical and trustworthy AI

At ML6, we don’t start from scratch when it comes to regulatory compliance. Our long-standing focus on developing ethical AI solutions, as well as our expertise with other regulations such as GDPR, equips us well for AI Act compliance. We see regulation as the minimum and, as a company, go beyond it by systematically embedding Trustworthy AI practices in our processes.

ML6 already has strong measures in place to ensure this:

Learn more about Ethical AI at ML6

Ethical AI Board

Ethically sensitive projects, whether from the perspective of upcoming regulation or based on our defined principles and red lines, are discussed in an internal ethical advisory board. The board's goal is to identify risks early, define mitigation measures, and bring diverse opinions together.

Employee awareness & training

At ML6, we train all our employees to identify ethical and legal risks early on and have built a culture where risks can be identified and raised. We support employees with best practices and research, allowing us to design and develop our solutions in a trustworthy and compliant manner.

Risk assessments

When needed, for example in high-risk projects, we conduct an in-depth ethical risk assessment, outlining and documenting the benefits and risks of a project along the EU's seven dimensions of Trustworthy AI, as well as potential risk mitigation measures.

Learn more

As pioneers in the field, ML6 invites businesses to leverage our experience and expertise in crafting AI solutions that innovate while adhering to the highest ethical standards. Reach out to us today for consultation or collaboration, and let's pave the way for responsible AI together.

Contact us

Get in touch

Pauline Nissen

Ethical AI expert
