
Louis Vanderdonckt
AI consultant
TL;DR: The EU AI Act’s obligations for general-purpose AI models and high-risk AI systems are fast approaching, yet businesses face significant legal uncertainty because crucial compliance tools are delayed. Despite the "Stop-the-Clock" talks, pausing your AI governance efforts is ill-advised: the risks are here now, future compliance is vital, and ethical AI simply leads to better products. Proactive steps in AI literacy and AI governance, and keeping track of relevant guidelines and standards, are crucial for businesses navigating the evolving landscape.
The EU AI Act, the EU’s landmark legislation on the development and deployment of artificial intelligence systems within the European Union, entered into force in August 2024. Modeled after classic product legislation, its primary regulatory focus is on providers and deployers of high-risk AI systems. It achieves this by defining, among other things, the essential requirements that these systems must meet to guarantee safety and the protection of fundamental rights. In addition, the AI Act imposes obligations on providers of general-purpose AI (GPAI) models.
It follows a staggered implementation timeline, with different rules coming into effect at different times over the next few years. With the rules on prohibited AI practices and AI literacy already applicable since February of this year, the next important deadline is 2 August 2025, when the rules for GPAI model providers become applicable. This is also the deadline for Member States to designate the national competent authorities tasked with enforcing the AI Act, and to lay down and implement rules on penalties and fines for infringements of the AI Act.
A year later, on 2 August 2026, most obligations related to high-risk AI systems will come into effect.
Similar to other pieces of product legislation, technical standards define concrete approaches that can be adopted to meet the AI Act’s requirements in practice.
For high-risk AI systems, those are the so-called harmonized standards. Adherence to these technical standards creates a presumption of conformity. European standardisation organisations, led by CEN and CENELEC, are in the process of drafting the necessary AI standards, following a request from the European Commission.
These instruments are crucial for giving stakeholders legal certainty and for lowering compliance costs.
For GPAI models, the relevant obligations are being formalized within the Code of Practice. This document is currently under development by the EU AI Office, in collaboration with industry representatives and other stakeholders.
Both the Code of Practice and the harmonized standards have suffered significant delays. The Code of Practice was supposed to be finalized at the beginning of May 2025. The harmonized standards were expected to be finalized well in advance of the August 2026 implementation date, giving providers of high-risk AI systems sufficient time to align with them. For some of the harmonized standards, however, the timelines communicated by CEN and CENELEC extend beyond the application date of the high-risk AI system obligations.
With crucial compliance tools and guidance still missing and application dates fast approaching, businesses are left in a state of significant legal uncertainty.
This has prompted some policy makers to suggest pressing pause on the implementation of the AI Act - the so-called “Stop-the-Clock” proposal. The details are not yet clear - some EU Member State representatives have proposed a delay of two years - and no formal legislative action has been taken to delay the Act. At the time of writing, there are in fact signs that the Code of Practice and the accompanying Guidelines on GPAI models might be published in the first half of July.
All this legal uncertainty raises the question: is the AI Act on the ropes? Should companies scale back their compliance efforts and adopt a wait-and-see approach?
We would advise against that.
While the "stop-the-clock" discussions introduce an element of uncertainty, they also reflect a commitment from EU institutions to ensure a practical and effective implementation of the AI Act.
What is more, amidst all these discussions, the European Commission sent a strong signal by urging Member States to include a fallback clause in their national AI sanction regimes. This would ensure that all potential violations of the AI Act are sanctionable, not just those explicitly listed in the Act’s own sanctioning regime. That would be a game-changer for the AI literacy obligation, the breach of which is - strangely enough - not directly penalized under the AI Act. The same goes for breaches of the obligations regarding fundamental rights impact assessments and the right to explanation.
There are also other reasons, beyond the fact that the law says so, why companies should not roll back their efforts to comply with the AI Act’s obligations.
Most importantly, these rules reflect the principles outlined in the Ethics Guidelines for Trustworthy AI. Is the impending AI Act the sole reason to build a trustworthy AI system? We hope not.
The delay in the rules should not be interpreted as lessening the importance of proper AI governance. For one, the risks associated with AI systems are already present today, even if the regulatory framework is lagging behind.
Secondly, the AI systems built today will have to comply with the rules of tomorrow. Integrating trustworthiness and AI ethics shouldn't be an afterthought but a foundational element, embedded from the initial design phase of your AI system development cycle.
For organizations new to product legislation, particularly those qualifying as Annex III High-Risk AI system providers, the path to compliance will be demanding. Establishing robust risk management and quality management systems - obligations familiar to those accustomed to product legislation - requires considerable time and dedicated effort. Without pre-existing frameworks to build upon, this preparatory period becomes even more crucial.
Finally, we are firm believers that embedding trustworthiness and ethics in your AI system development and deployment results in better, safer AI systems and products.
If the complexities of AI Act compliance feel overwhelming, don't navigate them alone. ML6’s AI Governance experts help organizations translate regulation into action, minimizing risk while building trust with users. Follow our blog for ongoing updates and expert insights, or reach out to us directly for tailored guidance. We're here to help you stay ahead.
Update: Developments in this space move incredibly fast. In the meantime, EU regulators have clearly indicated that there will be no stopping the clock: the provisions of the AI Act will enter into application according to the initially planned timeline. What does this mean for you? Legal uncertainty will remain, and you will have to monitor the space closely.
Your must-do checklist:
- Establish accountability (assign an AI Compliance Officer)
- Map your entire AI inventory (all AI use cases); see the sketch after this list
- Identify which systems are “high-risk”
- Run a gap analysis on those high-risk systems → prepare the conformity assessments
- Lock down transparency requirements
- Install a solid QMS (quality management system) to keep everything repeatable across the organization
- Be ready to prove compliance to clients: think GDPR-style vendor questionnaires on steroids
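To make the inventory and classification steps concrete, here is a minimal, purely illustrative sketch of what an internal AI system register could look like in Python. The field names, the `RiskClass` categories, and the `flag_high_risk` helper are our own assumptions for illustration - they are not terminology or structure mandated by the AI Act, and your actual register will need far richer metadata.

```python
from dataclasses import dataclass
from enum import Enum

class RiskClass(Enum):
    # Illustrative buckets loosely mirroring the AI Act's risk-based approach
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"        # e.g. Annex III use cases
    LIMITED_RISK = "limited-risk"  # transparency obligations apply
    MINIMAL_RISK = "minimal-risk"

@dataclass
class AISystemRecord:
    name: str
    use_case: str        # what the system does, in business terms
    role: str            # your role under the AI Act: "provider" or "deployer"
    risk_class: RiskClass
    owner: str           # accountable person, e.g. the AI Compliance Officer
    notes: str = ""

def flag_high_risk(inventory: list[AISystemRecord]) -> list[AISystemRecord]:
    """Return the records that need a gap analysis and conformity assessment."""
    return [r for r in inventory if r.risk_class is RiskClass.HIGH_RISK]

# Example usage with two hypothetical systems
inventory = [
    AISystemRecord("cv-screener", "ranking job applicants", "deployer",
                   RiskClass.HIGH_RISK, "compliance@example.com"),
    AISystemRecord("support-chatbot", "customer FAQ assistant", "provider",
                   RiskClass.LIMITED_RISK, "compliance@example.com"),
]
for record in flag_high_risk(inventory):
    print(f"Gap analysis needed: {record.name} ({record.use_case})")
```

Even in this toy form, the register forces the questions that matter: what the system is used for, whether you act as provider or deployer, which risk class applies, and who is accountable - exactly the information the gap analysis and conformity assessments will build on.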