TL;DR. On 2 August 2026 — 118 days from now — the EU AI Act enters full application for high-risk AI systems listed in Annex III. The agentic AI wave is already reaching enterprise workflows, with compliance programmes still catching up. Penalty for non-compliance: up to €15 million or 3% of global annual turnover, whichever is higher, per Article 99 of the regulation.
What activates on 2 August 2026
Regulation (EU) 2024/1689 applies in structured layers. The ban on prohibited AI practices under Article 5 took effect on 2 February 2025. Provisions on general-purpose AI models (Chapter V), together with the governance and penalties chapters, have applied since 2 August 2025; the transparency obligations of Article 50 join the regime on 2 August 2026, with the general application date. The next milestone is the most operationally demanding.
On 2 August 2026, the high-risk regime of Chapter III (Articles 6 to 49) applies in full to AI systems listed in Annex III. Those systems span defined domains: biometric identification and categorisation, critical infrastructure management, education and vocational training, employment and HR management, access to essential services, law enforcement, migration and border control management, and administration of justice and democratic processes. Any organisation deploying AI agents in these contexts faces binding, enforceable obligations from that date.
The regulation draws a precise line between providers — who develop or place a system on the market — and deployers — who use it in a professional context. A provider must complete a conformity assessment and assemble technical documentation per Annex IV. A deployer carries distinct obligations under Articles 26 and 27: human oversight, a fundamental rights impact assessment for certain categories of deployer, and notification to the national competent authority in defined cases. Both roles can coexist within a single organisation.
Three advantages of preparing now
- Technical documentation is assembled incrementally, not overnight. Annex IV requires a complete description of the system, training data, robustness measures and performance metrics. Assembling this retrospectively in the final weeks is not feasible; 118 days, approached methodically, are enough to build a solid dossier.
- Automatic logging must be embedded before deployment, not bolted on after. Article 12 requires automatic log-keeping for high-risk AI systems. Retrofitting this into existing technical architectures takes development time: anticipating it avoids a crisis rebuild under deadline pressure.
- Early conformity is a measurable commercial differentiator. European public buyers and large corporates are beginning to include AI Act compliance in procurement criteria. An attestation of conformity before August 2026 becomes a concrete competitive advantage in second-half 2026 tender processes.
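The "embed logging before deployment" point on Article 12 can be made concrete in a few lines. This is a minimal sketch only: Article 12 prescribes automatic recording of events, not a schema, so the `audited` decorator, the `hr-screening-v2` identifier, the JSON fields and the placeholder scoring logic below are all illustrative assumptions, not an official format.

```python
import json
import logging
from datetime import datetime, timezone
from functools import wraps

# Illustrative audit channel; in production this would feed tamper-evident storage.
logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

def audited(system_id):
    """Wrap an AI decision function so every call is recorded automatically."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            audit_log.info(json.dumps({
                "system_id": system_id,
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "function": fn.__name__,
                "inputs": repr((args, kwargs)),
                "output": repr(result),
            }))
            return result
        return wrapper
    return decorator

@audited(system_id="hr-screening-v2")  # hypothetical system identifier
def score_candidate(profile):
    # Placeholder logic standing in for a real model call.
    return {"score": 0.82, "reason": "experience match"}
```

The design point is that logging is woven into the call path itself, so no decision can bypass it; bolting a logger onto an existing pipeline after the fact rarely achieves that guarantee.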
Three risks of waiting
- Sanctions apply from the first day. Article 99 provides for fines of up to €15 million or 3% of global annual turnover, whichever is higher, for breaches of obligations related to high-risk AI systems. SMEs benefit from a proportionality rule, with the lower of the two thresholds applying, but the compliance deadline is identical for all organisations.
- Operational suspension is the real business risk. Article 79 authorises national competent authorities to require the restriction or withdrawal from the market of a non-compliant system. An organisation whose HR processes or client scoring depend on an AI agent could be forced to halt those operations.
- Regulatory overlap multiplies the compliance debt. The AI Act layers on top of the GDPR — it does not replace it. An AI agent processing personal data must satisfy both regimes simultaneously. Waiting until July 2026 means correcting two compliance gaps under maximum time pressure.
The European picture in April 2026
The European AI Office, established within the Commission to oversee compliance by GPAI providers, published its first draft codes of practice in 2025. On the deployer side, the agentic wave is accelerating faster than compliance programmes: Google and Kaggle have opened enrolment for a five-day AI agents intensive course scheduled for June 2026, per the official announcement of 27 April 2026, a signal that agent deployment is entering mainstream professional practice. At the same time, Google has embedded agentic safety and policy controls directly into its Google Ads Advisor tool, per the official publication of 21 April 2026. The lesson is the speed at which these systems move from prototype to live operational workflow, precisely the deployment pattern the EU legislator anticipated.
Three levers to activate this week
- Map all AI deployments against Annex III. List every AI system in operational use — agents, scoring tools, HR recommendation systems, chatbots in regulated contexts — and check each against the Annex III high-risk categories. This mapping can be completed in one day with a legal and a technical lead in the room.
- Establish your legal status for each system. Provider or deployer? The determination drives the full set of applicable obligations. A system developed in-house makes the organisation a provider; a system procured from a third-party vendor makes it a deployer. Deep customisation of a third-party system can shift the organisation into the provider role under Article 25 and warrants a dedicated legal analysis.
- Schedule the fundamental rights impact assessment for qualifying systems. Article 27 provides for a fundamental rights impact assessment for certain deployers of high-risk AI systems, notably public bodies and private entities providing public services. This assessment must be conducted before deployment — or, for systems already in production, before 2 August 2026.
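The one-day mapping exercise above is, at its core, an inventory join. A minimal sketch, assuming a hand-maintained inventory; the category keys paraphrase Annex III headings and carry no legal weight — the actual qualification of each system belongs to counsel, not a script:

```python
# Paraphrased Annex III areas (illustrative labels, not legal categories).
ANNEX_III_AREAS = {
    "critical_infrastructure",
    "education",
    "employment",
    "essential_services",
    "law_enforcement",
    "migration_border",
    "justice_democracy",
}

# Hypothetical internal inventory: system name -> (role, declared area or None).
inventory = {
    "cv-ranking-agent": ("deployer", "employment"),
    "marketing-copy-bot": ("deployer", None),  # outside Annex III scope
    "loan-scoring-model": ("provider", "essential_services"),
}

def high_risk_candidates(inv):
    """Flag systems whose declared area falls under an Annex III heading."""
    return sorted(
        name for name, (_role, area) in inv.items()
        if area in ANNEX_III_AREAS
    )

print(high_risk_candidates(inventory))
# prints ['cv-ranking-agent', 'loan-scoring-model']
```

Even this toy version forces the two questions the legal and technical leads must answer together for every system: which Annex III area it touches, and which role the organisation holds.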
Can your organisation prove today that its AI agents comply with the AI Act?
If this analysis speaks to you, I publish a piece of this calibre every day on digital innovation and enterprise AI. 👉 Get the next one straight in your inbox — sign-up takes ten seconds, and each edition is read before 9 a.m. by leaders of European SMEs, mid-caps and public institutions.
This article is part of the Neurolinks AI & Automation blog.