OPINION

EU AI Act deployer evidence gaps SMEs will miss before 2 Aug. 2026

Organizations must build systems now to meet the unchanged 2 Aug. 2026 deadline.


Contributors:

Abhishek Sharma

Founder

Move78 International

Editor's note

The IAPP is policy neutral. We publish contributed opinion pieces to enable our members to hear a broad spectrum of views in our domains. 

I have run deployer readiness assessments for about a dozen organizations over the past year. The pattern is consistent. Teams spend weeks debating whether the organization is a provider or deployer under the EU Artificial Intelligence Act, then stop.

That is only one classification question. The second is whether a specific system is prohibited, high-risk, limited-risk or outside those categories. Both questions matter, but neither is the finish line. For deployers using Annex III high-risk AI systems, classification is the starting gate.

For small and medium-sized enterprises using Annex III high-risk AI systems, 2 Aug. 2026 remains the operational date to plan against unless the law is formally amended. The question will not be whether the legal team can recite Article 26; it will be whether the organization can show evidence of how each system is used, monitored, logged and escalated throughout its lifecycle.

In my readiness work with deployers of Annex III high-risk AI systems, the same evidence gaps around Article 26 and Article 27 come up again and again. The five gaps below are the misses I encounter most often. This list does not cover every deployer duty under the EU AI Act. Article 4 AI literacy, for example, has applied since 2 Feb. 2025 and needs its own implementation track. Article 50 transparency obligations may also be relevant depending on the system and use case.

'That is the vendor's job'

The misconception starts with Article 26. Many deployers read the provider obligations in Articles 9 through 15, see the weight of conformity assessment and technical documentation on the vendor side, and conclude the hard part sits upstream.

It does not. Article 26 gives deployers their own obligations. The vendor's certifications, audit evidence and system profile are not a substitute for the organization's own evidence.

Could the compliance team produce, by next week, an internal record showing how each high-risk AI system is used inside the organization? Usually, the answer is no. Not because people are careless, but because they assumed vendor documentation would cover the deployer's own processes.

A provider can share instructions for use, system documentation and testing information. The deployer still has to evidence what happened inside its own operating environment.

The five evidence gaps I keep finding

The first gap is inventory. Article 26 does not use the phrase "AI system inventory." But without one, a deployer will struggle to evidence use in accordance with the instructions for use, human oversight, monitoring, log retention and escalation.

When I ask a room to name every AI system in use, I get the obvious three or four. Then the human resources team mentions a screening tool in pilot. The legal team mentions contract classification. The compliance picture changes in 20 minutes.
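To make this concrete, the sketch below shows what a single inventory entry might capture, written in Python only because a structured record is easier to show than to describe. The field names and the example system are my own illustration, not a template from the AI Act or any regulator.

from dataclasses import dataclass

@dataclass
class AISystemEntry:
    name: str                   # internal name of the system
    vendor: str                 # provider supplying the system
    business_owner: str         # accountable function or person
    intended_purpose: str       # how the organization actually uses it
    annex_iii_candidate: bool   # could it plausibly fall under Annex III?
    classification: str         # high-risk, limited-risk or out of scope
    oversight_owner: str        # named human oversight assignee, if any
    log_location: str           # where automatically generated logs sit

cv_screening = AISystemEntry(
    name="CV screening pilot",
    vendor="Example Vendor Ltd.",
    business_owner="Head of Talent Acquisition",
    intended_purpose="Rank incoming applications for recruiter review",
    annex_iii_candidate=True,
    classification="High-risk (Annex III, employment)",
    oversight_owner="Senior recruiter, Talent Acquisition",
    log_location="Vendor portal export, mirrored to internal archive",
)

Even a spreadsheet with these columns is enough. The point is that the record exists, is kept current and has an owner.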

The second gap is classification rationale. Most teams have opinions. Few have a written policy or approved decision. "It is just a chatbot" is not a legal analysis. For Annex III systems, classification turns on intended purpose, function, use context and how the system is actually deployed. A conversational interface answering general employee questions is one thing. A system ranking job applicants, routing insurance claims or assessing creditworthiness is another. If there is no approved note explaining why a system is or is not high-risk, the decision is not strong enough to defend.

The third gap is human oversight documentation. Article 26(2) requires deployers to assign human oversight to natural persons with the necessary competence, training, authority and support. In practice, teams point to an organizational chart and assume that is enough.

An organizational chart is not an oversight control. The evidence is the named reviewer, decision authority, intervention trigger, escalation route and record of what happens when the AI system behaves abnormally.
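As an illustration, an oversight assignment for one system can be recorded in a handful of fields. The structure below is a sketch of my own; the names and values are hypothetical.

from dataclasses import dataclass

@dataclass
class OversightAssignment:
    system_name: str           # which high-risk AI system this covers
    reviewer: str              # named natural person assigned to oversee it
    decision_authority: str    # what the reviewer may approve, override or block
    intervention_trigger: str  # condition that requires the reviewer to step in
    escalation_route: str      # who is informed when the system behaves abnormally
    training_record: str       # evidence of the reviewer's competence and training

assignment = OversightAssignment(
    system_name="CV screening pilot",
    reviewer="Senior recruiter, Talent Acquisition",
    decision_authority="May override or reject any automated ranking",
    intervention_trigger="Ranking diverges from manual review on sampled cases",
    escalation_route="HR compliance lead, then the AI governance committee",
    training_record="Vendor oversight training completed, certificate on file",
)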

The fourth gap is monitoring and log retention proof. Article 26(5) requires deployers to monitor the operation of the high-risk AI system based on the instructions for use. Article 26(6) requires deployers to keep automatically generated logs, to the extent those logs are under their control, for a period appropriate to the system's intended purpose and at least six months, unless other EU or national law provides otherwise.

Most mid-market companies have retention policies. What they often lack is proof those policies cover the specific AI systems in scope and that someone knows how to retrieve the relevant logs.
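A simple way to test this is to compare, for each in-scope system, the oldest log the team can actually retrieve against the six-month floor in Article 26(6). The Python sketch below is a rough, hypothetical check, not a compliance tool; a system deployed less than six months ago will have a shorter history for legitimate reasons.

from datetime import datetime, timedelta

MIN_RETENTION = timedelta(days=183)  # approximation of "at least six months"

# Hypothetical register: oldest log the team could actually retrieve per system.
oldest_retrievable_log = {
    "CV screening pilot": datetime(2026, 1, 15),
    "Claims routing model": datetime(2026, 6, 1),
}

def retention_gaps(as_of: datetime) -> list[str]:
    """Flag systems whose retrievable log history is shorter than the floor."""
    return [
        system
        for system, oldest in oldest_retrievable_log.items()
        if as_of - oldest < MIN_RETENTION
    ]

for system in retention_gaps(datetime(2026, 8, 2)):
    print(f"Check retention evidence for: {system}")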

The fifth gap is incident response. Not the generic cyber playbook on the intranet, but evidence that the organization knows what to do when a high-risk AI system produces risky or unexplained outputs in production.

Article 26(5) creates two practical evidence questions. First, if a deployer has reason to believe that use of the high-risk AI system may pose a risk, it must inform the provider or distributor and the relevant market surveillance authority without undue delay and suspend use. Second, where a serious incident is identified, the deployer must immediately inform the provider, and then the importer or distributor and the relevant market surveillance authorities.

When I ask for the escalation path, the suspension threshold or a template for recording the event, the room often goes quiet.
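A one-page template goes a long way here. The sketch below lists the fields I would expect such a record to hold; the structure and the example incident are hypothetical.

from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class IncidentRecord:
    system_name: str
    detected_at: datetime
    description: str                           # what the system did and why it raised concern
    assessment: str                            # e.g. "suspected risk" or "serious incident"
    use_suspended: bool                        # whether use of the system was suspended
    provider_notified_at: Optional[datetime]   # when the provider was informed
    authority_notified_at: Optional[datetime]  # when the market surveillance authority was informed
    owner: str                                 # who owns the notifications and follow-up

record = IncidentRecord(
    system_name="Claims routing model",
    detected_at=datetime(2026, 9, 3, 14, 20),
    description="Cluster of claims routed to the denial queue with no clear rationale",
    assessment="suspected risk",
    use_suspended=True,
    provider_notified_at=datetime(2026, 9, 3, 16, 5),
    authority_notified_at=None,
    owner="AI governance lead",
)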

There is a related blind spot worth flagging. The fundamental rights impact assessment under Article 27 does not apply to every deployer, but it is not limited to public-sector bodies either. It also applies to private entities providing public services and to deployers of certain Annex III point 5(b) and 5(c) systems, including creditworthiness and life and health insurance use cases. Where a data protection impact assessment already covers part of the ground, the fundamental rights impact assessment complements it rather than replaces it.

The Digital Omnibus could change the timeline, but current law still controls

Some teams have heard the European Commission's Digital Omnibus proposal on AI could shift the Annex III timetable. That is possible, but it is not current law.

The Commission proposal would link the application of certain high-risk rules to the availability of support measures, such as harmonized standards, common specifications or Commission guidelines. The Commission lists the AI simplification proposal as being in the co-legislative process. Recent negotiations do not alter that legal status unless and until an amending regulation is adopted and published in the Official Journal of the European Union.

For readiness planning, the current AI Act timetable still points to 2 Aug. 2026 for Annex III high-risk deployer obligations unless and until the legal text changes. Treating a proposal as a current rule is a poor compliance bet.

A triage for resource-constrained teams

Most mid-market compliance teams cannot do everything at once. My recommendation is to work in sequence, not in parallel.

Start with the inventory. Everything else depends on it. Document the classification rationale for each system that could plausibly fall into Annex III. Assign and document named human oversight for the highest-risk deployments. Then test whether monitoring, log retention and incident escalation work in a way that can be evidenced.

For teams that need a public starting point, the Agencia Española de Supervisión de la Inteligencia Artificial's 16 practical guides from December 2025 are worth reviewing. These guides should be treated as non-binding implementation aids, not as substitutes for the EU AI Act, harmonized standards or organization-specific legal analysis.

The companies lagging behind are not always the ones that misunderstood the law. They are often the ones that never turned it into artifacts: a register, a classification rationale, an oversight assignment, a log retention setting and an escalation record.

The gap is rarely legal theory. It is missing operational proof.

 



Tags:

AI literacy, AI and machine learning, Law and regulation, Program management, EU AI Act, AI governance
