The deadline is close, and most teams are not ready

The main enforcement date for the EU AI Act is 2 August 2026. That is four months away.

That sounds like enough time until you look at the numbers. Vision Compliance’s 2026 EU AI Act Readiness Report says 78% of enterprises are unprepared for the obligations coming into force this year. Among the enterprises assessed, 83% had no formal inventory of the AI systems they use or deploy, and 74% lacked a designated internal owner or governance body for AI compliance.

The common failure mode is not bad intent. It is confusion.

Legal teams read the Act as a compliance exercise. Engineering teams read it as a policy document. Product teams assume someone else owns the problem. Meanwhile, the systems that may fall under the AI Act keep shipping.

This article is not a legal memo. It is a technical checklist for engineering teams that build, deploy, or integrate AI in the EU, or serve EU users with AI output. The goal is simple: understand what needs to be done before August 2026, and what can be started immediately.

What the AI Act actually asks technical teams to do

The AI Act is risk-based. That matters because the obligations are very different depending on the system.

For high-risk AI systems, the regulation requires technical work in six areas that engineering teams actually own:

| Requirement | What the Act expects | Articles |
| --- | --- | --- |
| Risk management | Recurring process for identifying, assessing, and reducing harm throughout the product lifecycle | Art. 9 |
| Data governance | Data quality criteria that reduce bias, errors, and relevance problems. Documented data lineage | Art. 10 |
| Technical documentation | Detailed enough for internal review and external scrutiny. Living artifact, not a legal appendix | Art. 11 |
| Logging and traceability | Input/output logging, model versioning, decision traces, human override tracking, retention rules | Art. 12 |
| Human oversight | Real, not ceremonial. Clear escalation paths, override capability, operator training | Art. 14 |
| Accuracy, robustness, cybersecurity | Offline and online evaluation, drift monitoring, adversarial testing, access control | Art. 15 |

Source: Bundesnetzagentur high-risk AI systems overview

If your system is high-risk, compliance is not a policy PDF. It is an engineering program that touches architecture, evaluation pipelines, observability, QA processes, model documentation, incident handling, access control, human-in-the-loop workflows, and vendor management.

What counts as high-risk

This is the first question every team should answer, because everything else depends on it.

Article 6 says a system is high-risk if it is either a safety component of a regulated product under Annex I, or listed in Annex III.

Annex III is the practical list most engineering teams should review first.

Annex III categories

| Category | Example use cases |
| --- | --- |
| Biometrics | Remote biometric identification, biometric categorization, emotion recognition |
| Critical infrastructure | Safety components for power, water, gas, heating, road traffic, digital infrastructure |
| Education and vocational training | Admission, student placement, grading, proctoring, learning outcome evaluation |
| Employment and workers’ management | Hiring, candidate filtering, task allocation, promotion, termination, performance monitoring |
| Essential private and public services | Benefit eligibility, credit scoring, insurance pricing, emergency triage |
| Law enforcement | Risk assessment, evidence evaluation, polygraph-like tools, profiling in criminal justice |
| Migration, asylum, border control | Risk assessments, document and application support, identification |
| Administration of justice and democratic processes | Judicial assistance, systems influencing elections or voting behavior |

Source: EU AI Act, Annex III

A practical way to read Annex III

Your system may be high-risk if it is used to make, support, or materially shape decisions about people in contexts such as hiring, education, credit, insurance, public benefits, identity verification, law enforcement, migration, or safety-critical infrastructure.

If your product ranks, filters, scores, recommends, flags, or automates decisions in one of those contexts, do not assume you are out of scope.
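As a first-pass triage, not a legal determination, that reading can be encoded as a simple screen. A minimal sketch in Python; the context and action vocabularies are illustrative assumptions drawn from the Annex III table above, not an official taxonomy:

```python
# Hypothetical triage helper: flags systems that may fall under Annex III.
# Illustrative only -- real classification needs legal review and a
# documented decision under Article 6(4).

# Decision contexts loosely matching Annex III categories (non-exhaustive)
ANNEX_III_CONTEXTS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "credit", "insurance", "public_benefits", "law_enforcement",
    "migration", "justice",
}

# Verbs that suggest the system makes, supports, or shapes decisions
DECISION_ACTIONS = {"rank", "filter", "score", "recommend", "flag", "automate"}

def may_be_high_risk(contexts: set[str], actions: set[str]) -> bool:
    """True if the system touches an Annex III context AND acts on
    decisions about people in it -- i.e. it needs a real assessment."""
    return bool(contexts & ANNEX_III_CONTEXTS) and bool(actions & DECISION_ACTIONS)

# Example: a CV-screening feature that ranks and filters candidates
print(may_be_high_risk({"employment"}, {"filter", "rank"}))  # flags for review
```

A `True` here does not mean the system is high-risk; it means someone must do the Article 6 analysis and write it down.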

The carve-out and its limits

Article 6(3) includes a limited exemption: an Annex III system is not high-risk if it does not pose a significant risk of harm to health, safety, or fundamental rights. Four conditions can trigger this exemption: the system performs a narrow procedural task, improves the result of a prior human activity, detects patterns without replacing human assessment, or performs a preparatory task.

Critical exception: systems that perform profiling of natural persons are always classified as high-risk, regardless of whether any of the four conditions apply.

The provider must document this classification before placing the system on the market (Article 6(4)) and register the assessment under Article 49(2). “We think it is low risk” is not enough. You need a documented, defensible classification decision.

The technical checklist teams can start this week

Here is the part most teams need: the work that is realistic now.

| Step | Action | Why it matters |
| --- | --- | --- |
| 1 | Build a system inventory. List every AI use case with owner, purpose, users, jurisdictions, data sources, vendors, and deployment location | You cannot classify risk if you do not know where AI is used |
| 2 | Classify each system against Annex III. Decide: clearly out of scope, possibly in scope, likely high-risk, or definitely high-risk. Document the reasoning | The classification drives every other obligation |
| 3 | Write an intended-use statement. Define what the system is for, what it is not for, who may use it, which decisions it supports, which it must not make | One of the fastest ways to expose risky misuse |
| 4 | Add evaluation gates before release. Make releases fail if checks are missing: performance thresholds, fairness checks, bias tests, logging verification, owner sign-off | If compliance is not a release gate, it will be skipped under pressure |
| 5 | Define bias and error testing by use case. False positive/negative rates, subgroup performance, calibration, confusion matrix by cohort, stability across language and region | The point is to test harm, not just accuracy |
| 6 | Build audit logs that work. Log request/response metadata, model version, decision outcome, reviewer actions, override actions, timestamps, exceptions | Logging should answer: what happened, when, and why |
| 7 | Create a human oversight workflow. Document when humans review, who reviews, how exceptions escalate, what authority reviewers have, how they are trained | A policy on a wiki is not proof of oversight. Test the workflow |
| 8 | Write a model card or system card. Purpose, scope, datasets, metrics, limitations, known risks, fallback behavior, monitoring plan | Makes cross-functional review possible |
| 9 | Assign ownership. Engineering owner, product owner, legal/compliance reviewer, security reviewer, operations owner | If ownership is diffuse, nothing gets maintained |
| 10 | Prepare an incident response path. Cover user complaints, harmful outputs, model regressions, security incidents, regulatory inquiries | Include who can freeze a release and how a system is shut off |
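Steps 4 and 5 are the most mechanical to wire in. A minimal sketch of a release gate, assuming an evaluation report shaped like the dictionary below; the check names and thresholds are illustrative, set them per use case:

```python
# Sketch of step 4: a release gate that fails the build when required
# checks are missing or below threshold. Check names and thresholds
# are illustrative assumptions -- tune them per use case (step 5).

def release_gate(report: dict) -> list[str]:
    """Return blocking failures; an empty list means the gate passes."""
    failures = []
    required = ["accuracy", "subgroup_fpr_gap", "logging_verified", "owner_signoff"]
    for key in required:
        if key not in report:
            failures.append(f"missing check: {key}")
    if report.get("accuracy", 0.0) < 0.90:
        failures.append("accuracy below 0.90 threshold")
    # Step 5: compare false positive rates across cohorts, not just averages
    if report.get("subgroup_fpr_gap", 1.0) > 0.05:
        failures.append("false positive rate gap across cohorts exceeds 0.05")
    if not report.get("logging_verified", False):
        failures.append("audit logging not verified")
    if not report.get("owner_signoff", False):
        failures.append("no owner sign-off")
    return failures

# Wire this into CI so that a non-empty list fails the pipeline
assert release_gate({"accuracy": 0.94, "subgroup_fpr_gap": 0.02,
                     "logging_verified": True, "owner_signoff": True}) == []
```

The design point is that the gate checks for *presence* as well as thresholds: a missing fairness metric blocks the release just like a failing one.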

A simple operating model

| Layer | What to do |
| --- | --- |
| Product | Define intended use, prohibited use, and risk class |
| Engineering | Implement logging, oversight, testing, and safeguards |
| Security | Review access, secrets, model supply chain, and abuse paths |
| Compliance | Verify classification, documentation, and accountability |
| Operations | Monitor incidents, complaints, and release drift |

You do not need a perfect governance program before you begin. You need a minimum viable control plane for AI.

What happens if teams do nothing

The AI Act has a serious penalty regime:

| Violation | Maximum fine |
| --- | --- |
| Prohibited practices (Article 5) | 35 million EUR or 7% of total worldwide annual turnover, whichever is higher |
| Other AI Act obligations | 15 million EUR or 3% of turnover |
| Supplying incorrect or misleading information | 7.5 million EUR or 1% of turnover |

For SMEs, the lower of the two amounts applies.
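The “whichever is higher” arithmetic is worth making concrete, because the percentage dominates quickly at scale. A small illustration (the function is ours, not from the Act):

```python
# Illustrative arithmetic for the penalty table above: the applicable
# maximum is the higher of the fixed amount and the turnover percentage,
# or the lower of the two for SMEs.

def max_fine(fixed_eur: float, pct: float, turnover_eur: float,
             sme: bool = False) -> float:
    """Maximum fine for one violation tier, given worldwide annual turnover."""
    pick = min if sme else max
    return pick(fixed_eur, pct * turnover_eur)

# Article 5 violation at 1 billion EUR worldwide turnover:
# 7% of turnover (70 million EUR) exceeds the 35 million EUR floor
print(max_fine(35_000_000, 0.07, 1_000_000_000))
```

At 1 billion EUR turnover, the 7% branch already doubles the fixed amount; for large providers the percentage is the binding number.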

The practical risk goes beyond fines. If a system is found non-compliant, teams may face blocked deployments, forced redesigns, vendor replacement, delayed procurement, customer audits, enterprise sales friction, reputational damage, and internal trust loss. For B2B engineering teams, “we need to pause rollout” is often more expensive than the fine itself.

The territorial reach is broad

The AI Act is not only for EU-incorporated companies. Article 2 applies to providers placing AI systems on the EU market even when they are established outside the EU; to deployers located in the EU; to providers and deployers in third countries whose system output is used in the EU; to importers and distributors; and to situations involving affected persons located in the EU.

That is the GDPR lesson all over again. If your system touches the EU market or EU users, “we are based in the US” is not a shield.

What already applies since February 2025

Not everything waits until August 2026.

The AI Act’s prohibited practices under Article 5 became applicable on 2 February 2025. Penalty enforcement for Article 5 violations followed on 2 August 2025.

Prohibited practices include manipulative and deceptive techniques, exploitation of vulnerabilities, social scoring, certain biometric data scraping, emotion inference in workplaces and schools, and some forms of real-time remote biometric identification for law enforcement.

If your company is doing anything close to biometric scraping, emotion detection in workplace or education contexts, behavioral manipulation, social scoring, or prohibited law enforcement uses, the relevant risk is already here.

Real-world interpretation is still messy

Not everything is settled. The European Commission was supposed to publish guidelines and practical examples for Article 6 classification by 2 February 2026. That deadline was missed. As of early 2026, the Commission was still integrating feedback, with IAPP reporting that final guidance may not land until spring 2026.

What is still unclear

| Open question | Why it matters |
| --- | --- |
| How strictly Annex III edge cases will be interpreted | Borderline systems need a defensible classification now |
| How national authorities will prioritize enforcement | Enforcement style will differ across member states |
| What counts as “significant risk of harm” in borderline cases | Affects the Article 6(3) carve-out |
| How much documentation is enough for low-risk exemptions | Determines the compliance burden for edge cases |
| How vendor and customer responsibilities split in complex AI chains | Multi-party deployments create shared liability |
| How much evidence regulators will expect for bias testing | Sets the testing bar for engineering teams |
| How much human oversight is “meaningful” in practice | Distinguishes real oversight from checkbox compliance |

What that means for engineering teams

Do not wait for perfect guidance. Build for the most defensible interpretation you can support: document your classification, keep your evidence, test your system, make decisions reviewable, make overrides possible, and keep humans in the loop where required.

That will not solve every ambiguity, but it will put you in a much better position than improvisation.

Bottom line

The EU AI Act is no longer a distant policy topic. For many teams, it is a product engineering deadline.

If you build AI systems that touch hiring, credit, education, benefits, identity, infrastructure, or other consequential decisions, August 2026 should already be on the roadmap. The teams that wait for a legal memo will lose time. The teams that start with inventory, classification, logging, testing, and oversight will be in a much better position.

The law is broad, the penalties are real, and the technical work is knowable.

Start with the systems you already ship. Then make them auditable.

Sources

| Source | URL |
| --- | --- |
| EU AI Act implementation timeline | ai-act-service-desk.ec.europa.eu |
| Vision Compliance, 2026 Readiness Report (78% unprepared) | natlawreview.com |
| Article 5: Prohibited AI practices | artificialintelligenceact.eu |
| Article 6: High-risk classification rules | ai-act-service-desk.ec.europa.eu |
| Annex III: High-risk AI system categories | ai-act-service-desk.ec.europa.eu |
| Article 99: Penalties | artificialintelligenceact.eu |
| Article 2: Territorial scope | artificialintelligenceact.eu |
| Bundesnetzagentur, High-risk AI systems overview | bundesnetzagentur.de |
| IAPP, Commission misses Art. 6 guidance deadline | iapp.org |