The deadline is close, and most teams are not ready
The main enforcement date for the EU AI Act is 2 August 2026. That is four months away.
That sounds like plenty of time until you look at the numbers. Vision Compliance’s 2026 EU AI Act Readiness Report says 78% of enterprises are unprepared for the obligations coming into force this year. Among the enterprises assessed, 83% had no formal inventory of the AI systems they use or deploy, and 74% lacked a designated internal owner or governance body for AI compliance.
The common failure mode is not bad intent. It is confusion.
Legal teams read the Act as a compliance exercise. Engineering teams read it as a policy document. Product teams assume someone else owns the problem. Meanwhile, the systems that may fall under the AI Act keep shipping.
This article is not a legal memo. It is a technical checklist for engineering teams that build, deploy, or integrate AI in the EU, or serve EU users with AI output. The goal is simple: understand what needs to be done before August 2026, and what can be started immediately.
What the AI Act actually asks technical teams to do
The AI Act is risk-based. That matters because the obligations are very different depending on the system.
For high-risk AI systems, the regulation requires technical work in six areas that engineering teams actually own:
| Requirement | What the Act expects | Articles |
|---|---|---|
| Risk management | Recurring process for identifying, assessing, and reducing harm throughout the product lifecycle | Art. 9 |
| Data governance | Data quality criteria that reduce bias, errors, and relevance problems. Documented data lineage | Art. 10 |
| Technical documentation | Detailed enough for internal review and external scrutiny. Living artifact, not a legal appendix | Art. 11 |
| Logging and traceability | Input/output logging, model versioning, decision traces, human override tracking, retention rules | Art. 12 |
| Human oversight | Real, not ceremonial. Clear escalation paths, override capability, operator training | Art. 14 |
| Accuracy, robustness, cybersecurity | Offline and online evaluation, drift monitoring, adversarial testing, access control | Art. 15 |
Source: Bundesnetzagentur high-risk AI systems overview
If your system is high-risk, compliance is not a policy PDF. It is an engineering program that touches architecture, evaluation pipelines, observability, QA processes, model documentation, incident handling, access control, human-in-the-loop workflows, and vendor management.
What counts as high-risk
This is the first question every team should answer, because everything else depends on it.
Article 6 says a system is high-risk if it is either a safety component of a product covered by the EU harmonisation legislation in Annex I (where that product must undergo third-party conformity assessment), or used in one of the areas listed in Annex III.
Annex III is the practical list most engineering teams should review first.
Annex III categories
| Category | Example use cases |
|---|---|
| Biometrics | Remote biometric identification, biometric categorization, emotion recognition |
| Critical infrastructure | Safety components for power, water, gas, heating, road traffic, digital infrastructure |
| Education and vocational training | Admission, student placement, grading, proctoring, learning outcome evaluation |
| Employment and workers’ management | Hiring, candidate filtering, task allocation, promotion, termination, performance monitoring |
| Essential private and public services | Benefit eligibility, credit scoring, insurance pricing, emergency triage |
| Law enforcement | Risk assessment, evidence evaluation, polygraph-like tools, profiling in criminal justice |
| Migration, asylum, border control | Risk assessments, document and application support, identification |
| Administration of justice and democratic processes | Judicial assistance, systems influencing elections or voting behavior |
Source: EU AI Act, Annex III
A practical way to read Annex III
Your system may be high-risk if it is used to make, support, or materially shape decisions about people in contexts such as hiring, education, credit, insurance, public benefits, identity verification, law enforcement, migration, or safety-critical infrastructure.
If your product ranks, filters, scores, recommends, flags, or automates decisions in one of those contexts, do not assume you are out of scope.
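One way to make that triage repeatable is to encode it as a deliberately conservative screen that routes anything borderline to formal classification. Here is a minimal sketch in Python, assuming context tags and behavior verbs of our own invention; they are illustrative labels, not legal categories.

```python
# First-pass Annex III triage, encoded as data. The context tags and
# behavior verbs below are illustrative assumptions, not legal definitions.

ANNEX_III_CONTEXTS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration", "justice",
}

DECISION_BEHAVIORS = {"ranks", "filters", "scores", "recommends", "flags", "automates"}

def needs_legal_review(contexts: set[str], behaviors: set[str]) -> bool:
    """Flag a system for formal classification if it touches an Annex III
    context AND shapes decisions about people. Errs toward review."""
    return bool(contexts & ANNEX_III_CONTEXTS) and bool(behaviors & DECISION_BEHAVIORS)

# Example: a CV-screening feature that scores and filters applicants.
print(needs_legal_review({"employment"}, {"scores", "filters"}))  # True -> escalate
```

The point is not to automate a legal judgment. It is to make sure no system skips the review queue.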
The carve-out and its limits
Article 6(3) includes a limited exemption: an Annex III system is not high-risk if it does not pose a significant risk of harm to health, safety, or fundamental rights. Four conditions can trigger this exemption: the system performs a narrow procedural task, improves the result of a prior human activity, detects patterns without replacing human assessment, or performs a preparatory task.
Critical exception: systems that perform profiling of natural persons are always classified as high-risk, regardless of whether any of the four conditions apply.
The provider must document this classification before placing the system on the market (Article 6(4)) and register the assessment under Article 49(2). “We think it is low risk” is not enough. You need a documented, defensible classification decision.
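One lightweight way to get to “documented and defensible” is to keep each classification decision as a structured record versioned next to the code. A sketch follows; the schema and field names are our own assumptions, not a format the Act prescribes.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ClassificationDecision:
    """Hypothetical record of an Article 6 classification decision.
    The fields are our own; the Act requires documentation, not a schema."""
    system_name: str
    decision: str                    # "high_risk" | "exempt_art_6_3" | "out_of_scope"
    annex_iii_category: str | None   # e.g. "employment"
    involves_profiling: bool         # profiling of natural persons => always high-risk
    exemption_condition: str | None  # which Art. 6(3) condition applies, if any
    rationale: str
    decided_by: str
    decided_on: date = field(default_factory=date.today)

decision = ClassificationDecision(
    system_name="cv-ranker",
    decision="high_risk",
    annex_iii_category="employment",
    involves_profiling=True,
    exemption_condition=None,
    rationale="Ranks candidates for hiring; profiling rules out the exemption.",
    decided_by="ml-platform-team + legal",
)
```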
The technical checklist teams can start this week
Here is the part most teams need: the work that is realistic now.
| Step | Action | Why it matters |
|---|---|---|
| 1 | Build a system inventory. List every AI use case with owner, purpose, users, jurisdictions, data sources, vendors, and deployment location | You cannot classify risk if you do not know where AI is used |
| 2 | Classify each system against Annex III. Decide: clearly out of scope, possibly in scope, likely high-risk, or definitely high-risk. Document the reasoning | The classification drives every other obligation |
| 3 | Write an intended-use statement. Define what the system is for, what it is not for, who may use it, which decisions it supports, which it must not make | One of the fastest ways to expose risky misuse |
| 4 | Add evaluation gates before release. Make releases fail if checks are missing: performance thresholds, fairness checks, bias tests, logging verification, owner sign-off (see the sketch after this table) | If compliance is not a release gate, it will be skipped under pressure |
| 5 | Define bias and error testing by use case. False positive/negative rates, subgroup performance, calibration, confusion matrix by cohort, stability across language and region (sketch below) | The point is to test harm, not just accuracy |
| 6 | Build audit logs that work. Log request/response metadata, model version, decision outcome, reviewer actions, override actions, timestamps, exceptions (sketch below) | Logging should answer: what happened, when, and why |
| 7 | Create a human oversight workflow. Document when humans review, who reviews, how exceptions escalate, what authority reviewers have, how they are trained (sketch below) | A policy on a wiki is not proof of oversight. Test the workflow |
| 8 | Write a model card or system card. Purpose, scope, datasets, metrics, limitations, known risks, fallback behavior, monitoring plan | Makes cross-functional review possible |
| 9 | Assign ownership. Engineering owner, product owner, legal/compliance reviewer, security reviewer, operations owner | If ownership is diffuse, nothing gets maintained |
| 10 | Prepare an incident response path. Cover user complaints, harmful outputs, model regressions, security incidents, regulatory inquiries | Include who can freeze a release and how a system is shut off |
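To make step 4 concrete, here is a minimal fail-closed release gate: if any check is missing or failing, the release stops. The check names, thresholds, and result format are illustrative assumptions, not a standard.

```python
import sys

# Hypothetical gate results, e.g. collected from a CI evaluation pipeline.
# Names and thresholds are illustrative; a real gate would load these
# from your pipeline's output instead of hardcoding them.
checks = {
    "performance_auc":        {"passed": True,  "value": 0.91, "min": 0.85},
    "subgroup_fpr_gap":       {"passed": True,  "value": 0.03, "max": 0.05},
    "bias_tests_ran":         {"passed": True},
    "audit_logging_verified": {"passed": False},  # missing -> blocks release
    "owner_signoff":          {"passed": True},
}

failed = [name for name, result in checks.items() if not result.get("passed")]
if failed:
    # Fail closed: a missing or failed check blocks the release.
    print(f"RELEASE BLOCKED, failed checks: {', '.join(failed)}")
    sys.exit(1)
print("Release gate passed.")
```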
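For step 5, per-cohort error rates are usually the first harm-oriented metric to wire in. A self-contained sketch, with hypothetical field names:

```python
# Step 5 sketch: per-cohort error rates from labeled evaluation data.
# Field names ("cohort", "label", "pred") are assumptions for illustration.
from collections import defaultdict

def error_rates_by_cohort(rows):
    """Return false-positive and false-negative rates per cohort.
    Each row: {"cohort": str, "label": 0 or 1, "pred": 0 or 1}."""
    counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
    for r in rows:
        c = counts[r["cohort"]]
        if r["label"] == 1:
            c["pos"] += 1
            c["fn"] += r["pred"] == 0
        else:
            c["neg"] += 1
            c["fp"] += r["pred"] == 1
    return {
        cohort: {
            "fpr": c["fp"] / c["neg"] if c["neg"] else None,
            "fnr": c["fn"] / c["pos"] if c["pos"] else None,
        }
        for cohort, c in counts.items()
    }

rows = [
    {"cohort": "region_a", "label": 1, "pred": 1},
    {"cohort": "region_a", "label": 0, "pred": 1},
    {"cohort": "region_b", "label": 1, "pred": 0},
    {"cohort": "region_b", "label": 0, "pred": 0},
]
print(error_rates_by_cohort(rows))
# {'region_a': {'fpr': 1.0, 'fnr': 0.0}, 'region_b': {'fpr': 0.0, 'fnr': 1.0}}
```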
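For step 6, an append-only JSON-lines record per decision is a common baseline. The field names below are our own; Article 12 asks for traceability, not any particular schema.

```python
import json
import uuid
from datetime import datetime, timezone

def audit_record(model_version, request_meta, decision, reviewer=None, override=None):
    """Build one traceability record. Log metadata, not raw personal data."""
    return {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "request": request_meta,
        "decision": decision,
        "reviewer": reviewer,   # who reviewed the decision, if anyone
        "override": override,   # human override of the model output, if any
    }

# Append one record per decision to an audit log (JSON lines).
with open("audit.log", "a") as f:
    f.write(json.dumps(audit_record(
        model_version="ranker-2026.03.1",
        request_meta={"request_id": "r-1042", "jurisdiction": "DE"},
        decision={"outcome": "shortlist", "score": 0.87},
    )) + "\n")
```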
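And for step 7, encoding the escalation rules as code makes the oversight workflow testable rather than aspirational. The threshold, flags, and route names here are illustrative assumptions:

```python
def route_decision(score: float, flags: set[str]) -> str:
    """Decide whether a model output ships, gets human review, or escalates.
    The threshold and flag names are illustrative, not from the Act."""
    if "prohibited_context" in flags:
        return "block_and_escalate_to_compliance"
    if score < 0.6 or "low_confidence" in flags:
        return "human_review_required"
    return "auto_approve_with_spot_check"

assert route_decision(0.9, set()) == "auto_approve_with_spot_check"
assert route_decision(0.4, set()) == "human_review_required"
assert route_decision(0.9, {"prohibited_context"}) == "block_and_escalate_to_compliance"
```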
A simple operating model
| Layer | What to do |
|---|---|
| Product | Define intended use, prohibited use, and risk class |
| Engineering | Implement logging, oversight, testing, and safeguards |
| Security | Review access, secrets, model supply chain, and abuse paths |
| Compliance | Verify classification, documentation, and accountability |
| Operations | Monitor incidents, complaints, and release drift |
You do not need a perfect governance program before you begin. You need a minimum viable control plane for AI.
What happens if teams do nothing
The AI Act has a serious penalty regime:
| Violation | Maximum fine |
|---|---|
| Prohibited practices (Article 5) | 35 million EUR or 7% of total worldwide annual turnover, whichever is higher |
| Other AI Act obligations | 15 million EUR or 3% of total worldwide annual turnover, whichever is higher |
| Supplying incorrect or misleading information | 7.5 million EUR or 1% of total worldwide annual turnover, whichever is higher |
For SMEs and startups, the lower of the two amounts applies; for everyone else, the higher. To make that concrete: a company with 1 billion EUR in worldwide annual turnover faces a cap of 70 million EUR (7%) for a prohibited-practice violation, not 35 million.
The practical risk goes beyond fines. If a system is found non-compliant, teams may face blocked deployments, forced redesigns, vendor replacement, delayed procurement, customer audits, enterprise sales friction, reputational damage, and internal trust loss. For B2B engineering teams, “we need to pause rollout” is often more expensive than the fine itself.
The territorial reach is broad
The AI Act is not only for EU-incorporated companies. Article 2 applies to providers placing AI systems on the EU market even if established outside the EU, deployers in the EU, providers or deployers in third countries where the output is used in the EU, importers, distributors, and situations involving affected persons located in the EU.
That is the GDPR lesson all over again. If your system touches the EU market or EU users, “we are based in the US” is not a shield.
What already applies since February 2025
Not everything waits until August 2026.
The AI Act’s prohibited practices under Article 5 became applicable on 2 February 2025. Penalty enforcement for Article 5 violations followed on 2 August 2025.
Prohibited practices include manipulative and deceptive techniques, exploitation of vulnerabilities, social scoring, certain biometric data scraping, emotion inference in workplaces and schools, and some forms of real-time remote biometric identification for law enforcement.
If your company is doing anything close to biometric scraping, emotion detection in workplace or education contexts, behavioral manipulation, social scoring, or prohibited law enforcement uses, the relevant risk is already here.
Real-world interpretation is still messy
Not everything is settled. The European Commission was supposed to publish guidelines and practical examples for Article 6 classification by 2 February 2026. That deadline was missed. As of early 2026, the Commission was still integrating feedback, with IAPP reporting that final guidance may not land until spring 2026.
What is still unclear
| Open question | Why it matters |
|---|---|
| How strictly Annex III edge cases will be interpreted | Borderline systems need a defensible classification now |
| How national authorities will prioritize enforcement | Enforcement style will differ across member states |
| What counts as “significant risk of harm” in borderline cases | Affects the Article 6(3) carve-out |
| How much documentation is enough for low-risk exemptions | Determines the compliance burden for edge cases |
| How vendor and customer responsibilities split in complex AI chains | Multi-party deployments create shared liability |
| How much evidence regulators will expect for bias testing | Sets the testing bar for engineering teams |
| How much human oversight is “meaningful” in practice | Distinguishes real oversight from checkbox compliance |
What that means for engineering teams
Do not wait for perfect guidance. Build for the most defensible interpretation you can support: document your classification, keep your evidence, test your system, make decisions reviewable, make overrides possible, and keep humans in the loop where required.
That will not resolve every ambiguity, but it will put you in a far better position than teams that improvise.
Bottom line
The EU AI Act is no longer a distant policy topic. For many teams, it is a product engineering deadline.
If you build AI systems that touch hiring, credit, education, benefits, identity, infrastructure, or other consequential decisions, August 2026 should already be on the roadmap. The teams that wait for a legal memo will lose time. The teams that start with inventory, classification, logging, testing, and oversight will be in a much better position.
The law is broad, the penalties are real, and the technical work is knowable.
Start with the systems you already ship. Then make them auditable.
Sources
| Source | URL |
|---|---|
| EU AI Act implementation timeline | ai-act-service-desk.ec.europa.eu |
| Vision Compliance, 2026 Readiness Report (78% unprepared) | natlawreview.com |
| Article 5: Prohibited AI practices | artificialintelligenceact.eu |
| Article 6: High-risk classification rules | ai-act-service-desk.ec.europa.eu |
| Annex III: High-risk AI system categories | ai-act-service-desk.ec.europa.eu |
| Article 99: Penalties | artificialintelligenceact.eu |
| Article 2: Territorial scope | artificialintelligenceact.eu |
| Bundesnetzagentur, High-risk AI systems overview | bundesnetzagentur.de |
| IAPP, Commission misses Art. 6 guidance deadline | iapp.org |