ComplyAI Blog

How to Classify Your AI System Under the EU AI Act

Published 2026-02-16 · 12 min read

Most EU AI Act compliance mistakes begin with incorrect classification. If you label a high-risk system as low risk, every downstream control fails: documentation, transparency, testing, governance, and procurement responses. For SMEs, classification is the single highest-leverage step because it determines scope, timeline, and budget.

This guide gives a practical flow you can use with product, legal, and engineering teams.

Classification flow: Unacceptable → High → Limited → Minimal

Step 1: Is the use case prohibited?
If the system enables banned practices (for example manipulative exploitation causing harm, prohibited social scoring, or disallowed biometric categorization contexts), stop deployment. This is Unacceptable Risk.

Step 2: If not prohibited, does it fall into high-risk categories?
Check Article 6 and Annex III use cases such as employment, education access, essential private/public services, law enforcement contexts, migration/border uses, and justice-related support. If yes, treat as High Risk.

Step 3: If not high-risk, are transparency obligations triggered?
If users interact with AI (for example, chatbots), see or hear synthetic content, or are subject to emotion recognition or biometric categorization that must be disclosed, you likely have Limited Risk with transparency duties (Article 50 of the adopted Act; Article 52 in the 2021 draft).

Step 4: If none of the above apply, it is likely Minimal Risk.
Examples include benign automation with negligible rights impact, such as spam filtering or routine workflow optimization. Voluntary best practices still matter.
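To make the four-step flow concrete, here is a minimal sketch in Python of how a team might record the decision order in tooling. The flag names (is_prohibited, is_annex_iii, triggers_transparency) are our own illustrative shorthand, not official terminology, and the legal judgment behind each flag still belongs to humans.

    from dataclasses import dataclass

    @dataclass
    class UseCaseAssessment:
        """Answers from a human-led classification workshop (illustrative fields)."""
        is_prohibited: bool          # Step 1: banned practice, e.g. prohibited social scoring
        is_annex_iii: bool           # Step 2: Annex III / Article 6 high-risk category
        triggers_transparency: bool  # Step 3: chatbot, synthetic content, emotion inference

    def classify(a: UseCaseAssessment) -> str:
        """Apply the four steps in strict order; earlier steps take precedence."""
        if a.is_prohibited:
            return "Unacceptable Risk: stop deployment"
        if a.is_annex_iii:
            return "High Risk: full lifecycle obligations"
        if a.triggers_transparency:
            return "Limited Risk: transparency duties"
        return "Minimal Risk: voluntary best practices"

    # An LLM embedded in hiring screening trips Step 2 even though it also chats:
    print(classify(UseCaseAssessment(False, True, True)))  # High Risk: ...

Encoding the order makes precedence explicit: a prohibited use can never be "rescued" by transparency measures added further down the flow.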

Article 6: why context decides high-risk status

Article 6 is not a simple model taxonomy. A single model can have different risk classes depending on deployment context. An LLM generating marketing copy is usually not high-risk. The same LLM embedded into hiring screening or credit eligibility assistance can become high-risk due to impact on rights and opportunities.

This is why classification workshops must include business owners, not only engineers. You need to understand real decision pathways, not just technical architecture.
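A short self-contained sketch of the same point, with hypothetical context flags: one model, two deployments, two different classes.

    # Same model, different deployment contexts (illustrative flags, not legal terms).
    contexts = {
        "marketing_copy_generation": {"annex_iii": False, "transparency": True},
        "hiring_screening_assist":   {"annex_iii": True,  "transparency": True},
    }
    for name, c in contexts.items():
        risk = "High" if c["annex_iii"] else ("Limited" if c["transparency"] else "Minimal")
        print(f"{name}: {risk}")  # marketing copy -> Limited, hiring assist -> High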

Annex III: common SME triggers

SMEs frequently trigger Annex III through practical products, for example:

  - CV screening or candidate-ranking tools (employment)
  - Credit scoring or eligibility assistance (essential private services)
  - Admissions or exam-scoring support (access to education)

If your output significantly influences who gets access, money, employment, or care, escalate classification immediately.

Article 9 and Recital 47 mindset

High-risk classification activates lifecycle risk management obligations associated with Article 9 principles. Recital 47 reinforces the need to focus on foreseeable misuse and serious impacts, not only intended use. In practice, you should maintain a risk file with hazards, affected groups, mitigation controls, test evidence, and residual risk decisions signed by accountable owners.

For SMEs, this sounds heavy but can be managed with a lightweight governance cadence: monthly risk review, release gate checklist, and incident feedback loop.
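As one illustration, the risk file described above could live as structured records rather than prose documents. The schema below is a sketch of our own, not a format prescribed by the Act; all names and values are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class RiskFileEntry:
        """One hazard entry in a high-risk system's risk file (hypothetical schema)."""
        hazard: str                  # foreseeable harm, including misuse scenarios
        affected_groups: list[str]   # who could be harmed
        mitigations: list[str]       # controls in place
        test_evidence: list[str]     # IDs or links for test runs
        residual_risk: str           # e.g. "accepted" or "mitigate further"
        signed_off_by: str           # the accountable owner

    entry = RiskFileEntry(
        hazard="Screening model under-ranks candidates with career gaps",
        affected_groups=["job applicants"],
        mitigations=["human review of all rejections", "quarterly bias audit"],
        test_evidence=["fairness-test-2026-01"],
        residual_risk="accepted",
        signed_off_by="Head of Product",
    )

Structured entries also make the monthly review and release gate scriptable: a gate can refuse a release while any entry lacks sign-off.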

Article 50: limited-risk transparency done right

For many customer-facing applications, transparency is the primary legal duty. Users should know when they are interacting with AI and understand its key limits. Hidden automation can erode trust and create legal exposure even when no concrete harm occurs. Good transparency includes labels in the interface, plain-language notices, and easy escalation to human support.

A strong practice is to maintain a transparency register for every AI-enabled feature, including UI label, policy text, and fallback human process.
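A minimal sketch of what a register entry could look like in code, using the three components named above; the class and field names are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class TransparencyRegisterEntry:
        """One AI-enabled feature's transparency record (illustrative fields)."""
        feature: str
        ui_label: str        # what users see in the interface
        policy_text: str     # plain-language notice shown or linked
        human_fallback: str  # how a user reaches a person

    register = [
        TransparencyRegisterEntry(
            feature="support_chatbot",
            ui_label="You are chatting with an AI assistant",
            policy_text="Answers are AI-generated and may contain errors.",
            human_fallback="'Talk to a human' button routes to support staff",
        ),
    ]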

Classification worksheet your team can use

  1. Describe feature purpose in one sentence.
  2. List inputs and outputs, including personal/sensitive data.
  3. Define affected users and possible adverse impacts.
  4. Map to prohibited use checks.
  5. Map to Annex III categories and Article 6 criteria.
  6. Assess transparency obligations (Article 50).
  7. Assign provisional class and confidence level.
  8. Approve classification with legal + product owner sign-off.

Repeat this worksheet whenever feature scope changes. Classification is dynamic, not a one-time tag.
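If the worksheet lives in tooling rather than a document, completeness can be enforced before sign-off. A sketch, assuming one dict per feature with field names mirroring the eight steps (our naming, not anything official):

    # Illustrative completeness gate for the eight worksheet steps above.
    REQUIRED_FIELDS = [
        "purpose", "inputs_outputs", "affected_users", "prohibited_check",
        "annex_iii_mapping", "transparency_check", "provisional_class", "sign_off",
    ]

    def worksheet_complete(worksheet: dict) -> bool:
        """True only when every step has a non-empty answer."""
        return all(worksheet.get(field) for field in REQUIRED_FIELDS)

    draft = {"purpose": "Rank inbound support tickets", "provisional_class": "Minimal"}
    print(worksheet_complete(draft))  # False: not ready for approval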

Avoid these four classification errors

Error 1: classifying at model level only, ignoring business process impact.
Error 2: using marketing labels like "assistive only" while outputs still drive decisions.
Error 3: assuming vendor compliance automatically covers your deployment context.
Error 4: never revisiting classification after new features are added.

Correct classification protects your roadmap, improves customer trust, and reduces legal surprises. For 2026, it is one of the smartest investments an EU SME can make in product governance.

Advanced edge cases SMEs should review carefully

Decision support vs automated decision: many products claim to be "support only" while managers follow outputs by default. If the influence is systematically strong, regulators may treat the impact like automation. Assess real-world behavior, not marketing language about intended use.

Feature drift over time: A low-risk assistant can become higher risk as teams add integrations, scoring outputs, or workflow automation. Reclassification should be triggered by release events, not only annual policy reviews.
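One way to operationalize release-triggered reclassification is a gate that fires whenever a scope-changing flag appears in a release; the flag names below are illustrative.

    # Illustrative release gate: any scope-changing flag forces reclassification.
    SCOPE_CHANGING_FLAGS = {"new_integration", "adds_scoring_output", "automates_decision"}

    def needs_reclassification(release_flags: set[str]) -> bool:
        return bool(release_flags & SCOPE_CHANGING_FLAGS)

    assert needs_reclassification({"adds_scoring_output", "ui_refresh"})
    assert not needs_reclassification({"ui_refresh"})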

Combined systems: Individually low-risk modules can create higher risk when chained together in one process. Example: identity verification + behavioral score + onboarding decision. Evaluate end-to-end impact across the full user journey.

Cross-border deployments: If you serve multiple EU markets, ensure consistent classification rationale and localized transparency implementation. Inconsistent treatment across countries can create legal and reputational problems.

As a rule, when uncertainty is high, choose the more conservative class temporarily and document assumptions. You can always down-classify with stronger evidence later, but under-classification early is costly to unwind.

How classification affects contracts and sales cycles

Enterprise buyers increasingly ask suppliers to state AI risk class and provide supporting controls during vendor due diligence. If your team cannot answer clearly, procurement reviews slow down or stall. A documented classification rationale therefore has direct commercial impact beyond legal compliance.

For high-risk or borderline systems, prepare a concise customer-facing pack: use-case summary, classification decision, transparency controls, risk-management process, and escalation contacts. This can reduce repetitive questionnaire effort and improve trust with security and legal reviewers.

Teams that maintain classification evidence in a living register respond faster to changing product scope and customer questions. This agility is often a competitive advantage in regulated sectors where trust and speed both matter.

In short, classification is not paperwork. It is a core governance artifact that supports product safety, audit readiness, and revenue execution in EU markets.

Ready to simplify compliance?

ComplyAI helps SMEs map obligations, build checklists, and keep evidence in one place.

Try ComplyAI free