Framework — AI Act

AI governance starts with visibility

The EU AI Act introduces risk-based obligations for AI providers, deployers, and distributors. Sudory helps you inventory AI vendors, classify risk, and track compliance — across your organisation and your clients'.

Risk classification

Four levels of risk

The AI Act's risk-based approach determines what obligations apply. Classify your AI systems correctly — your compliance obligations depend on it.

Unacceptable risk

Prohibited AI practices — social scoring, real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions), and manipulative or exploitative techniques. These are banned outright.

Social scoring · Subliminal manipulation · Real-time biometric ID · Emotion recognition at work

High risk

AI systems in critical areas require conformity assessment, technical documentation, risk management, and human oversight. Most enterprise AI that touches hiring, lending, insurance, or essential services falls here.

Recruitment tools · Credit scoring · Insurance pricing · Critical infrastructure

Limited risk

Transparency obligations — users must be informed when interacting with AI. Chatbots, deepfakes, and emotion recognition systems need clear disclosure.

Chatbots · Deepfake generators · Emotion detection · AI-generated content

Minimal risk

No mandatory requirements, but voluntary codes of conduct are encouraged. Most AI applications fall in this category.

Spam filters · Game AI · Inventory management · Translation tools
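
To make the four tiers concrete, here is a minimal sketch of how they might be modelled in code. The tier names and obligation lists mirror the summary above; the types and lookup are illustrative assumptions, not Sudory's API.

```typescript
// Illustrative model of the AI Act's four risk tiers (hypothetical, not Sudory's API).
type RiskTier = "unacceptable" | "high" | "limited" | "minimal";

interface TierProfile {
  allowed: boolean;      // unacceptable-risk practices are banned outright
  obligations: string[]; // headline duties, per the summary above
}

const TIERS: Record<RiskTier, TierProfile> = {
  unacceptable: { allowed: false, obligations: [] },
  high: {
    allowed: true,
    obligations: [
      "conformity assessment",
      "technical documentation",
      "risk management system",
      "human oversight",
    ],
  },
  limited: { allowed: true, obligations: ["inform users they are interacting with AI"] },
  minimal: { allowed: true, obligations: [] }, // voluntary codes of conduct only
};

// Example: what does a recruitment tool (high risk) require?
console.log(TIERS["high"].obligations);
```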

Key articles

What the AI Act requires

The regulation establishes obligations for high-risk AI systems and assigns responsibilities across the AI supply chain.

Article 6

Classification of high-risk AI systems

AI systems that are safety components of regulated products, or that are used in the areas listed in Annex III, are high-risk. Annex III covers biometrics, critical infrastructure, employment, credit scoring, law enforcement, and border control.

Article 9

Risk management system

High-risk AI providers must establish a continuous risk management system. Sudory's risk register and policy engine map directly to this requirement.

Article 11

Technical documentation

High-risk AI systems require documentation before market placement. Training data, design choices, performance metrics, and known limitations — all must be recorded.

Articles 25–27

Supply chain obligations

Providers, deployers, importers, and distributors each have obligations. A deployer or distributor that rebrands a high-risk system, substantially modifies it, or changes its intended purpose takes on the provider's obligations.

Supply chain

Which role are you?

Like the EAA, the AI Act assigns obligations based on your role in the supply chain. Provider, deployer, or distributor — each has distinct responsibilities.

Provider

Develops or commissions an AI system and places it on the market under their name. If you build AI features into your product, you're the provider.

Risk management system · Technical documentation · Conformity assessment · Post-market monitoring · Incident reporting

Deployer

Uses an AI system under their authority, except in the course of a personal, non-professional activity. If you use third-party AI tools in your business processes, you're a deployer.

Human oversight · Monitor AI performance · Report incidents · Inform affected persons · Data protection impact assessment

Distributor / Importer

Makes AI systems from other providers available on the EU market. Similar to EAA economic operators — verify that providers have met their obligations before distributing.

Verify provider compliance · Check CE marking · Don't distribute non-compliant systems · Cooperate with authorities

Terminology

The AI Act's vocabulary

The AI Act introduces new terminology and redefines familiar concepts. Understanding these terms is essential for compliance — and for knowing what's coming.

AI system

A machine-based system designed to operate with varying levels of autonomy that infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.

General-purpose AI (GPAI)

AI models trained on broad data that can perform a wide range of tasks — GPT, Claude, and Gemini are examples. All GPAI models carry transparency obligations; those posing systemic risk carry additional ones.

Conformity assessment

The process of verifying that a high-risk AI system meets the regulation's requirements. Can be self-assessed for most systems, but some require third-party assessment.

Post-market monitoring

Providers must actively monitor their AI systems after deployment. Collect data on performance, incidents, and misuse — and update risk assessments accordingly.

AI literacy

Article 4 requires providers and deployers to ensure staff have sufficient AI literacy. Training must be proportionate to the risk level and context of use.

Regulatory sandbox

National authorities can set up controlled environments for testing innovative AI before full compliance. A way to innovate within regulatory bounds.

How Sudory helps

AI governance, operationalised

You can't govern AI you don't know about. Sudory discovers AI vendors, tracks classifications, and produces the compliance evidence the AI Act demands.

AI vendor inventory

Track which AI tools and services your organisation uses. Sudory's vendor directory identifies AI providers — their data processing regions, subprocessors, and compliance certifications.

Risk classification

Document which AI systems fall into which risk category. Sudory's compliance ledger records classification decisions with timestamps and justifications.
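
As an illustration of what a ledger entry might hold, here is a hedged sketch. The field names and record shape are assumptions for illustration, not Sudory's actual schema.

```typescript
// Hypothetical shape of a classification decision in a compliance ledger.
interface ClassificationRecord {
  systemName: string;    // the AI system being classified
  tier: "unacceptable" | "high" | "limited" | "minimal";
  justification: string; // why this tier applies (e.g. an Annex III area)
  decidedBy: string;     // who made the call
  decidedAt: string;     // ISO 8601 timestamp
}

// Append-only: decisions are added, never edited, preserving the audit trail.
const ledger: ClassificationRecord[] = [];

ledger.push({
  systemName: "CV screening tool",
  tier: "high",
  justification: "Employment use case listed in Annex III",
  decidedBy: "compliance@example.com",
  decidedAt: new Date().toISOString(),
});
```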

Supply chain transparency

When you integrate third-party AI components, you inherit obligations. Sudory tracks your AI vendors and their subprocessors — so you know who's in your AI supply chain.

Cross-framework evidence

The AI Act overlaps with ISO 27001 on security, the GDPR on data protection, and product safety regulation. Sudory maps controls across frameworks — one evidence base, multiple regulations.
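
A cross-framework mapping is essentially a many-to-many relation between internal controls and regulatory requirements. The sketch below illustrates the idea; the control ID, evidence file, and requirement labels are invented for the example.

```typescript
// Hypothetical mapping of one internal control to requirements in several frameworks.
interface ControlMapping {
  controlId: string;  // internal control identifier (invented)
  evidence: string[]; // artifacts collected once, reused across frameworks
  satisfies: { framework: string; requirement: string }[];
}

const loggingControl: ControlMapping = {
  controlId: "CTL-042",
  evidence: ["audit-log-review-2025Q1.pdf"],
  satisfies: [
    { framework: "AI Act", requirement: "record-keeping for high-risk systems" },
    { framework: "ISO 27001", requirement: "logging and monitoring controls" },
    { framework: "GDPR", requirement: "security of processing" },
  ],
};
```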

Continuous risk management

Article 9 requires ongoing risk management. Sudory's risk register tracks AI risks with likelihood × impact scoring, treatment strategies, and residual risk calculations.
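
The scoring itself is simple arithmetic. Below is a minimal sketch assuming 1–5 scales and a proportional treatment model; both are assumptions for illustration, not Sudory's documented formula.

```typescript
// Hypothetical likelihood × impact scoring on 1-5 scales.
function riskScore(likelihood: number, impact: number): number {
  return likelihood * impact; // inherent risk: 1 (negligible) to 25 (critical)
}

// Residual risk after treatment, assuming controls reduce risk proportionally.
function residualRisk(inherent: number, controlEffectiveness: number): number {
  // controlEffectiveness in [0, 1]; 0.6 means controls mitigate 60% of the risk
  return inherent * (1 - controlEffectiveness);
}

const inherent = riskScore(4, 5);             // 20: likely and severe
const residual = residualRisk(inherent, 0.6); // 8 after treatment
console.log({ inherent, residual });
```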

Policy enforcement

Define policies for AI governance — approval workflows for new AI tools, review cycles for high-risk systems, separation of duties for classification decisions.
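
One way to express such policies is as data that an engine evaluates. The rule shape below is a hypothetical sketch, not Sudory's policy language.

```typescript
// Hypothetical policy rules for AI governance workflows.
interface PolicyRule {
  trigger: "new-ai-vendor" | "classification-change" | "review-due";
  appliesTo: "all" | "high-risk-only";
  requiredApprovals: number;   // values above 1 enforce separation of duties
  reviewIntervalDays?: number; // periodic re-review, e.g. for high-risk systems
}

const rules: PolicyRule[] = [
  { trigger: "new-ai-vendor", appliesTo: "all", requiredApprovals: 1 },
  { trigger: "classification-change", appliesTo: "high-risk-only", requiredApprovals: 2 },
  { trigger: "review-due", appliesTo: "high-risk-only", requiredApprovals: 1, reviewIntervalDays: 180 },
];
```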

For MSPs

AI Act readiness across clients

Your clients are adopting AI tools faster than their compliance teams can track. Help them get ahead of the AI Act's phased enforcement deadlines.

AI inventory across clients

Your clients use AI tools they may not have classified yet. Sudory's shadow IT discovery surfaces AI vendors across client portfolios — the first step to AI Act compliance.

Classification as a service

Help clients classify their AI systems by risk level. Document decisions in the compliance ledger. Deliver AI Act readiness as a managed service.

Cross-regulation bundling

Clients subject to the AI Act likely also need NIS2, GDPR, and ISO 27001 compliance. Sudory's cross-framework mapping means one platform covers all four.

Discover the AI in your supply chain

Start with the vendor directory to identify AI providers in your stack. Shadow IT discovery surfaces AI tools your team adopted without approval. Classification and governance follow.