The EU AI Act (Regulation 2024/1689) is the world’s first comprehensive artificial intelligence regulation. It establishes a risk-based framework that classifies AI systems into risk tiers and imposes obligations proportional to the level of risk.
The EU AI Act entered into force on August 1, 2024 and reaches full application on August 2, 2026; the bans on prohibited practices and the AI literacy obligations have applied since February 2, 2025. Obligations for high-risk AI systems embedded in products covered by existing EU product legislation apply from August 2, 2027. Penalties can reach €35 million or 7% of global annual turnover, whichever is higher.

Who needs EU AI Act compliance?

AI providers

Organizations that develop or place AI systems on the EU market, regardless of where they are based.

AI deployers

Organizations that use AI systems within the EU for business purposes.

Risk classification tiers

Tier | Description | Examples
Unacceptable | Banned AI practices | Social scoring, manipulative techniques, real-time remote biometric identification in public spaces (with narrow law-enforcement exceptions)
High-risk | Strict requirements | Credit scoring, employment decisions, healthcare diagnostics, law enforcement
Limited risk | Transparency obligations | Chatbots, emotion recognition, deepfake generators
Minimal risk | No specific requirements | Spam filters, AI-powered games
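The tier structure above can be sketched as a simple lookup. This is an illustrative sketch only: the `classify_risk` helper and the use-case keys are ours, and real classification requires legal analysis of the Act itself, not a dictionary.

```python
# Hypothetical sketch: map example AI use cases to EU AI Act risk tiers.
# Tier assignments mirror the examples in the table above.
RISK_TIERS = {
    "social_scoring": "unacceptable",
    "credit_scoring": "high",
    "employment_screening": "high",
    "chatbot": "limited",
    "spam_filter": "minimal",
}

def classify_risk(use_case: str) -> str:
    """Return the risk tier for a known use case, or 'unclassified'."""
    return RISK_TIERS.get(use_case, "unclassified")
```

Unknown use cases fall through to "unclassified" rather than defaulting to a tier, mirroring the fact that classification under the Act must be determined case by case.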

Key requirements for high-risk AI

  • Risk management system throughout the AI lifecycle
  • Data governance for training, validation, and testing datasets
  • Technical documentation and record-keeping
  • Transparency: deployers must receive clear instructions for use, including the system's capabilities and limitations
  • Human oversight mechanisms
  • Accuracy, robustness, and cybersecurity requirements
  • Registration in the EU AI database
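A readiness review against the list above can be modeled as a simple gap check. This is a hedged sketch: the control names and the `missing_controls` helper are illustrative labels we chose, not terms defined by the Act.

```python
# Illustrative checklist of the high-risk obligations listed above.
REQUIRED_CONTROLS = {
    "risk_management_system",
    "data_governance",
    "technical_documentation",
    "record_keeping",
    "transparency_to_deployers",
    "human_oversight",
    "accuracy_robustness_cybersecurity",
    "eu_database_registration",
}

def missing_controls(implemented: set) -> set:
    """Controls still outstanding before a high-risk system is placed on the market."""
    return REQUIRED_CONTROLS - implemented
```

A gap check like this is only a starting point; each control maps to substantive requirements that need evidence, not just a checkbox.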

How DSALTA helps

  • EU AI Act controls mapped to risk tier requirements
  • AI system inventory and classification
  • Risk assessment tools for AI-specific risks
  • Documentation templates for transparency and governance
  • Cross-framework mapping — aligns with ISO 42001, NIST AI RMF, DORA, and NIS 2

Frequently asked questions

Does the EU AI Act apply to companies outside the EU?
Yes, if your AI systems are placed on the EU market or their output is used in the EU. The regulation has extraterritorial reach similar to the GDPR.
What are the penalties for non-compliance?
Fines can reach €35 million or 7% of global annual turnover (whichever is higher) for prohibited AI practices, €15 million or 3% for most other violations, including high-risk obligations, and €7.5 million or 1.5% for supplying incorrect information to authorities.
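The "whichever is higher" rule behind these fines can be shown with a short calculation. This sketch is for illustration only; the function and tier names are ours, not the Act's, and actual fines are set by regulators case by case.

```python
# Fine tiers: the higher of a fixed cap or a percentage of
# worldwide annual turnover. Amounts in euros.
PENALTY_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),
    "high_risk_violation": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.015),
}

def max_fine(violation: str, annual_turnover_eur: float) -> float:
    """Maximum fine: the greater of the fixed cap and the turnover share."""
    cap, pct = PENALTY_TIERS[violation]
    return max(cap, pct * annual_turnover_eur)

# For a company with EUR 1 billion turnover, a prohibited practice
# exposes it to max(35_000_000, 0.07 * 1e9) = 70,000,000 euros.
```

Note that for smaller companies the fixed cap dominates, which is why the percentage alone understates the exposure.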
How does the EU AI Act relate to the NIST AI RMF?
Both use risk-based approaches. Implementing the NIST AI RMF provides a strong operational foundation for EU AI Act compliance, though it does not by itself satisfy the Act's legal obligations.