AI Act 2026: What's Changing in European Regulation for Your AI
A practical guide to making your AI compliant with the AI Act — obligations, timeline, and concrete steps for SMEs and mid-market companies.
Table of Contents
- 1. Introduction: Why the AI Act Concerns You Right Now
- 2. AI Act 2026: The Implementation Timeline
- 3. Risk Classification: Where Does Your AI Stand?
- 4. Your Specific Obligations According to Your Role
- 5. The 8 Steps to Make Your AI Compliant with the AI Act
- 6. AI Act + GDPR: Dual Compliance
- 7. Our Support: AI Compliant with the AI Act from Design
- 8. FAQ — AI Act for SMEs
- 9. Sources and References
1. Introduction: Why the AI Act Concerns You Right Now
The AI Act (European Regulation on Artificial Intelligence) came into force on August 1, 2024. It's the first legal framework worldwide specifically designed to regulate artificial intelligence. If you use, develop, or deploy AI systems in the European Union, you are directly affected — and the first compliance deadlines are already behind us.
Yet the reality is striking: according to a Center for Data Innovation survey from late 2025, fewer than 30% of European SMEs have begun steps to make their AI compliant with the AI Act. Many still believe this regulation only concerns tech giants. This is a strategic mistake.
Most companies that automate their processes with AI — whether CV screening, customer scoring, chatbots, or demand forecasting — are using systems that fall under the AI Act's scope. And the penalties are significant: up to €35 million or 7% of global turnover for the most serious violations.
This article is a practical guide. It's not about summarizing legal text — for that, see our AI Act white paper. Here, we'll show you concretely what you need to do to make your AI compliant with the AI Act, step by step, with a clear timeline and prioritized actions.
"AI Act compliance is not a brake on innovation — it's a competitive advantage. Companies that integrate compliance from the design stage of their AI systems are the ones that will win the trust of customers and regulators."
2. AI Act 2026: The Implementation Timeline
The AI Act does not apply all at once. Its implementation is progressive, with deadlines staggered between 2025 and 2027. Here's the complete timeline to plan your compliance:
February 2, 2025 — Prohibited practices: subliminal manipulation, social scoring, real-time remote biometric identification in public spaces (with limited exceptions). If you use any of these systems, you are already in violation.
August 2, 2025 — General-purpose AI models: applies to foundation model providers (GPT, Claude, Mistral, etc.). Obligations for technical documentation, transparency, and systemic risk management.
August 2, 2026 — High-risk systems (Annex III): this deadline concerns the majority of SMEs and mid-market companies. AI systems used in HR, finance, insurance, health, education — all subject to enhanced risk management, transparency, and human oversight obligations.
August 2, 2027 — High-risk systems embedded in regulated products (Annex I): concerns AI systems integrated into medical devices, vehicles, industrial machinery, etc. These products are already subject to sector-specific regulations (CE marking).
What this means for you: if your company uses AI to automate recruitment, credit scoring, claims management, or any other function listed in Annex III, you have until August 2, 2026 to ensure your AI is compliant with the AI Act. That's in a few months. The countdown has started.
3. Risk Classification: Where Does Your AI Stand?
The core of the AI Act is based on a risk-based approach. Each AI system is classified into one of four categories, and your obligations depend directly on this classification. Identifying where your AI stands is the first step to building AI compliant with the AI Act.
Unacceptable Risk — Prohibited
Certain AI practices have been simply prohibited since February 2, 2025:
- Subliminal manipulation or exploitation of vulnerabilities (age, disability)
- Social scoring by public authorities
- Real-time remote biometric identification in public spaces (except strict security exceptions)
- Emotion inference in the workplace or educational institutions (except for medical or safety reasons)
- Building facial recognition databases through mass scraping
For SMEs: if you're not using any of these systems, this category doesn't directly concern you. But verify anyway that your behavioral analysis or targeted advertising tools don't cross this line.
High Risk — Enhanced Obligations
This category concerns the largest number of SMEs using AI automation. A system is high-risk if it's used in one of the following domains (Annex III):
- Human Resources: CV screening, candidate scoring, automated recruitment decisions
- Finance: credit scoring, risk assessment, fraud detection
- Insurance: automated pricing, claim assessment
- Education: automated grading, school placement
- Essential Services: housing access, social assistance, public services
- Justice and Law Enforcement: recidivism risk assessment, profiling
- Migration: visa and asylum application evaluation
If your company uses an AI tool in one of these domains, you must implement a risk management system, technical documentation, human oversight, and transparency mechanisms.
Limited Risk — Transparency Obligations
AI systems that interact directly with users are subject to transparency obligations:
- Chatbots: users must know they're interacting with AI
- Deepfakes: AI-generated content must be labeled
- Emotion recognition or biometric categorization systems: affected individuals must be informed
- AI-generated content: texts, images, and videos created by AI must be identifiable as such
If your SME uses an AI chatbot on its website or generates marketing content with AI, this category applies to you. The obligation is simple: clearly inform the user.
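To see what this looks like in practice, here is a minimal Python sketch of a disclosure shown before any other exchange in a chatbot session. The `ChatSession` class and its methods are hypothetical stand-ins for whatever chatbot framework you actually use, and the wording of the notice should be validated by your legal team.

```python
# Minimal sketch: show an AI disclosure before any other chatbot exchange.
# `ChatSession` is a hypothetical stand-in for your chatbot framework.

AI_DISCLOSURE = (
    "You are chatting with an automated AI assistant, not a human. "
    "You can request a human agent at any time."
)

class ChatSession:
    """Stand-in for a real chatbot session object."""
    def send(self, text: str) -> None:
        print(f"[bot] {text}")

    def log_event(self, event: str) -> None:
        print(f"[audit] {event}")  # keep evidence the disclosure was shown

def start_session(session: ChatSession) -> None:
    session.send(AI_DISCLOSURE)              # disclosure comes first, always
    session.log_event("ai_disclosure_shown")

start_session(ChatSession())
```

Logging that the notice was actually displayed gives you evidence of compliance if a user later claims they were not informed.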
Minimal Risk — No Specific Obligations
The vast majority of AI systems fall into this category: spam filters, content recommendations, internal logistics optimization, stock prediction, etc. No specific AI Act obligations apply, though adoption of a voluntary code of conduct is encouraged.
Not sure which category your AI falls into?
Our experts perform a free classification audit to identify your obligations under the AI Act. Within 48 hours, you'll know exactly what you need to do.
Request a classification audit →
4. Your Specific Obligations According to Your Role
The AI Act distinguishes several roles in the AI value chain. Understanding your role is essential to know which obligations apply to you and to build AI compliant with European regulation.
Provider
You develop or have developed an AI system that you place on the market or put into service under your own name. This is the most demanding role in terms of compliance:
- Documented and updated risk management system
- Training data: quality, representativeness, traceability
- Complete technical documentation
- Automatic logging
- Transparency and user information
- Human oversight integrated from design
- Accuracy, robustness, and cybersecurity
- Conformity assessment (self-assessment or notified body, depending on the domain)
- CE marking and EU conformity declaration
- Registration in the EU database
Deployer — The Most Common Role for SMEs
You use an AI system developed by a third party (SaaS, API, integrated tool) under your authority. This applies to the majority of SMEs and mid-market companies that buy AI solutions rather than develop them in-house.
Your obligations as a deployer of AI compliant with the AI Act for high-risk systems:
- Compliant use: follow the provider's instructions
- Human oversight: designate competent people to oversee the system
- Input data: ensure the data you provide to the system is relevant and of good quality
- Continuous monitoring: monitor system performance and report incidents
- Fundamental Rights Impact Assessment (FRIA): mandatory for certain deployers (public bodies, essential services, certain private activities)
- Transparency: inform affected individuals when decisions affecting them are made or assisted by AI
- Log preservation: keep automatically generated logs for at least 6 months
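To make the log-preservation point concrete, here is a minimal Python sketch of deployer-side decision logging with the six-month minimum retention in mind. The file layout, field names, and purge helper are our own illustrative choices, not AI Act prescriptions.

```python
import json
from datetime import datetime, timedelta, timezone
from pathlib import Path

LOG_DIR = Path("ai_decision_logs")   # hypothetical storage location
MIN_RETENTION = timedelta(days=183)  # AI Act: keep logs at least 6 months

def log_decision(system_id: str, inputs: dict, output: dict, reviewer: str) -> None:
    """Append one AI-assisted decision to a daily JSONL file."""
    LOG_DIR.mkdir(exist_ok=True)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,      # which AI system produced the output
        "inputs": inputs,            # input data provided to the system
        "output": output,            # the system's decision or score
        "human_reviewer": reviewer,  # who exercised human oversight
    }
    day_file = LOG_DIR / f"{datetime.now(timezone.utc):%Y-%m-%d}.jsonl"
    with day_file.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def purge_expired_logs() -> None:
    """Delete only files older than the 6-month minimum retention."""
    cutoff = datetime.now(timezone.utc) - MIN_RETENTION
    for file in LOG_DIR.glob("*.jsonl"):
        file_day = datetime.strptime(file.stem, "%Y-%m-%d").replace(tzinfo=timezone.utc)
        if file_day < cutoff:
            file.unlink()
```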
Importer and Distributor
If you import or distribute AI systems in the EU, you have verification obligations: ensure the provider has met its obligations, that CE marking is present, and that documentation is available. If you substantially modify the system, you become a provider.
Attention to Role Change
A crucial point: if you substantially modify an AI system (for example by fine-tuning it with your own data, changing its purpose, or integrating it into a larger system), you become a provider under the AI Act. This means all provider obligations apply to you. This is a common pitfall for SMEs that customize open-source AI solutions.
5. The 8 Steps to Make Your AI Compliant with the AI Act
Here's a concrete action plan, adapted for SMEs and mid-market companies, to achieve compliance before August 2, 2026. Each step is ordered by priority and logical dependency.
Step 1: Map Your AI Systems
First and foremost, you must know which AI systems you use. Many SMEs underestimate the number of tools incorporating AI in their processes. Create a comprehensive inventory:
- SaaS tools with AI components (CRM, ATS, ERP, marketing tools)
- AI APIs consumed (GPT, Claude, computer vision services)
- Models developed internally or by service providers
- Automations using machine learning (even basic ones)
- Chatbots and virtual assistants
For each system, document: the provider, the purpose, the data processed, the people affected, and the level of autonomous decision-making.
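A lightweight way to hold this inventory is one structured record per system. The Python sketch below uses field names of our own choosing — illustrative, not official AI Act terminology — and a spreadsheet works just as well for most SMEs.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in the AI systems inventory (Step 1).
    Field names are illustrative, not official AI Act terminology."""
    name: str                  # e.g. "ATS CV screening module"
    provider: str              # vendor or internal team
    purpose: str               # what business decision it supports
    data_processed: list[str]  # categories of input data
    people_affected: str       # candidates, customers, employees...
    autonomy_level: str        # "suggests" / "decides with review" / "decides alone"

inventory: list[AISystemRecord] = [
    AISystemRecord(
        name="CV screening in ATS",
        provider="Example SaaS vendor",  # hypothetical
        purpose="Pre-rank job applications",
        data_processed=["CV text", "application form answers"],
        people_affected="Job candidates",
        autonomy_level="suggests",
    ),
]
```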
Step 2: Classify Each System by Risk Level
Using the classification framework described in the previous section, assign a risk level to each system. Be conservative in your assessment — when in doubt, classify higher. A system misclassified as "minimal risk" when it's "high risk" exposes you to penalties.
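The "when in doubt, classify higher" rule can be encoded directly. The sketch below is deliberately naive — real classification means reading Annex III, not keyword matching — but it shows how an ordered risk scale makes the conservative choice automatic. All names are hypothetical.

```python
from enum import IntEnum

class RiskTier(IntEnum):
    """AI Act risk tiers, ordered so max() picks the stricter one."""
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3
    UNACCEPTABLE = 4

# Hypothetical keyword map: real classification requires reading Annex III,
# not string matching — this only illustrates the "classify higher" rule.
ANNEX_III_DOMAINS = {"recruitment", "credit scoring", "insurance pricing",
                     "grading", "essential services"}

def classify(domains: set[str], interacts_with_users: bool) -> RiskTier:
    candidates = [RiskTier.MINIMAL]
    if domains & ANNEX_III_DOMAINS:
        candidates.append(RiskTier.HIGH)
    if interacts_with_users:   # chatbots, generated content, etc.
        candidates.append(RiskTier.LIMITED)
    return max(candidates)     # when in doubt, keep the stricter tier

print(classify({"recruitment"}, interacts_with_users=True).name)  # HIGH
```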
Step 3: Verify the Absence of Prohibited Practices
Review your systems to ensure none fall into the "unacceptable risk" category. Watch for subtle cases: a behavioral scoring tool for your employees could constitute social scoring; an emotion analysis system in a video interview is now prohibited.
Step 4: Establish AI Governance
Designate an AI compliance officer within your organization. This person — or team — will be responsible for:
- Maintaining the AI systems inventory
- Coordinating risk assessments
- Ensuring liaison with AI providers
- Training user teams
- Managing incidents and reports
In an SME, this role can be combined with the DPO (Data Protection Officer), provided the person has the necessary technical skills.
Step 5: Document — The Pillar of Compliance
For each high-risk system, compile a compliance file containing:
- System sheet: description, purpose, provider, version, implementation date
- Risk assessment: identified risks, mitigation measures, residual risks
- Data: nature of input data, source, quality, potential biases
- Human oversight: who supervises, with what level of authority, deactivation procedure
- Transparency: how affected individuals are informed
- Logs: retention policy, duration, format
- Provider contract: AI Act compliance clauses, access to technical documentation
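One way to keep this file consistent across systems is to start every new system from the same skeleton. The Python template below simply mirrors the seven sections above; the keys are our own naming, and in practice each entry would point to full documents rather than short strings.

```python
# Hypothetical skeleton of a per-system compliance file (Step 5), mirroring
# the seven sections above; real entries would reference full documents.
compliance_file_template = {
    "system_sheet": {"description": "", "purpose": "", "provider": "",
                     "version": "", "implementation_date": ""},
    "risk_assessment": {"identified_risks": [], "mitigations": [],
                        "residual_risks": []},
    "data": {"input_nature": "", "source": "", "quality_checks": [],
             "known_biases": []},
    "human_oversight": {"supervisor": "", "authority_level": "",
                        "deactivation_procedure": ""},
    "transparency": {"how_individuals_are_informed": ""},
    "logs": {"retention_policy": "", "duration": ">= 6 months", "format": ""},
    "provider_contract": {"ai_act_clauses": [], "doc_access": False},
}
```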
Step 6: Audit Your AI Providers
As a deployer, you must ensure your providers meet their own obligations. Incorporate into your contracts:
- Access clause to the system's technical documentation
- AI Act compliance guarantee and commitment to updates
- Notification of substantial system modifications
- Provision of usage instructions and system limitations
- Cooperation clause in case of audit or incident
If your provider can't provide these guarantees, seriously consider switching solutions. The regulatory risk falls on you as a deployer. Adopting AI automation compliant with the AI Act starts with choosing responsible technology partners.
Step 7: Train Your Teams
Article 4 of the AI Act imposes an "AI literacy" obligation on all providers and deployers. Concretely, anyone who uses, oversees, or makes decisions based on an AI system must have sufficient competence to do so. Implement:
- Training on the functioning and limitations of AI systems used
- Clear procedures for oversight and escalation
- Awareness of algorithmic bias and its consequences
- Accessible documentation of best practices for use
Step 8: Establish Continuous Monitoring
Compliance is not a one-time exercise. The AI Act requires permanent post-deployment monitoring. Implement:
- Performance monitoring: model drift, error rates, emerging biases
- Incident management: procedure for reporting serious malfunctions to the national authority
- Periodic review: annual internal audit of each high-risk system
- Change register: traceability of any update or configuration change
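For the performance-monitoring point, one widely used drift heuristic is the Population Stability Index (PSI), which compares the distribution of model scores at deployment with the distribution observed in production. The sketch below shows the calculation; the 0.1/0.25 thresholds are industry rules of thumb, not AI Act requirements.

```python
import math

def population_stability_index(expected: list[float], actual: list[float]) -> float:
    """PSI between two binned score distributions (each summing to 1.0)."""
    eps = 1e-6  # guards against log(0) on empty bins
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected, actual)
    )

# Score distribution captured at deployment vs. last month's production traffic
baseline = [0.10, 0.25, 0.30, 0.25, 0.10]
current = [0.05, 0.15, 0.30, 0.30, 0.20]

psi = population_stability_index(baseline, current)
status = "stable" if psi < 0.1 else "monitor closely" if psi <= 0.25 else "investigate"
print(f"PSI = {psi:.3f} -> {status}")  # PSI = 0.164 -> monitor closely
```

A drift alert like this feeds directly into the incident-management and periodic-review items above: it tells you when a high-risk system needs a closer look before the annual audit.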
To dive deeper into compliance in an automation context, see our comprehensive guide on compliant AI automation.
Need support with these 8 steps?
JAIKIN guides SMEs and mid-market companies through each step of AI Act compliance — from initial mapping to continuous monitoring. We make your AI compliant with the AI Act without slowing your business.
Schedule support →
6. AI Act + GDPR: Dual Compliance
If you're already compliant with GDPR — and you should be since 2018 — you have a head start. The AI Act and GDPR complement each other, and many measures required by the AI Act overlap with GDPR requirements. But beware: the two texts are not interchangeable.
What GDPR Already Covers
- Impact assessment (DPIA): GDPR already requires an impact assessment for high-risk processing. The AI Act adds a FRIA (Fundamental Rights Impact Assessment) for certain deployers. Both assessments can be combined.
- Transparency: GDPR already requires informing individuals about automated decisions (Article 22). The AI Act strengthens and broadens this obligation.
- Individual rights: the right to human intervention (GDPR) aligns with human oversight requirements (AI Act).
- Data governance: data minimization, quality, and traceability principles are common.
What the AI Act Adds
- Risk classification: GDPR lacks a system to classify AI systems by risk level
- Technical requirements: robustness, accuracy, cybersecurity specific to AI
- Automatic logging: specific logging obligations beyond what GDPR requires
- CE marking and registration: entirely new, no GDPR equivalent
- AI literacy: obligation for specific AI training
Dual Compliance Strategy
Our recommendation: adopt an integrated approach. Don't create two parallel governance systems. Extend your existing GDPR register to include AI Act requirements. Merge your impact assessments. Align your DPO and AI compliance officer.
For a detailed guide on GDPR compliance for your AI systems, see our dedicated article: AI Compliant with GDPR — The Practical Guide.
"Companies that have already invested in GDPR compliance are better prepared for the AI Act. Privacy by design and accountability principles directly apply to AI compliance. This is a real competitive advantage for European SMEs."
7. Our Support: AI Compliant with the AI Act from Design
At JAIKIN, we don't just make your existing AI compliant. We build your AI automation systems with compliance integrated from design — what we call compliance by design.
Our 4-Phase Approach
Phase 1 — Diagnosis (2 weeks)
- Complete mapping of your AI systems
- Risk classification for each system
- Identification of compliance gaps
- Diagnostic report with prioritized recommendations
Phase 2 — Action Plan (1 week)
- Definition of compliance roadmap
- Resource and timeline estimation
- Identification of quick wins (fast, high-impact actions)
- Validation with your legal and technical teams
Phase 3 — Implementation (4-12 weeks depending on scope)
- Writing technical documentation and compliance files
- Implementing human oversight mechanisms
- Setting up monitoring and logging
- Audit and renegotiation of provider contracts
- Team training (AI literacy)
Phase 4 — Continuous Monitoring (optional)
- Monthly performance and bias monitoring
- Regulatory watch and file updates
- Support during audits or inspections
- Full annual review
If you're starting a new automation project, we integrate AI Act compliance into the solution design itself. It's more efficient, less costly, and gives you a competitive advantage. Discover our comprehensive guide on AI automation for SMEs to see how we combine performance and compliance.
Why Choose JAIKIN for Your AI Act Compliance?
- Dual expertise: we are both AI developers and European regulatory experts
- Pragmatic approach: no unnecessary legal jargon — concrete actions adapted to your SME reality
- Integrated compliance: our automation solutions are natively compliant with the AI Act
- Permanent monitoring: we track regulatory developments, EU AI Office guidelines, and national authority decisions
- European network: presence in France, Germany, and beyond for multi-jurisdictional compliance
Get Ahead on the AI Act
Don't just comply with regulation — make it an asset. Contact JAIKIN for an AI Act compliance diagnostic and a custom action plan for your company.
Get an AI Act Diagnostic →
8. FAQ — AI Act for SMEs
Does the AI Act apply to SMEs that simply use ChatGPT or SaaS AI tools?
Yes. As soon as you use an AI system in a professional context, you're considered a "deployer" under the AI Act. If this tool is used in a high-risk domain (HR, finance, etc.), you have specific supervision, transparency, and documentation obligations. Using a simple SaaS tool doesn't exempt you from these responsibilities. Making your AI compliant with the AI Act is an obligation, even if you didn't develop the system.
What penalties does the AI Act impose on SMEs?
Penalties vary by violation severity: up to €35 million or 7% of global turnover for prohibited practices, up to €15 million or 3% for non-compliance with high-risk system obligations, and up to €7.5 million or 1% for providing incorrect information. However, the AI Act requires penalties to be "proportionate," and SMEs and startups benefit from reduced caps. In practice, reputational risk and loss of customer trust concern SME leaders most.
Is my customer service chatbot covered by the AI Act?
Yes, but likely only at the "limited risk" level. Your main obligation is to clearly inform users they're interacting with an AI system, not a human. However, if your chatbot makes decisions substantially affecting individual rights (service refusal, claims evaluation, etc.), it could be classified as high-risk. A case-by-case analysis is necessary.
Does the AI Act apply to companies outside the EU?
Yes. Like GDPR, the AI Act has extraterritorial scope. It applies to any provider or deployer that puts an AI system on the EU market or whose results are used in the EU, regardless of where the company is based. If you're a French SME using an American tool, you're a deployer subject to the AI Act, and your American provider is also subject to its own obligations.
AI Act SMEs: Are there exemptions for small companies?
The AI Act includes several measures to limit burden on SMEs and startups: prioritized access to "regulatory sandboxes" established by member states, reduced penalty caps, and simplified compliance documentation options. The EU AI Office is also working to publish practical guides and templates adapted to SMEs. However, core obligations remain the same — risk classification, human oversight, and transparency are non-negotiable.
How do I know if my AI is "high-risk" under the AI Act?
See Annex III of the Regulation. It exhaustively lists domains considered high-risk: biometric identification, critical infrastructure management, education, employment, essential services, law enforcement, migration, justice. If your AI system is used in one of these domains AND influences or makes decisions affecting individuals, it's very likely high-risk. When in doubt, a classification audit will give you a clear answer. We offer this audit free of charge — contact us.
9. Sources and References
[1] Regulation (EU) 2024/1689 of the European Parliament and of the Council establishing harmonized rules concerning artificial intelligence — Official text on EUR-Lex
[2] EU AI Office — European Commission — European AI Policy
[3] CNIL — Artificial Intelligence and AI Act — CNIL AI Resources
[4] AI Act Explorer — Interactive tool for navigating the regulation — artificialintelligenceact.eu
[5] EU AI Pact — Voluntary company commitments — AI Pact
[6] JAIKIN Reference Article — AI Act: Our Complete White Paper
Article updated February 6, 2026. The information contained in this article is provided for informational purposes and does not constitute legal advice. For personalized guidance on AI Act compliance for your company, contact our experts.
Related reading
AI and GDPR: How to Deploy Compliant AI in Business
73% of companies worry about AI/GDPR compliance. This guide details your 7 obligations, a practical checklist, and our compliant-by-design AI approach.
EU AI Act: The Complete Reference Guide
Comprehensive and neutral analysis of EU Regulation 2024/1689 on artificial intelligence: risk classification, obligations by actor, penalties, and implementation timeline.
Compliant AI Automation: Balancing Performance and Regulation
Compliance is not a barrier to automation — it's a competitive advantage. Discover the 3 pillars of GDPR and AI Act compliant AI automation.