GDPR and AI Act compliant AI: complete guide for SMEs and mid-market companies
Deploy artificial intelligence whilst respecting the European regulatory framework
Compliance as a competitive advantage
In 2026, regulatory compliance of AI systems is no longer optional. With the GDPR firmly established and the European AI Act being phased in, companies using artificial intelligence must navigate a demanding legal framework.
But far from being a constraint, compliance is becoming a genuine competitive advantage. Clients, partners and prime contractors now demand guarantees on data protection and the transparency of automated processing. Compliant AI reassures, differentiates and opens markets.
At JAIKIN, we design compliant-by-design operational AI solutions: sovereign hosting, audited open-source models, complete logs, human-in-the-loop for sensitive decisions. Our clients sleep soundly. To discuss your project, contact us or discover our automation services.
GDPR and AI: your obligations in 2026
The European data protection framework fully applies to AI systems
GDPR (General Data Protection Regulation), in force since 2018, strictly governs all personal data processing. AI systems, which process data at scale to learn and decide, are directly affected. Three articles are especially important:
Article 22: Automated decision
Prohibits decisions producing legal effects or similarly significantly affecting a person when based solely on automated processing (including profiling), except under strict exceptions (explicit consent, contractual necessity, authorisation by law). For AI, this means implementing human-in-the-loop review for sensitive decisions (recruitment, credit, contract termination).
Article 35: GDPR Impact Assessment (DPIA)
Requires a Data Protection Impact Assessment for any processing likely to result in high risk to rights and freedoms. AI systems often fall into this category. The DPIA documents risks, security measures and processing proportionality.
Article 13: Transparency and information
Requires informing individuals whose data is processed: purposes, legal basis, retention period, existence of automated decision-making, individuals' rights. For AI, this means clearly documenting and communicating the system's operation.
In practice, any AI processing customer, employee or partner data must comply with these obligations. To go further, consult our detailed article on GDPR and AI.
AI Act: the new European regulatory framework
The European Union imposes a specific framework for artificial intelligence
Adopted in 2024, the AI Act is the world's first regulation to specifically govern artificial intelligence systems. It classifies AI systems according to their risk level and imposes increasing obligations:
Unacceptable risk (prohibited)
Social scoring systems, cognitive manipulation, real-time biometric identification in public spaces (except strict exceptions). These uses are outright prohibited in Europe.
High risk (strict regulation)
AI systems used in critical infrastructures, education, employment (recruitment, HR management), access to essential public or private services, law enforcement. Obligations: risk management system, detailed logging, technical documentation, human oversight, robustness and accuracy, certified compliance.
Limited risk (transparency required)
Chatbots, content generation systems, deepfakes. Main obligation: clearly inform the user they are interacting with AI.
Minimal risk (no specific obligation)
Anti-spam filters, non-sensitive content recommendation systems, video game AI. The AI Act imposes no particular constraints but GDPR may apply if personal data is processed.
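The four risk tiers above can be sketched as a simple lookup table, purely as a memory aid. The use-case labels below are illustrative examples drawn from this guide; real classification requires legal analysis of the specific system, so this is not a compliance tool.

```python
# Illustrative mapping of the AI Act's four risk tiers to example use
# cases. A memory aid only: actual classification needs legal review.
AI_ACT_RISK_TIERS = {
    "unacceptable": ["social scoring", "cognitive manipulation",
                     "real-time biometric ID in public spaces"],
    "high": ["recruitment screening", "credit scoring",
             "critical infrastructure control"],
    "limited": ["customer chatbot", "content generation", "deepfakes"],
    "minimal": ["spam filter", "video game AI"],
}

def risk_tier(use_case: str) -> str:
    """Look up the (illustrative) risk tier for a known use case."""
    for tier, examples in AI_ACT_RISK_TIERS.items():
        if use_case in examples:
            return tier
    return "unclassified"

print(risk_tier("customer chatbot"))  # limited
```

Anything returning "unclassified" here simply means the example table does not cover it, not that the AI Act imposes no obligations.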
Implementation timeline
Prohibition of unacceptable risk systems (February 2025)
Entry into force of rules on general-purpose AI models (August 2025)
Full application of obligations on high-risk systems (August 2026)
To go further, consult our detailed article on the AI Act or download our AI and compliance white paper.
GDPR + AI Act: navigating dual compliance
In practice, companies must simultaneously satisfy GDPR requirements (data protection) and AI Act requirements (AI governance). The two texts complement each other but do not fully overlap. Here is how their requirements line up:
GDPR requirements
- Minimisation of collected and processed data
- Determined, explicit and legitimate purposes
- Limited retention period
- Individuals' rights (access, rectification, erasure, portability)
- Security and confidentiality (encryption, pseudonymisation)
AI Act requirements
- System classification according to risk level
- Complete technical documentation of AI system
- Decision logging and traceability
- Human oversight (human-in-the-loop) for sensitive decisions
- Robustness, accuracy and bias testing
At JAIKIN, we support SMEs and mid-market companies in this dual compliance with a proven methodology. Consult our complete compliance methodology.
Our approach: compliant-by-design AI
6 principles for artificial intelligence respecting the European framework
Sovereign hosting (France/Europe)
Your data never leaves European territory. We favour French (OVH, Scaleway) or sovereign European hosts. No American or Chinese servers, no Cloud Act, no surprises.
Audited open-source models
We deploy open-source language models (Llama 3, Mistral, Qwen) whose weights and architecture are openly available and auditable. You know exactly how the model processes your data and can audit it at any time.
Complete logs and traceability
Every AI decision is logged: input, output, model used, timestamp, user. In case of CNIL audit or litigation, you have complete traceability. Retention period configurable according to your legal obligations.
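The logging described above can be sketched as an append-only JSON Lines audit log. This is a minimal illustration: the helper name, file path and example values are hypothetical, and the field names simply mirror the list above (input, output, model, timestamp, user).

```python
import json
from datetime import datetime, timezone

def log_ai_decision(log_file, *, user, model, prompt, output):
    """Append one AI decision to an append-only JSON Lines audit log."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,       # the operator on whose behalf the AI acted
        "model": model,     # exact model identifier, for traceability
        "input": prompt,
        "output": output,
    }
    with open(log_file, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry, ensure_ascii=False) + "\n")
    return entry

# Hypothetical example entry:
entry = log_ai_decision(
    "ai_audit.jsonl",
    user="hr.analyst@example.com",
    model="llama-3-70b",
    prompt="Score CV #4521 against job profile J-12",
    output="score=0.82",
)
```

One JSON object per line keeps the log easy to append to, grep through and export for an audit; the retention period is then enforced by whatever rotation policy your legal obligations dictate.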
Human-in-the-loop for sensitive decisions
For high-risk processing (HR, credit, contract termination), AI proposes but human validates. This complies with GDPR Article 22 and ensures no critical decision is taken without human intervention.
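The "AI proposes, human validates" rule can be expressed as a simple gate in the decision flow. A minimal sketch, in which the function name, category labels and return structure are all illustrative assumptions:

```python
# Decision types that may never be finalised by AI alone
# (illustrative labels matching the sensitive cases named above).
SENSITIVE_DECISIONS = {"recruitment", "credit", "contract_termination"}

def finalize_decision(decision_type, ai_proposal, human_approval=None):
    """Return the final decision, enforcing human validation where required."""
    if decision_type in SENSITIVE_DECISIONS:
        if human_approval is None:
            # No human input yet: the AI output stays a mere proposal.
            return {"status": "pending_human_review", "proposal": ai_proposal}
        # A human has explicitly decided; their decision prevails.
        return {"status": "final", "decision": human_approval,
                "proposal": ai_proposal}
    # Non-sensitive decisions may be fully automated.
    return {"status": "final", "decision": ai_proposal,
            "proposal": ai_proposal}

# AI alone cannot finalise a recruitment decision:
print(finalize_decision("recruitment", "shortlist")["status"])
# With explicit human validation, it becomes final:
print(finalize_decision("recruitment", "shortlist",
                        human_approval="invite")["status"])
```

Keeping both the AI proposal and the human decision in the record also feeds the audit trail: you can later show that the human genuinely exercised the final say.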
Minimisation and pseudonymisation
We only collect strictly necessary data for processing. Sensitive data (name, email, etc.) is pseudonymised or anonymised when possible. Training data is cleaned to avoid discriminatory bias.
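One common pseudonymisation technique is keyed hashing: direct identifiers are replaced by a stable pseudonym so records remain linkable without exposing the identity. A minimal sketch, assuming the secret key is managed separately from the data (the key shown is a placeholder, and the record fields are illustrative):

```python
import hashlib
import hmac

# Placeholder: in production the key would come from a key-management
# system and be stored separately from the pseudonymised data.
SECRET_KEY = b"replace-with-key-from-your-kms"

def pseudonymise(value: str) -> str:
    """Derive a stable pseudonym from an identifier via keyed hashing."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"),
                    hashlib.sha256).hexdigest()[:16]

# Original record with direct identifiers (illustrative):
record = {"name": "Jeanne Martin", "email": "jeanne@example.com",
          "churn_score": 0.73}

# Minimised, pseudonymised view: only the data the processing needs.
safe = {
    "subject_id": pseudonymise(record["email"]),  # stable pseudonym
    "churn_score": record["churn_score"],
}
```

Note that this is pseudonymisation, not anonymisation: whoever holds the key can re-identify the subject, so the data remains personal data under GDPR, just with reduced exposure.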
Documentation and transparency
Each AI system is subject to complete technical documentation: purpose, architecture, processed data, identified risks, security measures. This documentation meets AI Act requirements and facilitates audits.
5 JAIKIN compliance guarantees
Our contractual commitments to sleep soundly
Sovereignty guarantee
Hosting in France or Europe with contractual guarantee that your data never transits through extra-European servers. Annual audit clause to verify compliance with this commitment.
Transparency guarantee
Complete technical documentation delivered with each project: system architecture, models used, processed data, data flows. You know exactly what the AI does and how.
Auditability guarantee
Detailed logs of all AI decisions, retained according to your legal obligations (1 to 5 years depending on context). Export capability for CNIL audit or litigation.
DPIA included
For any project involving personal data, we conduct a GDPR Impact Assessment (DPIA) documenting risks and protection measures. Delivered before production rollout.
Legal support
Our GDPR and AI Act experts support you in dialogue with your DPO, CISO or CNIL. We can participate in internal validation committees or compliance audits.
3 compliant use cases in practice
HR automation: CV screening
A mid-market company receives 500 applications/month and wishes to automate initial screening. This use case is 'high risk' under the AI Act (employment) and engages GDPR Article 22 (automated decision-making).
AI analyses CVs and assigns a relevance score according to business criteria. The top 50 profiles are presented to HR with score justification. HR manually validates candidates selected for interview. AI proposes, human decides.
Compliance: human-in-the-loop ✓, complete logs ✓, DPIA completed ✓, transparent and non-discriminatory evaluation criteria ✓.
Customer scoring: churn prediction
A SaaS SME wants to identify customers at risk of cancelling in order to target them with retention actions. This involves personal data processing and profiling.
AI analyses product usage, support tickets and purchasing behaviour to score churn risk. High-risk accounts are automatically assigned to a Customer Success Manager with context. The CSM decides on the action (call, promotional offer, etc.).
Compliance: valid legal basis (legitimate interest) ✓, transparent customer information ✓, limited retention period ✓, right to object respected ✓.
Data extraction from invoices
An industrial SME processes 500 supplier invoices/month. Automatic extraction accelerates accounting entry but processes personal data (names, addresses).
AI extracts data from scanned invoices (OCR + NLP): amount, date, company registration number, VAT. Personal data (signatory name) is pseudonymised. The extracted data is presented for validation before injection into Sage.
Compliance: data minimisation ✓, pseudonymisation ✓, retention period aligned with accounting obligations (10 years) ✓, extraction logs ✓.
These three examples show that compliant AI remains fully effective. Compliance structures the project and reassures all stakeholders. To discover how to orchestrate these automations, consult our AI agents guide.
Frequently asked questions
How do I know if my AI system is compliant?
It depends on several criteria: where your data is hosted, which AI model you use, whether decisions are automated or validated by a human, and whether you have documented your data flows and conducted a DPIA. At JAIKIN, we offer a 2-3 day compliance audit to precisely assess your situation and identify the necessary adjustments. This audit is free and without commitment.
Can I use ChatGPT or Claude and remain compliant?
Yes, but with caution. The API versions of ChatGPT and Claude (not the free public versions) offer contractual guarantees on data confidentiality. However, your data transits through American servers, which raises sovereignty questions and may pose problems for sensitive data (health, HR, finance). For these cases, we recommend open-source models hosted on European infrastructure.
What is a DPIA and when is it mandatory?
The Data Protection Impact Assessment (DPIA) is a study documenting risks to the rights and freedoms of the individuals whose data is processed. It is mandatory whenever processing is likely to result in high risk: large-scale profiling, automated decisions producing legal effects, processing of sensitive data (health, ethnic origin). AI systems often fall into these categories. A DPIA takes 2-5 days depending on complexity and must be completed before production rollout.
What exactly is human-in-the-loop?
It is a mechanism ensuring that a sensitive decision is never taken by AI alone: the human retains final decision-making power. For example, in a CV screening system, the AI presents the top 50 profiles with score justification, but the human recruiter decides who to interview. The AI assists, the human decides. This mechanism satisfies GDPR Article 22 and the AI Act's requirements for high-risk systems.
How much does bringing an existing AI system into compliance cost?
It depends on the gap between your current situation and the regulatory requirements. For an already well-designed system hosted on American cloud, migrating to European infrastructure costs €3-8k. For a system requiring a DPIA, enhanced logging and human-in-the-loop, expect €10-20k in adjustments. At JAIKIN, we design compliant systems from the start, avoiding retrospective compliance costs.
Are there real sanctions for non-compliance?
Yes. GDPR sanctions can reach 4% of global turnover or €20M (whichever is higher). The AI Act provides for fines of up to €35M or 7% of global turnover for the most serious infringements (prohibited systems placed on the market). Beyond financial sanctions, the reputational risk is major: loss of customer trust, litigation, obligation to cease processing. Compliance is a protective investment.
Do I have to tell users they are interacting with AI?
Yes, in most cases. GDPR requires communicating the existence of automated decision-making (Article 13). The AI Act reinforces this obligation for limited-risk systems (e.g. chatbots). Concretely: if a chatbot responds to your customers, clearly indicate that it is AI. If an algorithm processes applications, inform the applicants. This transparency reassures and avoids litigation.
How does the GDPR right of access apply to AI?
The right of access (GDPR Article 15) fully applies to data processed by AI. You must be able to provide: the personal data processed, the processing purpose, the logic of automated decisions, and the recipients of the data. This is why complete logging and documentation are essential. Response deadline: a maximum of one month after receiving the request.
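In practice, answering an access request means being able to filter your decision logs by data subject. A minimal sketch, assuming a JSON Lines log where each entry carries a `subject_id` field (an assumption about your schema; the file name and sample values are illustrative):

```python
import json

def export_subject_data(log_path, subject_id):
    """Collect every logged AI decision concerning one data subject."""
    matches = []
    with open(log_path, encoding="utf-8") as f:
        for line in f:
            entry = json.loads(line)
            if entry.get("subject_id") == subject_id:
                matches.append(entry)
    return matches

# Demo with a small sample log (illustrative identifiers):
with open("decisions.jsonl", "w", encoding="utf-8") as f:
    f.write(json.dumps({"subject_id": "cand-042", "output": "score=0.82"}) + "\n")
    f.write(json.dumps({"subject_id": "cand-099", "output": "score=0.41"}) + "\n")

print(export_subject_data("decisions.jsonl", "cand-042"))  # one matching entry
```

A design like this only works if the subject identifier was recorded consistently at logging time, which is one more reason to define the log schema before production rollout.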
Can I train an AI model on my customers' data?
Only if you have a valid legal basis (explicit consent, legitimate interest, contract performance) and if you comply with the minimisation principle. In practice, training data must be anonymised or pseudonymised whenever possible. If you use a model hosted in Europe, you retain control of your data. If you send your data to OpenAI or Anthropic for fine-tuning, you lose this control and must obtain explicit consent.
Need a compliance audit?
We analyse your current AI system and identify necessary adjustments to comply with GDPR and AI Act.