EU AI Act 2026: What SMEs Need to Know (and Do) Before August
If your company uses ChatGPT, an AI-powered CRM, a chatbot on your website, or an automated CV screening tool, you are within the scope of the EU AI Act. The regulation takes full effect on August 2, 2026 — roughly six months from now.
Most small and medium businesses we talk to fall into one of two camps: those who have never heard of it, and those who assume it only applies to big tech companies. Both groups are wrong. The AI Act creates obligations for any business that uses AI systems in a professional context, regardless of size.
This guide covers what the regulation actually requires, how to figure out if you are affected, and what concrete steps to take before the deadline.
The AI Act in brief
The EU AI Act (Regulation 2024/1689) is the world’s first comprehensive legal framework for artificial intelligence. Adopted in 2024, it is being phased in between 2025 and 2027.
Its core principle is a risk-based classification. The higher the potential impact of an AI system on people’s fundamental rights, health, or safety, the stricter the obligations. This is a pragmatic approach: a spell checker and an automated hiring tool are not treated the same way.
Four risk levels
Unacceptable risk — Banned outright. This includes subliminal manipulation, social scoring (by public or private actors), emotion recognition in workplaces and schools (with narrow exceptions), real-time remote biometric identification in public spaces by law enforcement (with narrow exceptions), and untargeted scraping of facial images. These have been prohibited since February 2025.
High risk — Allowed but heavily regulated. AI systems that significantly affect people’s lives fall here. The regulation lists eight domains in Annex III: biometrics, critical infrastructure, education, employment and HR, access to essential services (credit scoring, insurance), law enforcement, migration, and administration of justice. For most SMEs, the relevant ones are automated CV screening, employee performance evaluation, and client creditworthiness scoring.
Limited risk — Transparency obligations. Systems that interact with people must disclose their nature. A chatbot must tell users they are talking to an AI. AI-generated content (text, images, video) must be identifiable as such, particularly deepfakes.
Minimal risk — No specific obligations. The vast majority of everyday AI tools (translation, spell checking, search, content drafting assistants) fall here. No legal requirements, though the regulation encourages voluntary codes of conduct.
Are you affected?
Almost certainly yes, if you use any AI tool professionally. The key question is your role.
Provider vs. deployer
The AI Act distinguishes four roles: provider, deployer, importer, and distributor. The one that matters for most SMEs is deployer — you use AI systems developed by someone else in a professional context.
You are a deployer if you use ChatGPT or Claude to draft communications, run a CRM with predictive scoring, deploy a chatbot through a SaaS platform, or use HR software with automated screening. The overwhelming majority of European SMEs are deployers.
You are a provider if you develop an AI system and place it on the market — for example, building a custom chatbot product for your clients or creating a scoring tool you sell as a service.
One important nuance: a deployer can become a provider by substantially modifying the intended purpose of a system. Using a product recommendation engine to assess creditworthiness, for instance, would shift your role — and your obligations.
The deployer misconception
Many businesses assume that being “just a user” means they have no obligations. This is incorrect. Deployers have real operational responsibilities, especially when the AI system impacts employees or customers. The obligations are lighter than those of providers, but they exist and are enforceable.
Concrete obligations for deployers
What the AI Act actually requires depends on the risk level of the systems you use.
Obligations that apply to everyone
AI literacy (Article 4). Since February 2025, every organization using AI must ensure that staff operating or overseeing AI systems have a sufficient level of AI literacy. This is not optional — it is already in force. In practice, it means training your teams on how the AI tools they use work, what their limitations are, and how to oversee them appropriately.
Banned practices check. Verify that none of your tools fall into the prohibited category. The most common risk area for SMEs is emotion analysis in video recruitment interviews.
Transparency obligations (limited-risk systems)
If you run a chatbot on your website, add a visible notice: “You are chatting with an AI assistant.” If you publish AI-generated content, label it accordingly. These are straightforward to implement and should be addressed now if they are not already.
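As a minimal illustration of the disclosure requirement (a sketch, not a legal implementation), the notice can be enforced structurally rather than left to each page template. The class and field names here are invented for this example:

```python
# Sketch: a chat session wrapper that guarantees the AI disclosure
# notice is emitted before any assistant output. Names are illustrative.

DISCLOSURE = "You are chatting with an AI assistant."

class ChatSession:
    def __init__(self):
        self.messages = []

    def start(self):
        # The disclosure comes first, so no user can interact with the
        # bot without having seen it.
        self.messages.append({"role": "system_notice", "text": DISCLOSURE})

    def reply(self, user_text, assistant_text):
        if not self.messages:
            self.start()  # defensive: never answer without the notice
        self.messages.append({"role": "user", "text": user_text})
        self.messages.append({"role": "assistant", "text": assistant_text})

session = ChatSession()
session.reply("What are your opening hours?", "We are open 9-17, Mon-Fri.")
print(session.messages[0]["text"])
```

The design point: putting the disclosure in the session logic, rather than in the UI layer, means it cannot silently disappear during a website redesign.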
High-risk obligations (the heavy lift)
If you use AI systems classified as high risk — automated hiring tools, credit scoring, performance evaluation — the requirements are significantly more demanding:
- Human oversight. You must maintain meaningful human control, including the ability to override or stop the system.
- Risk management. Document the risks associated with the system and the measures you have taken to mitigate them.
- Data governance. Understand what data the system uses, its origin, quality, and potential biases.
- Logging and traceability. Keep records of how the system operates and the decisions it produces.
- Incident reporting. Establish a procedure to report serious incidents involving high-risk AI systems.
- Registration. Ensure high-risk systems are registered in the EU database.
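To make the logging and human-oversight points concrete, here is a small sketch of a decision log for a hypothetical automated CV screener. The field names are illustrative, not prescribed by the regulation; the point is that each record ties the system's output to the human who reviewed it and to any override:

```python
# Sketch: an audit log for a high-risk system's decisions. Each record
# captures the AI outcome, the human reviewer, and the final decision
# if it differs. Field names are illustrative only.
from datetime import datetime, timezone

def log_decision(log, system_id, subject_ref, ai_outcome,
                 human_reviewer=None, human_override=None):
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "subject_ref": subject_ref,        # pseudonymised reference, not raw personal data
        "ai_outcome": ai_outcome,          # what the system decided
        "human_reviewer": human_reviewer,  # who exercised oversight
        "human_override": human_override,  # final decision, if different
    }
    log.append(record)
    return record

audit_log = []
log_decision(audit_log, "cv-screener-v2", "candidate-0042",
             ai_outcome="reject", human_reviewer="HR-01",
             human_override="shortlist")
print(audit_log[-1]["human_override"])
```

Even a log this simple demonstrates two obligations at once: traceability (every automated decision is recorded) and meaningful human oversight (the override field shows a human could, and did, reverse the system).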
The timeline
Not everything kicks in at once. Here are the dates that matter.
| Date | What applies | Who is concerned |
|---|---|---|
| Feb 2, 2025 | Banned AI practices prohibited. AI literacy obligation in force. | Everyone using AI systems |
| Aug 2, 2025 | Rules for general-purpose AI models (GPAI) | Foundation model providers |
| Aug 2, 2026 | Full application: high-risk systems, transparency, deployer obligations | Anyone using or providing high-risk AI |
| Aug 2, 2027 | AI embedded in regulated products (medical devices, vehicles) | Product manufacturers |
Note that the first row is not a future date. The AI literacy obligation and the ban on unacceptable practices are already law. If you have not addressed these yet, you are technically already behind.
One caveat: the European Commission proposed a “Digital Omnibus” package in late 2025 that could push certain high-risk obligations to December 2027. As of February 2026, this proposal has not been adopted. The prudent approach is to prepare for August 2026 and treat any delay as a bonus, not a plan.
Penalties
The regulation carries real financial consequences.
| Infringement | Maximum fine |
|---|---|
| Using a banned AI system | 35M EUR or 7% of global annual turnover |
| Non-compliance for a high-risk system | 15M EUR or 3% of global annual turnover |
| Failure to meet transparency obligations | 7.5M EUR or 1% of global annual turnover |
For SMEs, the regulation applies the lower of the two amounts (the fixed sum or the turnover percentage), so nobody is getting a 35-million-euro bill on a 2-million-euro turnover. But the reputational damage from a public enforcement action can be just as harmful as the fine itself. And national supervisory authorities across EU member states are currently standing up their enforcement structures.
6-step action plan
Step 1 — Inventory your AI systems
Many organizations do not have a complete picture of the AI systems they are using. Start with a full inventory.
For each tool, document: the tool name, provider, how it is used in your organization, what data it processes, who it affects (employees, clients, candidates), your role (provider or deployer), and the estimated risk level.
Where to look: SaaS subscriptions (check if “AI” or “machine learning” appears in the product description), plugins and extensions (Copilot in Office, Gemini in Workspace, coding assistants), business tools (CRM, HR, accounting, marketing — many now embed AI), and internal developments (scripts, automations, chatbots).
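The inventory fields above can be kept as structured data from day one, which makes the later classification and review steps easier. This is a sketch under assumptions (the entries, tool names, and field names are illustrative examples, not a prescribed format):

```python
# Sketch: an AI inventory mirroring the fields listed above, exported
# to CSV so it can live in a shared spreadsheet. Entries are examples.
from dataclasses import dataclass, asdict, fields
import csv, io

@dataclass
class AISystem:
    name: str
    provider: str
    usage: str           # how it is used in the organization
    data_processed: str
    affected: str        # employees, clients, candidates...
    role: str            # "deployer" or "provider"
    risk_level: str      # "unacceptable", "high", "limited", "minimal"

inventory = [
    AISystem("ChatGPT", "OpenAI", "drafting communications",
             "internal text", "employees", "deployer", "minimal"),
    AISystem("CV screener", "HRSoft (hypothetical)", "shortlisting candidates",
             "CVs, personal data", "candidates", "deployer", "high"),
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=[f.name for f in fields(AISystem)])
writer.writeheader()
for system in inventory:
    writer.writerow(asdict(system))
print(buf.getvalue())
```

A spreadsheet works just as well; the point is that every tool gets the same set of fields, so gaps are visible at a glance.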
Step 2 — Classify by risk level
For each system identified, determine the risk level using the four categories above. When in doubt, classify upward as a precaution.
Most SMEs find that the majority of their AI tools fall into the minimal-risk category. That is normal. The point of the inventory is to surface the one or two systems that might be high risk — an automated recruitment filter, a credit scoring module — because those carry the bulk of the obligations.
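The classification logic can be expressed as a coarse triage, defaulting upward when in doubt as recommended above. This is a sketch, not a legal determination: the domain labels are invented shorthand, and a real assessment of a borderline system belongs with legal counsel.

```python
# Sketch: coarse risk triage for an inventoried system. Domain labels
# are invented shorthand; "when in doubt, classify upward" is the rule.
ANNEX_III_SME_DOMAINS = {"recruitment", "employee_evaluation", "credit_scoring"}
PROHIBITED = {"emotion_recognition_at_work", "social_scoring"}

def triage(domain, interacts_with_people=False):
    if domain in PROHIBITED:
        return "unacceptable"
    if domain in ANNEX_III_SME_DOMAINS:
        return "high"
    if interacts_with_people:
        return "limited"   # transparency obligations (e.g. chatbots)
    return "minimal"

print(triage("recruitment"))                          # a high-risk Annex III domain
print(triage("chatbot", interacts_with_people=True))  # transparency obligations
print(triage("spell_checking"))                       # no specific obligations
```

Running every inventoried system through even a crude filter like this quickly surfaces the one or two tools that deserve a closer look.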
Step 3 — Verify your suppliers
As a deployer, you must ensure your SaaS providers meet their own obligations. Request their AI Act documentation or roadmap. Verify that high-risk systems are registered in the EU database. Add AI Act clauses to your contracts (compliance, documentation, audit rights, incident notification). Keep evidence of your due diligence.
Step 4 — Establish AI governance
Designate a person responsible for AI compliance. In smaller organizations, this could be the DPO (if you have one for GDPR), the IT security officer, a specialized legal counsel, or the CEO directly. What matters is that this person has a cross-functional view — IT, legal, and business operations — and direct access to leadership.
Assemble a project team that brings together IT, legal, HR, and the business units that use AI tools.
Step 5 — Document everything
For high-risk systems, the documentation requirements are substantial: system description and intended use, risk assessment and mitigation measures, data sources and quality controls, human oversight procedures, logging and traceability mechanisms, and known performance limitations.
For limited-risk systems, documentation is lighter: visible transparency notices and a procedure for labeling AI-generated content.
If you are already GDPR-compliant, you have a significant head start here. Your processing register, impact assessments, and DPO documentation are directly reusable as a foundation. More on that below.
Step 6 — Monitor and update
Compliance is not a one-time exercise. Set up regulatory monitoring (the text and its guidelines will evolve), an evaluation process for new AI tools before adoption, a periodic review of your AI inventory (quarterly is recommended), and an incident notification procedure for high-risk systems.
AI Act and GDPR: how they interact
The GDPR governs personal data. The AI Act governs AI systems. The two frameworks do not replace each other — they stack.
When an AI system processes personal data (which is very common), both apply simultaneously. CV screening involves personal data and is a high-risk AI system. Client scoring involves personal data and is potentially high-risk. A chatbot that collects information involves personal data and triggers transparency obligations.
What your GDPR compliance already covers
If you are already GDPR-compliant, you have a head start. Your processing register can be extended into an AI inventory. Your data protection impact assessments (DPIAs) can be supplemented with AI-specific risk evaluations. Your DPO can lead the AI Act compliance effort. Your breach notification procedures can be expanded to cover AI incidents. Your existing documentation provides a solid foundation to build on.
This overlap is significant. GDPR-mature organizations will find the AI Act less burdensome than those starting from scratch on both fronts.
What happens next
The AI Act is not theoretical — the first obligations are already enforceable, and the full framework arrives in August 2026. For SMEs, the good news is that most obligations are manageable with proper planning. The bad news is that six months goes fast.
Start with the inventory. That single step will tell you where you stand and what you need to prioritize. Everything else flows from there. If your inventory reveals no high-risk systems, your compliance path is mostly transparency notices and AI literacy training — achievable in weeks, not months. If it does reveal high-risk systems, you now know exactly where to focus.
The regulation will also continue to evolve. Implementing standards, sector-specific guidelines, and enforcement precedents will shape how the AI Act works in practice over the coming years. Building a compliance process now — rather than just checking boxes for August — positions you well for what comes next.
We run hands-on training sessions that walk SME teams through the full compliance process — from inventory to documentation — using your actual AI systems as working material. If you would rather not figure this out alone, get in touch.
This guide is provided for informational purposes and does not constitute legal advice. For analysis specific to your situation, consult a qualified legal professional.