
Most mid-sized companies don't realise they already have obligations, and the clock is ticking.
Here's a question we put to almost every ops or finance leader we work with:
Do you use any AI tools in your business – whether that's a chatbot on your website, AI features in your CRM, or automated reporting in your finance platform?
The answer is almost always yes.
Then we ask the follow-up: "Do you know what the EU AI Act requires you to do as a deployer of those systems?"
More often than not, the answer is "I'm not sure." And that's not a criticism – it's the norm. The regulation is new, the language is dense, and the guidance has been inconsistent.
That's the gap this article is designed to close. Not with legal jargon or regulatory deep-dives, but with the plain-English overview that ops, finance, service, and sales leaders actually need – plus a practical checklist you can start working through this week.
Why This Matters to You (Not Just Your Legal Team)
There's a common misconception that the EU AI Act only applies to companies that build AI. It doesn't. If your organisation uses AI systems in a professional capacity within the EU – or if the output of those systems affects people in the EU – you're likely classified as a "deployer." And deployers have their own set of legal obligations.
Like GDPR, the AI Act has extraterritorial reach: being based outside the EU doesn't take you out of scope if your AI systems' outputs affect people within it.
Think of it like power tools on a building site. You didn't manufacture the circular saw, but you're still responsible for training your workers, maintaining the equipment, and following safety protocols on site. The EU AI Act works the same way. The companies that build AI systems ("providers") have the heaviest regulatory burden. But the companies that deploy those systems in their operations carry real responsibilities too.
And those responsibilities are not theoretical. The EU AI Act carries significant financial penalties – scaled to the severity of the breach and tied to global turnover. The fines are structured in tiers: up to €35 million or 7% of worldwide annual turnover (whichever is higher) for prohibited AI practices, and up to €15 million or 3% for breaching most other obligations, including those that sit with deployers. These are the kind of numbers that get a board's attention.
What's Already in Force
The AI Act didn't arrive all at once. It's being phased in, and some obligations are already live.
AI Literacy (Live Since 2 February 2025)
Article 4 of the AI Act requires all providers and deployers of AI systems – regardless of risk level – to ensure their staff have a sufficient level of AI literacy. This isn't limited to your tech team. It covers anyone involved in operating or using AI systems on your behalf, which could include your HR team using a recruitment screening tool, your finance team using an AI-assisted forecasting platform, or your customer service team working alongside an AI chatbot.
The Act doesn't prescribe exactly how to achieve AI literacy. It gives organisations flexibility to design their own approaches. But "taking no action" is not a compliant approach. At a minimum, you need to demonstrate that you've assessed your teams' training needs, delivered appropriate training, and documented what you've done.
What this means practically: if your people are using AI tools at work and you haven't provided any structured training or guidance, you're already behind. National market surveillance authorities can begin enforcing this obligation from August 2026 – but private enforcement actions and reputational risk are already live.
Prohibited Practices (Live Since 2 February 2025)
Certain AI practices are now banned outright across the EU. These include AI systems that manipulate people's behaviour through subliminal techniques, exploit vulnerabilities linked to age, disability, or social and economic situation, or enable social scoring. Emotion recognition systems in the workplace are also banned, with only a narrow exception for medical or safety reasons. If any of your current AI tools touch these areas, they need to be reviewed immediately.
The High-Risk Regime (From 2 August 2026)
The bulk of the AI Act's provisions take effect on 2 August 2026. This is when the rules for high-risk AI systems become enforceable – and it's where the most significant obligations for deployers sit.
Which of Your AI Systems Are "High-Risk"?
This is the first question every ops and finance leader should ask. The AI Act classifies certain AI use cases as "high-risk" based on their potential to affect people's fundamental rights, health, or safety. Here are the ones most likely to be sitting inside your organisation right now:
- Recruitment and CV-screening tools that advertise roles, filter applications, or evaluate candidates
- AI used in decisions about promotion, termination, task allocation, or monitoring and evaluating employee performance
- Credit scoring and creditworthiness assessment tools
- AI used in risk assessment and pricing for life and health insurance
If you're reading this list and thinking, "we definitely use some of those," you're not alone.
Most mid-sized companies are deploying at least one high-risk AI system without realising it carries specific regulatory obligations.
If you've been through GDPR compliance, the AI Act has a similar flavour: risk-based, documentation-heavy, and with extraterritorial reach. The difference is it regulates how AI systems make decisions, not just how data is stored. And where your AI systems process personal data, you'll likely need to consider both regimes in parallel – including whether a Data Protection Impact Assessment is required alongside your AI Act obligations.
Your Obligations as a Deployer
Once a system is classified as high-risk, deployers have a clear set of obligations. Here's what each one actually means in practice:
1. Follow the Instructions
This sounds obvious, but it's more involved than it appears. You must use high-risk AI systems in accordance with the provider's instructions for use. That means reading the documentation, understanding the system's intended purpose, and staying within it. If you buy a recruitment screening tool and repurpose it to assess employee performance, you may have just made yourself the "provider" in regulatory terms – with all the heavier obligations that come with it.
2. Assign Human Oversight
You need named individuals who have the competence, training, and authority to oversee how the AI system operates. This isn't a box-ticking exercise. These people need to understand what the system does, how to interpret its outputs, and when to intervene. Think of it like having a qualified pilot in the cockpit even when the autopilot is on.
3. Ensure Data Quality
If you control the input data, you're responsible for making sure it's relevant and representative. Feeding biased or incomplete data into a high-risk system doesn't just produce bad outputs – it creates regulatory exposure.
4. Keep Logs
You must retain the automatically generated logs from high-risk AI systems for at least six months, or longer if required by other applicable laws. These logs need to be accessible in case of an investigation.
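To make the six-month floor concrete, here's a rough Python sketch of the kind of check an ops team could run against the configured retention settings of each high-risk system. The system names and retention figures are placeholders, and 183 days is simply our shorthand for six months – adjust upwards wherever other law requires longer.

```python
from datetime import timedelta

# Minimum retention required by Article 26(6): automatically generated logs
# must be kept for at least six months, or longer if other applicable law says so.
MIN_RETENTION = timedelta(days=183)

# Hypothetical register of configured retention periods, e.g. pulled from
# vendor settings or your logging platform. Names and figures are illustrative.
configured_retention = {
    "recruitment-screening": timedelta(days=365),
    "credit-scoring-model": timedelta(days=90),
}

for system, retention in configured_retention.items():
    if retention < MIN_RETENTION:
        print(f"{system}: retention of {retention.days} days is below the six-month minimum – fix before relying on this system")
    else:
        print(f"{system}: retention of {retention.days} days meets the minimum")
```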
5. Inform People Affected
When a high-risk AI system makes or supports decisions about individuals – whether that's a job applicant, a loan applicant, or a customer – those people generally have a right to know that AI was involved. Transparency isn't optional.
6. Conduct a Fundamental Rights Impact Assessment
For deployers using high-risk AI in areas like employment, credit, or access to essential services, Article 27 of the Act requires a fundamental rights impact assessment before putting the system into use. This means identifying the specific risks your AI system poses to the rights of the people it affects, documenting how you'll mitigate those risks, and describing how human oversight will work in practice. Think of it as a structured way of asking: "Who could this system harm, and what are we doing about it?"
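If it helps to picture the output, here's an illustrative sketch of the structured record such an assessment might boil down to. The field names and example entries are our own shorthand, not terms defined in the Act – adapt them to whatever template you actually use.

```python
from dataclasses import dataclass

# An illustrative structure for the core elements of an Article 27 assessment.
@dataclass
class FundamentalRightsImpactAssessment:
    system_name: str
    intended_use: str
    affected_groups: list[str]      # e.g. job applicants, loan applicants
    identified_risks: list[str]     # specific harms to people's rights, not generic ones
    mitigation_measures: list[str]  # what you will actually do about each risk
    human_oversight: str            # who can intervene, when, and with what authority
    review_date: str                # when the assessment will be revisited

# Example entries are placeholders for illustration only.
fria = FundamentalRightsImpactAssessment(
    system_name="recruitment-screening",
    intended_use="Shortlisting applicants for interview",
    affected_groups=["job applicants"],
    identified_risks=["indirect discrimination against older applicants"],
    mitigation_measures=["quarterly bias review", "human review of all rejections"],
    human_oversight="HR lead reviews every automated shortlist before it is acted on",
    review_date="2026-08-01",
)
print(fria.system_name, "- risks identified:", len(fria.identified_risks))
```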
7. Cooperate with Authorities
If a national authority comes knocking with questions about your AI systems, you're required to cooperate and provide information on request.
The "Monday Morning" Plan: Where to Start
Here's the good news: you don't need to hire a compliance army to get moving. You need to take a structured, proportionate approach. We'd recommend five steps:
Step 1: Map Your AI Systems
Conduct a thorough inventory of every AI system your organisation uses, develops, or distributes. This includes the obvious ones (your recruitment platform, your forecasting tools) and the less obvious ones (the chatbot your marketing team signed up for, the AI features embedded in your CRM). You can't manage what you can't see.
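To show what "an inventory" can look like in practice, here's a minimal sketch – essentially a spreadsheet expressed in Python. Every system, vendor, and owner shown is a placeholder; the point is the fields you capture, not the entries.

```python
# A minimal, illustrative AI system inventory. One record per system,
# whether it's a standalone tool or a feature embedded in a larger platform.
ai_inventory = [
    {
        "system": "CV screening module in recruitment platform",
        "vendor": "ExampleHR Ltd",           # hypothetical vendor
        "business_owner": "Head of People",
        "teams_using": ["HR"],
        "purpose": "Shortlisting applicants for interview",
        "processes_personal_data": True,
        "risk_classification": "TBC",        # filled in at Step 2
    },
    {
        "system": "Chatbot on company website",
        "vendor": "ExampleChat Inc",         # hypothetical vendor
        "business_owner": "Head of Marketing",
        "teams_using": ["Marketing", "Customer Service"],
        "purpose": "First-line customer queries",
        "processes_personal_data": True,
        "risk_classification": "TBC",
    },
]

# Quick visibility check: which systems still need classifying?
for entry in ai_inventory:
    print(f"{entry['system']} (owner: {entry['business_owner']}) – classification: {entry['risk_classification']}")
```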
Step 2: Classify the Risk
For each system, determine whether it falls into the high-risk category by checking it against the AI Act's Annex III use cases. If you're unsure, err on the side of caution – or get expert advice. The cost of incorrect classification is significantly higher than the cost of a proper assessment.
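As a first-pass screen (not a legal determination), something as simple as the sketch below can help you triage which systems need a closer look. The keyword lists are our own rough simplification of the Annex III areas most relevant to mid-sized companies; anything borderline should go to expert review.

```python
# Rough triage only: flags systems whose plain-language description touches
# Annex III areas commonly found in mid-sized companies. Not a classification.
ANNEX_III_FLAGS = {
    "employment": ["recruitment", "cv screening", "promotion", "termination", "performance"],
    "credit and insurance": ["credit scoring", "creditworthiness", "insurance pricing"],
    "education": ["admission", "exam scoring", "student assessment"],
}

def screen_for_high_risk(use_description: str) -> list[str]:
    """Return the Annex III areas a plain-language use description appears to touch."""
    text = use_description.lower()
    return [area for area, keywords in ANNEX_III_FLAGS.items()
            if any(keyword in text for keyword in keywords)]

hits = screen_for_high_risk("CV screening and shortlisting of job applicants")
print("Potential high-risk areas:", hits or "none flagged – still document the assessment")
```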
Step 3: Close the AI Literacy Gap
This obligation is already live. Assess which teams are using AI systems, what training they've received, and where the gaps are. Implement a layered training programme: baseline AI awareness for everyone, and role-specific training for teams operating high-risk systems. Document everything.
Step 4: Build Your Governance Framework
For high-risk systems, you need documented processes for human oversight, data quality management, log retention, incident reporting, transparency, and fundamental rights impact assessments. This doesn't have to be a 200-page policy manual. It needs to be practical, proportionate, and actually used by the people doing the work. Think of it as quality control for AI – not bureaucracy for bureaucracy's sake.
Step 5: Review Your Vendor Contracts
Your AI providers have their own obligations under the Act. Make sure your contracts clearly define who is responsible for what. Pay particular attention to what happens if you modify a system or use it for a different purpose than intended – this can shift you from deployer to provider, with significantly heavier obligations.
The Bigger Picture
There's a temptation to treat the EU AI Act as a compliance headache – another regulation to manage, another cost to absorb. We'd encourage you to look at it differently.
The companies that treat this as an opportunity to get serious about how they use AI – to build proper oversight, train their teams, and understand what's actually happening in their operations – will be in a stronger position. Not just legally, but competitively.
Because the real risk isn't a fine. It's deploying AI systems that your people don't understand, your customers don't trust, and your board can't explain. The EU AI Act is essentially asking you to fix that. And honestly, you should want to fix that regardless of the regulation.
AI transformation is 70% people and process, 30% technology. The EU AI Act is just making sure organisations remember the first 70%.
Disclaimer: This article provides general information about the EU AI Act and is not legal advice. Organisations should seek professional legal guidance for their specific circumstances. Regulatory timelines referenced are accurate as of February 2026.