EU AI Act · Article 4 · AI Literacy

Are you EU AI Act compliant?

Enforcement Starts August 2026

Article 4 has been in force since February 2025. Enforcement starts in August 2026 and the fines are significant. Here’s what HR leaders need to know, and how ThoughtFox can help you get there.

€35M · Maximum penalty under the EU AI Act
Aug ’26 · Article 4 enforcement begins
71% · Workers using unapproved AI tools*
40% · Of enterprises will face Shadow AI incidents by 2030*
EU AI Act · Article 4

The clock is ticking on AI Literacy compliance

Article 4 of the EU AI Act — the AI Literacy obligation — has been in force since 2 February 2025. That means the legal duty to ensure your staff have a sufficient level of AI literacy already applies to your organisation today.

But the harder deadline is coming fast. National market surveillance authorities begin supervising and enforcing Article 4 from 2 August 2026 — just months away.

Article 4 does not carry a standalone fine for simply not having a training programme in place. What it does — and this is the part that should concern every HR leader — is act as a significant aggravating factor when something goes wrong. When an AI-related incident occurs and regulators investigate, the first question they will ask is whether your people were properly trained to use that system safely. If the answer is no, your exposure under Article 99 increases substantially.

€35M / 7% of global annual turnover, whichever is higher · Prohibited AI practices (e.g. unlawful biometric surveillance, social scoring)
€15M / 3% of global annual turnover, whichever is higher · High-risk AI system failures (the tier most relevant to employers)

The realistic scenario for HR leaders is this: an AI tool your organisation uses causes a discriminatory outcome, a data exposure, or a flawed decision affecting an employee or customer. Regulators open an investigation. They find your staff lacked the AI literacy to identify, question or prevent it. That absence of training doesn’t just fail Article 4 — it removes your ability to argue reasonable precaution and pushes you firmly into the higher penalty bands.

AI literacy isn’t the fine. It’s your defence.

This sits squarely with HR and People leaders. If your organisation develops or deploys AI systems — and most do — you are responsible for ensuring your workforce can use them safely, responsibly, and with appropriate oversight.

EU Definition

What is AI Literacy?

Article 4 of the EU AI Act is where the obligation to ensure AI literacy sits. The formal definition of AI Literacy lives in Article 3(56) — the Act’s definitions article. Together, they establish both what AI Literacy means and who is legally required to ensure it. It’s not about everyone becoming an AI engineer. It’s about giving people enough understanding to engage with AI systems safely and responsibly in their role.

“Skills, knowledge and understanding that allow providers, deployers and affected persons to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI.”

EU AI Act, Article 3(56) · Obligation under Article 4

The European Commission is clear that there is no one-size-fits-all approach. Literacy must be tailored — to the individual’s role, their technical background, the specific AI systems they interact with, and the level of risk those systems carry. That tailored, role-based approach is exactly what ThoughtFox delivers.

The ThoughtFox Approach

Our definition of AI Literacy

“AI Literacy is the role-appropriate capacity to work with AI systems safely, critically and accountably — understanding enough about how AI works to use it well, judge its outputs honestly and take responsibility for decisions made with its assistance.”

ThoughtFox
  • Boards can interrogate an AI strategy
  • Managers can identify risk
  • Frontline teams can use AI tools productively

All without exposing your organisation to data, security or reputational harm. And everyone, at every level, understands where human oversight matters most.

Practical competence, not compliance checkbox ticking. From boards to frontline teams, we build real capability — the kind that changes how your organisation thinks about and works with AI, not just a course completion certificate.

Tiered AI Literacy

One size does not fit all

What AI Literacy looks like for your CEO is fundamentally different from what it looks like for your payroll team. Our tiered approach delivers the right depth, in the right language, for every level of your organisation.

1
Tier 1

Board & C-Suite — AI Fluency

Strategic oversight, governance accountability and authentic AI leadership. Board members and executives gain the fluency to ask the right questions, challenge AI proposals, champion responsible adoption and represent AI governance credibly to stakeholders and regulators.

2
Tier 2

HR, Legal, Compliance & Senior Leaders

Deep understanding of the EU AI Act, risk classification, policy development and workforce obligations. These leaders need to translate regulation into practice — building the frameworks, policies and training programmes that keep your organisation protected and compliant.

3
Tier 3

Managers & Team Leaders

Role-specific literacy focused on AI decision-making, responsible adoption and change management. Managers understand how AI tools affect their teams, how to identify and escalate risk, and how to support their people through AI-enabled change — not just implement tools.

4
Tier 4

Knowledge Workers, Administrators & Frontline Teams

Practical AI literacy for day-to-day work. People at every level gain confidence using AI tools productively, understand what not to share with AI systems, know when human judgement must override AI outputs, and can identify when something doesn’t look right.

Understanding the difference

AI Literacy vs AI Fluency — and why both matter

These terms are often used interchangeably — but they’re not the same thing, and the difference matters for how you structure your programme.

AI Literacy

The foundational layer. Everyone in your organisation needs it. AI Literacy means understanding what AI is, how it works in broad terms, what risks it carries, and how to use it safely within your organisation’s policies. It’s the baseline for EU AI Act compliance — and the starting point for everyone, from the CEO to the newest recruit.

AI Fluency

The advanced layer. For leaders, decision-makers and those who direct AI strategy or deployment. AI Fluency means being able to evaluate AI systems critically, ask probing questions about outputs, governance and bias, champion responsible AI adoption and operate confidently in a world where AI is embedded in every business process. This is what your board and C-Suite need.

ThoughtFox programmes are designed to take your people from Literacy to Fluency — at the pace and depth appropriate to their role. We don’t create dependency on us; we build genuine internal capability that stays with your organisation long after we’ve finished.

Quick Assessment

Where does your organisation stand?

Answer three questions honestly. There are no right or wrong answers — only useful ones. Use the notes boxes to capture anything specific, and we’ll use your responses to personalise the guide we send you.

Question 1 of 3

Does your organisation have an AI Policy?

A formal policy that covers acceptable AI tool use, data handling, risk assessment and employee responsibilities.

Question 2 of 3

Do you know what AI your people are actually using — right now?

71% · of workers have used unapproved AI tools at work · Microsoft Work Trend Index
34% · of all data employees put into AI tools is classified as sensitive · Netskope Cloud & Threat Report 2026
€3.9M · average cost of a Shadow AI data breach · IBM Cost of a Data Breach Report 2025
40% · of enterprises will face Shadow AI security incidents by 2030 · Gartner

Shadow AI is the fastest-growing data security risk in organisations today. Your employees aren’t acting maliciously — they’re being productive. But without the right literacy and governance, they’re making decisions about your data that your organisation hasn’t sanctioned, and whose consequences it may not survive.

Question 3 of 3

Have you implemented any AI training for your people?

Any structured programme — formal or informal — that helps your employees understand AI, use it responsibly, or develop their AI capability.

Get your personalised guide

Tell us about you

Based on your answers above, we’ll email you your free guide — How To Guide: Building AI Literacy in Your Organisation — practical, no-fluff, and matched to where you are right now.

We'll email you your free guide. No spam, no sales calls unless you ask.

Data Sources

71% of workers using unapproved AI tools: Microsoft Work Trend Index (2024)
34% of data input to AI tools is sensitive: Netskope Cloud & Threat Report 2026
€3.9M average Shadow AI breach cost: IBM Cost of a Data Breach Report 2025 (Ponemon Institute; $4.63M USD converted at April 2026 exchange rate)
40% of enterprises facing Shadow AI incidents by 2030: Gartner
EU AI Act Article 4 & Article 3(56): Official EU AI Act text, artificialintelligenceact.eu
EU AI Act penalty tiers (Article 99): €35M/7% prohibited practices; €15M/3% high-risk AI failures