New research from Anthropic’s economists has mapped where AI is actually displacing work, not where it theoretically could. The findings are less dramatic than the headlines, and more consequential.
The headline numbers are striking enough on their own. 94% of knowledge work is theoretically AI-ready. AI is currently handling 33% of it. Most organisations read that gap as reassurance: proof that the disruption is still coming, not already here.
We read it as a countdown.
But the most important finding in the Anthropic report isn’t the headline gap. It’s a quieter number buried further in: new hires aged 22–25 into AI-exposed roles are down 14% since late 2022. Nobody is being laid off. Nobody is raising an alarm. The bottom of the pipeline has simply stopped being refilled.
That distinction matters enormously, and most leadership teams are missing it.
The Difference Between Disruption and Compression
When most people think about AI’s impact on the labour market, they imagine a dramatic event. A wave of redundancies. A restructuring announcement. Something visible enough to respond to.
What the Anthropic research describes is something different and, in many ways, harder to manage. It isn’t disruption. It’s compression.
Companies aren’t replacing their experienced people with AI. They’re simply not replacing the layer beneath them. Junior roles in AI-exposed occupations (the analyst positions, the entry-level coordinators, the graduate intake) are being quietly absorbed. Not eliminated. Just not refilled.
This matters because organisations don’t experience it as a crisis. There’s no moment of acute pain. The experienced team is still there. Output is still acceptable. The warning signals that would normally trigger a strategic response (rising unemployment, redundancy costs, public restructuring) never appear.
Instead, the structure quietly gets thinner. And it keeps thinning until one reorganisation, one leadership change, or one period of growth makes it suddenly, painfully visible.
Why the Traditional Response Won’t Work
The natural instinct when a talent gap appears is to hire into it. That’s how organisations have always solved pipeline problems: identify the shortage, recruit to fill it, build the bench.
That option is closing.
If the roles that used to feed your pipeline are the same roles that AI is quietly absorbing, hiring into them becomes harder to justify. The economics don’t support it, and increasingly the work doesn’t require it. You’re not filling a gap. You’re rebuilding a layer that the organisation has already started to route around.
The organisations that navigate this well won’t be the ones that try to rebuild the traditional pipeline. They’ll be the ones that build something different in its place: internal AI capability that lets their existing people do what the missing layer used to do, without burning out or lowering quality.
That’s a fundamentally different response. And it requires a fundamentally different kind of leadership.
The Leadership Problem Is the Pipeline Problem
Here’s the connection that most discussions of this research miss entirely.
The pipeline doesn’t thin because AI is replacing junior workers. It thins because organisations don’t have the internal capability to see what’s happening, respond to it clearly, and build the systems that replace what’s quietly disappearing.
That capability gap starts at the top.
Leaders who genuinely know how to use AI are the ones who can see where the gaps are forming. They’re the ones who can identify which workflows are candidates for augmentation, which roles are evolving rather than disappearing, and how to build the internal systems that keep the organisation functional as the bottom layer of the traditional ladder compresses.
Without that fluency at the top, the structural problem stays invisible until it’s too late to fix. Leaders end up reacting to a reorganisation that was years in the making, because nobody had the knowledge to see it coming, or the tools to do anything about it.
The pipeline problem and the leadership problem are the same problem, viewed from different angles.
What "Building Internal Capability" Actually Means
This phrase gets used a lot. It’s worth being specific about what it means in practice, and what it doesn’t.
It doesn’t mean buying AI licences. Most organisations already have them. Usage is scattered, adoption is shallow, and the investment is largely sitting idle.
It doesn’t mean running an AI strategy workshop. A room full of leaders nodding at slides about large language models is not capability. It’s awareness at best, and borrowed language at worst.
It doesn’t mean a training programme that teaches people about AI in the abstract. The workbook they’ll never open again. The framework they couldn’t apply to their actual work if they tried.
What it does mean is leaders who have genuinely used AI on real problems and understand what it can and can’t do. Teams that have built working systems: not prototypes, not pilots, not proofs of concept, but tools they use every week. An organisation where the knowledge of how to build the next system lives inside the people, not locked inside a consultancy engagement.
That last point is critical. The organisations that come out of this period with a structural advantage won’t be the ones with the best vendor contracts or the most sophisticated AI strategy. They’ll be the ones whose people know how to build. Because that knowledge compounds. Every system built makes the next one faster. Every leader who genuinely understands AI makes the organisation more capable of seeing what comes next.
Why It Has to Start at the Top
There’s a temptation to treat AI capability building as something you do with the teams who do the work: the analysts, the coordinators, the operations layer. Let them get on with it. Leadership will catch up later.
This gets the sequence exactly wrong.
Your teams are already using AI. Many of them are already experimenting with workflows, automating tasks, finding efficiencies you haven’t sanctioned and don’t know about. They’re watching how you talk about it. They’re drawing conclusions about whether leadership understands what’s happening or is simply nodding along.
The gap between knowing AI matters and genuinely leading it is one of the most visible credibility gaps in organisations right now. Leaders who speak about AI from genuine personal experience, who have built something themselves, who understand the constraints as well as the possibilities, lead differently. They ask better questions. They make better investment decisions. They recognise the difference between a working system and a demonstration dressed up as one.
And critically: they can see the pipeline thinning before it becomes a crisis, because they understand the shape of the problem.
One Afternoon. Lasting Capability.
This is the problem that Leading with AI is designed to solve.
Not a lecture. Not a demo. A focused half-day where senior leadership teams use AI on their actual work: the real challenges in their function, in the tools they already have access to.
Every leader in the room leaves with a working AI tool they built themselves. Tailored to their function. Ready to use Monday morning. And, crucially, with the understanding of how to build the next one, without needing external help to do it.
That’s the distinction that matters. The goal isn’t a workshop that produces one useful tool. It’s a workshop that produces leaders who can keep building, who have closed the gap between knowing AI matters and genuinely leading it, and who can start to see the organisational landscape clearly enough to respond to what the Anthropic research is describing.
The pipeline is thinning. The organisations that build internal capability now, before the compression becomes visible, are the ones that will have a structural advantage that’s genuinely hard to close.
The question isn’t whether to start. It’s whether to start before or after the reorganisation makes it obvious.
Ready to build genuine AI capability in your leadership team? Leading with AI is a half-day workshop for senior leadership teams. Fixed price. No commitment beyond the session.
Source: Massenkoff, M. & McCrory, P. (2026). "Labor market impacts of AI: A new measure and early evidence." Anthropic.