AI Makes You Faster. But Are You Solving the Right Problems?

[Figure: a figure at a crossroads, representing the choice between AI-assisted speed and building genuine judgment]

A study published in February 2026 by researchers at Anthropic set out to answer a question most organisations are too busy to ask: when workers use AI to complete tasks that require new skills, what happens to the skills?

The results were uncomfortable.

In a randomised controlled experiment, 52 professional developers were split into two groups — one with access to an AI coding assistant, one without. Both groups were given the same unfamiliar library to learn, the same tasks to complete, and the same time limit. Afterwards, both groups took the same knowledge test.

The AI group completed tasks marginally faster, but not significantly so. What was significant was the quiz score: the AI group scored 17% lower, a gap of two full grade points. The largest gap was in debugging, the ability to look at code and understand what is wrong with it.

In other words: the AI group got things done without understanding what they had done.

The productivity trap

This is the productivity trap of AI adoption. The output looks right. The deadline is met. The cost is invisible — until the moment you need to supervise, audit, or fix what the AI produced. At that point, the judgment that should have been forming quietly on the job simply isn't there.

The researchers identified six distinct patterns in how participants used AI. Three patterns led to low scores (24%–39% on the quiz). Three led to high scores (65%–86%). The difference was not about which AI tool was used, or how much time was spent on the task. The difference was cognitive engagement.

| Pattern | Quiz score | What they did |
| --- | --- | --- |
| Generation-Then-Comprehension | 86% | Generated code, then asked AI to explain it |
| Hybrid Code-Explanation | 68% | Asked for code and explanation together |
| Conceptual Inquiry | 65% | Asked only conceptual questions; resolved errors independently |
| AI Delegation | 39% | Had AI write everything, pasted it in |
| Progressive AI Reliance | 35% | Started independently, then fully delegated |
| Iterative AI Debugging | 24% | Used AI to check every error repeatedly |

Low scorers delegated. They asked AI to write code, pasted the output, and moved on.

High scorers engaged. They asked AI to explain what it had generated. They asked conceptual questions. When they hit errors, they worked through them — using AI as a sounding board, not a shortcut.

Same tool. Vastly different outcomes.
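To make the distinction concrete, here is a minimal sketch of the two interaction styles as prompts. The function names and prompt wording are illustrative assumptions for this article, not the study's actual materials or any particular AI tool's API:

```python
# Illustrative sketch of low- vs high-engagement prompting styles.
# The prompt wording below is a hypothetical example, not taken from the study.

def delegation_prompt(task: str) -> str:
    """Low-scoring pattern (AI Delegation): ask for finished code only."""
    return f"Write the complete code for this task: {task}"

def comprehension_prompt(task: str, generated_code: str) -> str:
    """High-scoring pattern (Generation-Then-Comprehension):
    after code is generated, ask the AI to explain what it produced."""
    return (
        f"You generated this code for the task '{task}':\n"
        f"{generated_code}\n"
        "Explain, line by line, why it works and what each call does."
    )

task = "parse a log file with the new library"
print(delegation_prompt(task))
print(comprehension_prompt(task, "records = lib.parse(path)"))
```

The difference is not the tool or even the task; it is whether the second, comprehension-seeking prompt ever gets sent.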

Which problems are you solving?

Here is the question the research points toward but does not quite ask: if AI lowers the technical barrier to solving problems, who decides which problems to solve?

Speed is only an advantage if you are moving in the right direction. A team that can implement faster but cannot evaluate whether the implementation is correct — or whether it addresses the actual underlying problem — has not gained capability. It has traded judgment for throughput.

This is where technical expertise retains its value, and why organisations that treat AI adoption as a purely operational decision are likely to underestimate the risk. The skills that erode fastest under passive AI use — debugging, conceptual understanding, critical reading of output — are precisely the skills required to supervise AI, catch errors, and ask the right questions of the system producing the answers.

A workforce that cannot debug cannot govern.

What happens with more advanced models?

The study used GPT-4o. A reasonable question is whether the findings hold as models improve — and the answer is split in a way that should concern business leaders.

The speed finding is likely model-dependent. One reason the AI group in the study showed no significant time advantage was the overhead of query composition — some participants spent up to 11 minutes just formulating what to ask. More capable models reduce that friction considerably. They understand intent better, require less precise prompting, and often anticipate the follow-up. With today's most advanced models, a real speed gain is probable — possibly a meaningful one.

The skill erosion finding, however, almost certainly gets worse as models improve — not better.

The more reliably correct the output, the easier it becomes to accept it without engaging. A model that never makes obvious errors removes the natural friction that forces understanding. Debugging — which was already the largest skill gap in the study — is exactly the skill that atrophies fastest when errors are rare and output is convincing. The gap between engaged and disengaged users is likely to widen, not narrow, as model quality increases.

With today's tools, the picture is probably this: real speed gain, larger skill erosion. The productivity headline looks better. The hidden cost is higher.

This does not make advanced AI tools less valuable. It makes the case for intentional adoption strategy stronger. The upside of capable AI is only sustainable if the people using it retain the judgment to supervise what it produces.

What this means for your organisation

The research suggests two distinct levers, one for individuals and one for leadership.

For individuals acquiring new skills in AI-augmented workflows: the mode of interaction matters as much as the tool. Asking AI to explain, not just produce. Treating errors as information rather than problems to be immediately resolved. Maintaining cognitive engagement even when delegation is available. These are learnable habits — but they are not the default.

For leadership designing AI adoption strategy: the question is not whether your teams are using AI. They are, or they will be. The question is whether adoption is structured to preserve the judgment your organisation depends on — or whether it is quietly eroding the expertise that makes your team capable of knowing a good answer from a bad one.

AI adoption without intentional skill development is not a neutral decision. It is a slow trade of depth for speed.

Source: Judy Hanwen Shen and Alex Tamkin, How AI Impacts Skill Formation, arXiv:2601.20245v2, February 2026. Shen conducted this work as part of the Anthropic Fellows Program.

Ready to build AI capability, not just efficiency?

Transformer Labs works with organisations to design AI adoption strategies that preserve judgment and develop the skills that matter. Let's talk.
