
Are AI Copilots Making Us Worse Engineers?

8 min read
Sumeet Zankar

AI Solution Architect & Full-Stack Developer

The uncomfortable truth about cognitive offloading and what the research actually says


I've been using AI coding assistants daily for over a year now. Cursor, Copilot, Claude—they're all part of my workflow. And honestly? They're incredible. I ship faster, prototype quicker, and tackle unfamiliar codebases with more confidence.

But lately, I've been asking myself an uncomfortable question: Am I actually getting better as an engineer, or am I just getting better at using AI?

Turns out, I'm not the only one asking. And the research coming out in 2025 and 2026 has some sobering answers.


The Studies That Should Make You Pause

Anthropic's RCT: 17% Lower Comprehension

In early 2026, Anthropic published a randomized controlled trial with 52 software developers learning a new Python library (Trio, for async programming). The results were striking:

  • Developers using AI scored 17% lower on comprehension tests—equivalent to nearly two letter grades
  • The biggest gap was in debugging questions
  • AI didn't even significantly speed them up (only ~2 minutes faster on average)

The kicker? Participants spent up to 30% of their time just composing AI queries. The productivity gains were largely illusory.

METR Study: 19% Slower (Yes, Slower)

If Anthropic's study was surprising, the METR study from July 2025 was downright counterintuitive.

They recruited 16 experienced open-source developers—people with years of experience contributing to repos averaging 22,000+ stars and 1M+ lines of code. Real experts working on real issues from their actual projects.

The findings:

  • Developers using AI took 19% longer to complete tasks
  • They believed AI was speeding them up by 24%
  • Even after experiencing the slowdown, they still thought AI had helped

The researchers identified five contributing factors to the slowdown, but the core insight was clear: there's a massive perception gap between how helpful we think AI is and how helpful it actually is.

Stack Overflow 2025: Trust Is Cratering

The industry-wide picture isn't any rosier:

  • 84% of developers use or plan to use AI tools
  • But positive sentiment dropped from 70%+ to 60% year-over-year
  • Trust in AI accuracy fell from 40% to 33%
  • 66% of developers say they spend more time fixing “almost right” AI code
  • 46% actively distrust AI output (vs. 33% who trust it)

The #1 frustration? “AI solutions that are almost right, but not quite.”


The Three Patterns That Kill Learning

Anthropic's qualitative analysis identified six distinct patterns in how developers interact with AI. Three of them correlate strongly with poor learning outcomes:

1. Full AI Delegation

Just ask AI to write everything. Fastest completion time, lowest comprehension. These developers encountered almost no errors—which sounds good until you realize errors are how we learn.

2. Progressive AI Reliance

Start with a question or two, then gradually let AI take over. By the second task, these developers had completely delegated. Their quiz scores tanked on concepts from the second task.

3. Iterative AI Debugging

Copy error, paste to AI, get fix, repeat. Never actually read the error message. Never understand why something failed. This group was slow and learned nothing.


The Patterns That Preserve Learning

The good news? Not all AI use leads to skill erosion. Three patterns preserved learning even with heavy AI assistance:

1. Conceptual Inquiry

Only ask conceptual questions: “How does Trio handle cancellation?” “What's the difference between nurseries and channels?” Then code independently.

This was actually the fastest high-scoring pattern—faster than delegation in some cases.
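To make the pattern concrete: after asking a conceptual question like "How does Trio handle cancellation?", you'd write the code yourself. Here's a rough sketch of the underlying idea (scoped cancellation: a timeout cancels everything inside the scope rather than leaving tasks running), using the stdlib's asyncio rather than Trio itself, since Trio isn't in the standard library and the concepts are analogous:

```python
import asyncio

async def slow_worker() -> str:
    # Simulates a task that takes far longer than the caller will wait.
    await asyncio.sleep(10)
    return "finished"

async def main() -> str:
    try:
        # Scoped cancellation: when the timeout fires, the awaited task
        # is cancelled, not left running in the background.
        return await asyncio.wait_for(slow_worker(), timeout=0.01)
    except asyncio.TimeoutError:
        return "cancelled"

result = asyncio.run(main())
print(result)
```

The point isn't the snippet itself—it's that you wrote and traced it after the AI explained the concept, instead of pasting generated code you never read.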

2. Generation-Then-Comprehension

Generate code with AI, then immediately ask: “Explain what this does line by line.” “Why did you use this pattern instead of X?”

Looks similar to delegation on the surface, but the follow-up questions make all the difference.

3. Hybrid Code-Explanation

Ask for code and explanations in the same prompt: “Write the handler and explain each decision.”

Takes longer, but understanding compounds.


The “Perpetual Junior” Problem

Here's what concerns me most: cognitive debt.

When you offload thinking to AI, you're taking out a loan against your future competence. It feels free now—you ship faster, you look productive. But you're not building the mental models that let you debug at 2 AM when production is down and the AI is hallucinating.

For junior developers, this is existential. The whole point of your first few years is to build foundational understanding. If you delegate that process to AI, you end up as what some are calling a “perpetual junior”—someone who can prompt their way to working code but can't explain why it works or fix it when it breaks.

The Anthropic researchers put it bluntly: “Cognitive effort—and even getting painfully stuck—is likely important for fostering mastery.”


The Illusion of Competence

The METR study revealed something disturbing: developers felt faster even when they were slower.

This isn't unique to AI. It's a well-documented cognitive bias. When something reduces friction, we perceive it as making us more productive—even if the actual output doesn't improve. AI coding assistants are exceptionally good at reducing the feeling of effort.

But feeling productive and being productive aren't the same thing.


So What Should We Actually Do?

I'm not suggesting we abandon AI tools. That's neither realistic nor desirable. But we need to be intentional about how we use them.

For Individual Developers

  1. Use AI for the boring stuff, not the learning stuff. Boilerplate, repetitive tasks, syntax you already understand—fair game. But when you're learning something new, consider going manual.
  2. Read before you paste. If you're debugging, actually read the error message. Form a hypothesis. Then, if you're still stuck, ask AI to validate it.
  3. Ask “why” not “what.” Shift from “write me a function that...” to “explain how I should approach...” The latter builds understanding.
  4. Time-box AI-free coding. Deliberately practice without AI assistance. It'll feel painful at first—that's the point.

For Engineering Managers

  1. Don't measure productivity by commit frequency. AI makes it trivially easy to ship lots of low-quality code quickly.
  2. Create space for learning. Junior developers need time to struggle. Schedule it.
  3. Code review AI-generated code more carefully. The developer may not understand what they're submitting.

For the Industry

  1. Build “learning modes” into AI tools. Anthropic mentioned Claude Code has a “Learning and Explanatory” mode. Use it. Demand more tools like it.
  2. Rethink interviews. If candidates learned with heavy AI assistance, traditional coding interviews may not reveal actual competence.
  3. Track skill development, not just output. We need metrics that capture whether engineers are growing, not just shipping.

The Meta-Skill: AI Supervision

Here's the paradox: as AI writes more code, humans become more important—not less.

Someone needs to catch when the AI is wrong. Someone needs to understand system design. Someone needs to debug the weird edge cases that AI training data didn't cover.

But developing those skills requires the very cognitive effort that AI makes it easy to skip.

The developers who will thrive aren't the ones who can prompt the best. They're the ones who maintained their fundamentals while also learning to leverage AI effectively. That's a harder path. It's also the only sustainable one.


Final Thought

Every technology shift creates this tension. Calculators didn't eliminate the need to understand math—but they did make it possible to coast through without really learning. GPS didn't eliminate the value of spatial reasoning—but many of us have atrophied that skill.

AI coding assistants are the same. Incredibly powerful tools that, used carelessly, will erode the very capabilities they're meant to augment.

The research is clear. The question is what we do about it.


What's your experience? Have you noticed changes in how you think about code since adopting AI tools? I'd love to hear your perspective—connect with me on LinkedIn.


