You're staring at 200 lines of code that AI just generated for you. It looks correct. The tests pass. But you have no idea what half of this code actually does.
Welcome to the developer identity crisis of 2026.
Something strange is happening in software engineering. The community is splitting into two camps. On one side, developers are ditching AI tools entirely, going back to writing every line by hand. On the other side, programmers are doubling down on AI, treating code as an "implementation detail" they rarely touch anymore.
This isn't just personal preference. Research published in January 2026 by Anthropic reveals the uncomfortable truth: how you use AI determines whether you're getting better or worse at programming.
The Return to Manual Coding
Let's start with the rebels: developers who tried AI coding assistants, then turned them off.
They're not technophobes. They just noticed something was wrong.
The Comprehension Debt Trap
You've heard of technical debt. But AI introduces a new kind: comprehension debt.
Comprehension debt is what builds up when code gets written without you actually thinking it through. AI writes it in 60 seconds. You glance at it, see it works, and move on. Then two weeks later, there's a bug. Now you owe interest on that debt, except the interest is paid in confusion.
You're debugging code you never understood in the first place.
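Here's the trap in miniature, a deliberately simple hypothetical sketch (the function and its test are invented for illustration):

```python
# Hypothetical AI-generated helper: the test is green, so it ships.
def chunk(items: list, size: int) -> list:
    """Split items into fixed-size chunks."""
    return [items[i:i + size] for i in range(0, len(items), size)]

assert chunk([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4], [5]]  # works, move on

# Two weeks later a caller passes size=0 and range() raises ValueError.
# Nobody thought that case through, because nobody actually wrote the code.
```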
GitClear analyzed 211 million lines of code written between 2020 and 2024. The results are alarming.
Code churn nearly doubled. Code that gets revised within two weeks jumped from 3.1% (2020) to 5.7% (2024). That's code written so poorly it had to be rewritten almost immediately.
Code reuse collapsed. Refactored code dropped from 24.1% to 9.5%. Copy/pasted code shot up from 8.3% to 12.3%. 2024 was the first year copy/pasted code exceeded refactored code.
Duplicate code exploded. Duplicated code blocks increased eightfold in 2024. Research shows 57.1% of bugs involving duplicated code happen because someone forgot to update all the copies.
Why? As GitClear put it: "AI assistants tend to give suggestions for added code, but never suggestions for updating, moving, or deleting code."
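You can see the failure mode in miniature. In this hypothetical sketch, a pasted copy of a validation rule goes stale when only one copy gets updated:

```python
# checkout.py -- updated when the business raised the minimum order
def is_valid_order(total: float) -> bool:
    return total >= 10.00  # minimum raised from 5.00 last sprint

# invoicing.py -- pasted months ago, never touched since
def is_valid_invoice(total: float) -> bool:
    return total >= 5.00   # stale copy: now silently disagrees with checkout
```

Merging both copies into one shared function is exactly the kind of updating, moving, and deleting work that, per GitClear, AI assistants rarely suggest.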
The 9-to-1 Curse
Developer Steve Jones coined a term for this: the 9-to-1 Curse. AI saves you 1 unit of time writing code but costs you 9 units reviewing, debugging, and maintaining it later.
You feel productive shipping features. But three months later, drowning in bug reports and technical debt, you realize the trade wasn't worth it.
There's also a psychological cost. AI kills flow state. Every two minutes you're context switching between writing prompts, reviewing output, tweaking, testing. You're not creating anymore. You're managing a conversation with a bot.
Some developers describe it as going from "creator" to "homework checker."
The AI-First Architects
But some developers went the opposite direction. They restructured their entire workflow around AI.
These developers aren't trying to write code anymore. They're trying to orchestrate it.
Spec-Driven Development
Traditional workflow: Think → Code → Test → Debug → Ship
New workflow: Think → Spec → AI Generates Code → AI Writes Tests → Review → Ship
This is Spec-Driven Development. You write a detailed specification: what the function should do, the edge cases, error handling, performance requirements. Then you hand that spec to Claude or GPT. The AI generates the code and writes the tests. You review both and ship.
If something's wrong, you don't debug the code. You fix the spec and regenerate everything.
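What does a spec look like? A minimal sketch, assuming a plain-text format (real spec-driven tooling and templates vary; the function is invented for illustration):

```python
# A spec you might hand to Claude or GPT -- the spec, not the code,
# is the artifact you own and revise.
SPEC = """
Function: slugify(title: str) -> str
Behavior:
  - Lowercase the input; collapse runs of non-alphanumerics into single hyphens.
  - Strip leading and trailing hyphens.
Edge cases:
  - Empty or all-punctuation input returns "".
  - Unicode letters are preserved (no transliteration).
Errors:
  - Raise TypeError if title is not a str.
Performance:
  - O(n) in the length of title; standard library only.
"""
# The AI generates slugify() and its tests from this spec. If the output
# is wrong, you edit SPEC and regenerate instead of patching the code.
```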
Programming Languages as Walls
Developers in this camp argue programming languages are just friction. You have an idea, but to make it real, you have to translate it into Python or JavaScript. Remember syntax. Handle boilerplate. Write the same patterns over and over.
AI tears down that wall.
Stack Overflow's 2025 survey shows 84% of developers are now using or planning to use AI. The data shows AI excels at specific tasks: boilerplate code, unit tests, API documentation, scaffolding new projects. Why not let AI handle that and spend your brain on system design?
This is the "10x engineer" redefined, someone who ships 10x more by leveraging AI effectively.
This works great for prototyping, repetitive work, and exploring new frameworks. It works less well for complex novel problems, performance-critical code, and security-sensitive code.
What Science Actually Says
So who's right?
In January 2026, Anthropic published a study with actual data.
The Experiment
Researchers recruited 52 software engineers. All used Python regularly. None knew Trio, a Python library for asynchronous programming.
Split into two groups:
- Control: Learn Trio manually, no AI
- AI group: Learn with AI assistance available
After completing coding tasks, everyone took a quiz with no AI allowed. The quiz tested debugging, code reading, code writing, and conceptual understanding.
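For a sense of what participants had to learn, here's a minimal Trio program (illustrative only; the study's actual tasks aren't reproduced here):

```python
import trio

async def greet(name: str) -> None:
    await trio.sleep(1)  # suspends this task; other tasks keep running
    print(f"hello, {name}")

async def main() -> None:
    # A nursery scopes concurrency: the async-with block only exits
    # once every task started inside it has finished.
    async with trio.open_nursery() as nursery:
        nursery.start_soon(greet, "alice")
        nursery.start_soon(greet, "bob")

trio.run(main)
```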
The Results
The AI group scored 17% lower on the quiz, nearly two letter grades worse. The control group averaged above 60%, reaching 67%; the AI group averaged below 60%, with some participants dropping to 45%.
And the kicker: the AI group wasn't meaningfully faster. Maybe 2 minutes quicker, a difference that wasn't statistically significant.
They learned less and didn't even save time.
Why This Happens
When you solve a problem yourself, your brain builds neural pathways. You understand why the solution works.
When AI solves it for you, those pathways don't form. You get the answer, but not the understanding.
Think of it like an exam. Closed-book (no AI) vs. open-book (with AI). When the test is over, the closed-book group remembers more.
The study validates what manual purists feared: blind delegation destroys skill acquisition, especially for junior developers.
But Some AI Users Didn't Fail
Not everyone in the AI group scored poorly. The researchers identified six interaction patterns. Among them:
Low scorers (<40%):
- AI Delegation: Asked AI to write all code, never tried themselves
- Progressive Reliance: Started manual but increasingly offloaded thinking to AI
High scorers (>60%):
- Generate then Ask: Generated code, then asked follow-up questions to understand it
- Hybrid Queries: Asked for both code and explanations
- Conceptual Questions: Used AI like a tutor, asking "Why does this work?" rather than "Write this for me."
Low scorers used AI as a generator. High scorers used AI as a tutor.
The Middle Path: Active Learning
You can use AI without losing skills. But you have to be intentional:
- Try solving it yourself first. That struggle is where learning happens.
- Use AI for research, not replacement. Ask "How does this work?" not "Write this for me."
- Review generated code line by line. Read it. Understand it. Modify it yourself (see the sketch below).
- Ask AI to explain. "Why did you write it this way?" is more valuable than "Write this."
- Maintain ownership. You should be able to explain every line in your codebase.
This is active learning, using AI to accelerate your understanding, not replace it.
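Here's what that line-by-line review can look like. The helper below is a stand-in for AI output (hypothetical; the function and the follow-up question are invented):

```python
from collections import Counter

# Stand-in for an AI-generated helper, annotated during review.
def top_words(text: str, n: int = 5) -> list[tuple[str, int]]:
    """Return the n most common words in text, lowercased."""
    words = text.lower().split()   # naive tokenization: whitespace only
    counts = Counter(words)        # Counter does the tallying
    return counts.most_common(n)   # sorted by count, ties in insertion order

print(top_words("the cat and the hat and the bat"))
# [('the', 3), ('and', 2), ('cat', 1), ('hat', 1), ('bat', 1)]

# Reviewing surfaced a question worth sending back to the AI:
# "Why whitespace splitting? What happens to punctuation like 'cat,'?"
```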
What This Means in Practice
Junior Developers
The Anthropic study shows that relying on AI during skill acquisition is harmful. You're building your foundation. If AI does the thinking, you won't develop debugging and comprehension skills.
Better approach: Code manually for 6-12 months. Use AI for explanations, not generation. Build muscle memory. Once you have a foundation, then leverage AI more.
Senior Developers
You already have deep understanding. You can use AI more freely because you spot when it's wrong.
This is where Spec-Driven Development makes sense. You know what good code looks like. You can write detailed specs and review AI output effectively.
But watch for skill decay. If you stop coding in a language, you get rusty.
Managing Teams
Code review becomes critical. If devs use AI heavily, reviews need to verify understanding, not just correctness.
Consider different guidelines for juniors vs. seniors. Maybe juniors have limited AI until they demonstrate mastery.
Google's DORA research (2025) describes AI as an amplifier. Adoption is widespread and productivity is up, but AI magnifies whatever system it enters. Strong teams see gains in throughput and performance; weaker ones risk amplifying instability.
The Real Answer
Both camps have valid points.
Manual purists are right: Blind AI delegation causes comprehension debt, reduces code quality, and destroys skill formation.
AI architects are right: For experienced developers on the right problems, AI dramatically increases output.
The Anthropic study reveals the truth: how you use AI determines your outcome.
Use AI as a generator, something that does your thinking for you, and you'll get faster short-term but weaker long-term. Comprehension debt compounds. Debugging skills atrophy. You become dependent on a tool that can't solve novel problems.
Use AI as a tutor, something that accelerates your thinking, and you learn faster while maintaining deep understanding. You explore, explain, verify. And you stay in the driver's seat.
What Actually Matters
The future isn't about speed. It's about deep understanding.
AI can't innovate. It pattern-matches. When you hit a truly novel problem (a weird edge case, a complex architecture, a production performance bottleneck), AI won't save you. Understanding will.
We're entering a world where writing code is easy. AI does it in seconds.
The hard thing (the valuable thing) is understanding what the code should do. Why it works. When it's wrong. How it fits the larger system.
That's the skill worth protecting.
The question isn't "Should I use AI?" It's "How do I use AI without losing the ability to think?"
The answer: Stay actively engaged. Keep thinking. Don't outsource your understanding.
Use the tool. Don't let the tool use you.

