I have been writing code for over a decade.
Last week, I wrote maybe 10 lines myself, and I shipped more than I usually do.
That is not a flex; you are probably doing the same thing. But it took me a while to get comfortable with it, because writing was always the part that felt like real work to me.
Something has shifted though, and I think a lot of experienced developers are feeling it without quite knowing what to do with it.
The Role Has Changed
There is a version of this conversation that goes: "AI writes code now, so developers are done." That is not what I am seeing.
What I am actually seeing is that the work itself has moved.
For a long time, writing software was the hard part. That is why you spent years getting good at your language, your stack, your mental models. Execution was where most of the effort went. That is no longer true.
And when execution stops being the constraint, everything that used to wait behind it becomes visible. Understanding the actual problem. Defining what the system should look like before a line gets written. Knowing which shortcuts will cost you six months from now. That stuff does not get easier just because the code writes itself. If anything, it matters more.
The job did not disappear. It just sits at a different point in the process now.
My days look different because of it. Less time writing, more time managing what AI produces. It feels less like craftsmanship and more like technical direction. And the catch with that is: a fast, capable system that makes mistakes in subtle ways requires more attention, not less. You still have to know enough to catch what it gets wrong.
What I Actually Do Now
Before I touch any AI tool, I think through what I am building. Not at the line level, but at the system level.
- What are the components?
- How does data move between them?
- What are the constraints?
- What should this thing never do?
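To make that concrete, here is one way those answers could be written down before any prompting happens. This is a purely hypothetical sketch for an imagined CSV-export feature; every name in it (`COMPONENTS`, `NEVER`, the export service itself) is invented for illustration, not taken from a real project:

```python
# Hypothetical design note for an imagined CSV-export feature.
# The point is to capture the pre-AI thinking as something explicit
# that later review steps can be checked against.

COMPONENTS = ["api_endpoint", "export_worker", "object_storage"]

# How data moves: request -> worker -> storage -> signed URL back to caller.
DATA_FLOW = [
    ("api_endpoint", "export_worker"),
    ("export_worker", "object_storage"),
]

CONSTRAINTS = {
    "max_export_rows": 100_000,   # hard cap, agreed with the business
    "sync_timeout_seconds": 30,   # anything slower must go async
}

# Things the system must never do, phrased so a reviewer (human or AI)
# can check generated code against them.
NEVER = [
    "write customer data outside object_storage",
    "block the request thread while an export runs",
]

def violates(description: str) -> bool:
    """Does a described behavior match one of the prohibitions above?"""
    return any(rule in description for rule in NEVER)
```

The exact format does not matter; what matters is that the answers exist outside your head before the AI starts producing code.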
This is the part that AI cannot do for you. Not because the models are not capable, but because the answers live in your head and in the business context around the project.
Then I ask Claude to generate an implementation plan for the feature. I review that plan thoroughly, and only then ask for a list of TDD-style tests to be generated before any code is written.
Starting with tests lets me define what “done” looks like for the current feature.
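As a toy illustration of what that looks like, here are tests-before-code for an invented `slugify()` helper. The function and its expected behavior are my own hypothetical example, not anything from the post's actual project; a minimal placeholder implementation is included only so the file runs, standing in for what the AI would later fill in:

```python
# Hypothetical tests written *before* implementation, for an imagined
# slugify() helper. In the real workflow the function body starts empty
# and the AI implements it until these pass.

def slugify(title: str) -> str:
    # Placeholder implementation standing in for AI-generated code.
    cleaned = "".join(c if c.isalnum() or c == " " else "" for c in title)
    return "-".join(cleaned.lower().split())

# "Done" is defined up front: these assertions are the contract.
def test_basic():
    assert slugify("Hello World") == "hello-world"

def test_strips_punctuation():
    assert slugify("What's New, in 2024?") == "whats-new-in-2024"

def test_collapses_whitespace():
    assert slugify("  a   b ") == "a-b"

test_basic()
test_strips_punctuation()
test_collapses_whitespace()
```

Reviewing a dozen lines like these is far easier than reviewing generated code against a vague idea of what you wanted, which is the whole point of writing them first.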
If you skip these steps, you end up reviewing code against a vague idea of what you wanted. That is where AI slop comes from, not from bad models, but from unclear input.
Once I have reviewed the tests, implementation can start. I usually work step by step, in smaller chunks of the feature that I can review as I go.
This matters more than people realize. When you let AI generate a large chunk at once, reviewing it becomes overwhelming. You end up skimming, and skimming means missing things. Breaking it down forces both the AI and you to think in sequence, and it keeps each review focused enough to actually catch problems.
A second AI then reviews the output, and I review it myself one more time.
There is also something worth saying about speed. I will be honest: watching an idea become working software faster than it used to is genuinely satisfying.
Getting to the built thing faster just means more time for the parts that actually require a human.
What You Still Have to Bring
That last review step is where experience earns its place.
I am not only checking syntax. I am also asking whether this approach makes sense for the business and whether we are solving the right problem at all.
This is where AI has a specific blind spot worth understanding. It is very good at producing code that looks functional. Tests pass. The feature works. But "works right now" and "holds up over time" are different things. AI does not naturally think about what happens when requirements shift, when the codebase grows, or when another developer has to modify this six months from now. It optimizes for the output in front of it, not for the system around it. That gap is exactly where experienced judgment lives.
A junior developer might see the same output and think it looks fine, because it does look fine. What they cannot always see yet is how it will behave when something changes around it. That judgment is not something you can prompt your way into. It comes from having seen enough things break.
If you are earlier in your career, this is not a reason to panic, but it is a reason to be intentional. The repetitive work that used to teach you how things connect is being automated, which means you have to find other ways to build that understanding deliberately. The good news is that learning has also gotten easier. You can take a well-known open source project, something like Rails or Linux, and ask AI to walk you through how a specific part works. When you do, push further than the answer. Ask why it was built that way, not just how it works. That habit is what separates developers who are actually learning from the ones who are just collecting answers.
The developers who will grow in this environment are the ones who use AI to understand systems faster, not to avoid understanding them at all.
What This Means
I did not stop being an engineer.
I stopped being a typist.
The craft is still there. It just looks different now. Less time at the keyboard, more time thinking about systems. Less writing, more reviewing. Less implementation, more direction.
If anything, the parts of the job I find most interesting, the architecture decisions, the problem framing, the moments where you push back and say "I think we are solving the wrong problem", those parts have more space now.
The 10 lines I wrote last week? Those were the lines that actually needed a human.

