AI Can Write Your Code. It Can't Do Your Job.

I call my team engineers, not programmers. I've always done that. It wasn't a conscious decision at first. It just felt right. Engineers solve problems. Programmers write code. The difference used to feel like semantics.
It doesn't anymore.
The Job Was Never the Code
A few weeks ago, a couple of engineers on my team started leading an effort to build internal tooling for documentation. The goal: make our docs usable for non-technical folks. Product managers, support teams, stakeholders who need to understand what we built without reading source code.
The interesting part is where the work actually lived. It wasn't in the code. It was in the conversations before the code. Understanding who reads these docs and why. Figuring out what "usable" even means for someone who doesn't think in APIs. Defining the scope so we wouldn't over-build. Writing a clear spec so the intent survives the implementation.
The code came last. And honestly, that's the part AI handled well.
This is what I keep seeing: the engineers who are growing the fastest right now aren't the ones writing the most code. They're the ones getting better at everything that comes before it. Specifications. Tickets that actually say what needs to happen. Product conversations where they ask "why are we building this?" instead of jumping straight to "how."
The Data Says the Same Thing
There's a study from METR that tested experienced open-source developers on real tasks from their own codebases. The ones using AI took 19% longer to finish. But here's the twist: those same developers believed they were 20% faster. A nearly 40-percentage-point gap between perception and reality.
The Faros AI report looked at over 10,000 developers across 1,200+ teams. Developers using AI merged 98% more pull requests. PR review time went up 91%. PR size increased 154%. And at the organizational level? No measurable performance gains.
Google's DORA 2025 report found no significant correlation between AI adoption and improvements on their four key delivery metrics. Their framing is the one that stuck with me: AI is a multiplier of existing conditions. It makes good teams better. It exposes dysfunction in struggling ones.
More code is not more value. It never was. AI just made that impossible to ignore.
The Old Ideas Are the New Leverage
Here's something that surprised me. The concepts we've been talking about for years (TDD, BDD, the SOLID principles, clean architecture), all the stuff half the industry treats as academic or outdated, are exactly what make AI produce good code instead of merely plausible code.
A programmer asks AI to "build me a login page." An engineer asks AI to build a login page with single responsibility in mind, with testable units, with edge cases defined in the spec. Same tool. Completely different output.
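To make that contrast concrete, here's a minimal sketch of what "testable units with edge cases defined in the spec" might look like. Everything in it is hypothetical (the function name, the rules, the error strings are mine, not from any real project); the point is that the spec is written before any code exists, and the edge cases become tests rather than afterthoughts.

```python
# Spec (written before asking AI, or anyone, for an implementation):
#   validate_login(email, password) -> list of error strings, empty if valid.
#   - email must contain exactly one "@" with text on both sides
#   - password must be 8-128 characters
#   - validation only: no hashing, no HTTP, no sessions (single responsibility)

def validate_login(email: str, password: str) -> list[str]:
    errors = []
    # partition splits on the first "@"; a second "@" would land in `domain`
    local, sep, domain = email.partition("@")
    if not sep or not local or not domain or "@" in domain:
        errors.append("invalid email")
    if not 8 <= len(password) <= 128:
        errors.append("password must be 8-128 characters")
    return errors

# Edge cases from the spec, expressed as tests:
assert validate_login("a@b.com", "longenough") == []
assert validate_login("no-at-sign", "longenough") == ["invalid email"]
assert validate_login("a@b.com", "short") == ["password must be 8-128 characters"]
assert validate_login("a@@b.com", "short") == [
    "invalid email",
    "password must be 8-128 characters",
]
```

Hand an AI the prose prompt and you get a plausible login page. Hand it the spec and the tests, and "done" has a definition it can be checked against.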
And here's the part that's uncomfortable to admit: AI is actually better at following these principles than most of us ever were. We knew about SOLID. We talked about TDD. But we cut corners under deadline pressure, we skipped the tests when the feature felt simple, we let the abstraction grow until it was wrong. AI doesn't have those impulses. If you give it clear constraints, it follows them.
The gap between knowing and doing was always the problem with software engineering. We had the principles. We just didn't apply them consistently. AI doesn't have that gap. But it needs you to define the constraints. It needs you to know what good looks like. That's the job.
The Honest Part
I don't have clean metrics for any of this. I can't show you a dashboard that says "AI made my team 37% more effective." Nobody can, and I'd be skeptical of anyone who claims otherwise.
What I can see is a shift in how people approach work. Engineers who used to jump into code first are now spending more time on specs. Conversations about the product are getting sharper. The documentation tooling project didn't start with a pull request. It started with a question: "Who is this actually for?"
That's not something I can put in a spreadsheet. But I trust what I'm seeing more than I'd trust a made-up metric. And honestly, I think the managers who are pretending they have this figured out are the ones making the most dangerous decisions right now. They're optimizing for numbers that don't mean what they think they mean.
The METR study showed developers felt faster while being slower. That's not just a research finding. That's a warning for anyone measuring AI adoption by gut feel alone, myself included. I'm aware I might be in the same trap. The difference is I'm not pretending otherwise.
What Didn't Change
The job didn't change. That's the part people get wrong.
Engineers have always been responsible for turning ambiguity into clarity. For understanding the problem before jumping to the solution. For making trade-offs that survive contact with production. For communicating across the gap between what stakeholders want and what systems can do.
We just spent the last twenty years pretending the job was writing code, because that was the most visible part. Now that AI writes the code, there's nowhere left to hide. The engineers who were already doing the real work barely noticed the shift. The ones who were leaning on code output as a substitute for thinking are the ones feeling the ground move.
I call my team engineers because that's what the job demands. It always did. We just stopped being able to pretend otherwise.


