The Trap of Convenience
March 2023. GPT-4 launched, and like many developers, I saw the potential immediately. Not the hype - the actual utility.
I’d been writing Go for over a decade. Built distributed systems, debugged production nightmares, made every mistake worth making. I knew what good code looked like.
But GPT was convenient. Why spend 20 minutes reading documentation when I could get an answer in 30 seconds? Why think through edge cases when AI could generate them? Why architect when I could prompt?
The shift was subtle. I went from “let me think about this” to “let me ask ChatGPT about this.” From considering tradeoffs to accepting the first solution that compiled.
I wasn’t learning from AI. I was outsourcing to it.
The Decline You Don't Notice
Here’s the insidious part: I didn’t get worse at programming. I got worse at thinking.
I still knew Go inside and out. I could still recite the memory model, explain channel semantics, debug race conditions. The knowledge was there.
What atrophied was the application of that knowledge. The judgment. The instinct that says “this feels wrong” before you even know why. The pattern recognition that comes from having solved similar problems a hundred times.
I asked GPT to implement a background worker. It gave me code that technically worked, but fucked me sideways:
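It looked something like this (simplified, with `Store`, `Job` and `process` standing in for the real types):

```go
package worker

import "time"

// Stand-ins for the real types - the shape of the loop is what matters here.
type Job struct{ ID string }
type Store struct{}

func (s *Store) FetchPending() ([]Job, error) { return nil, nil }
func process(j Job)                           {}

// StartWorker is roughly what the AI produced: an unbounded goroutine with
// no context, no shutdown signal, and no cleanup. Nothing can stop it or
// drain it, and everything it references is held for the life of the process.
func StartWorker(store *Store) {
	go func() {
		for {
			jobs, err := store.FetchPending()
			if err != nil {
				continue // error swallowed, loop keeps spinning
			}
			for _, j := range jobs {
				process(j)
			}
			time.Sleep(time.Second) // bare polling: no ticker, no exit path
		}
	}()
}
```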
Ten years ago, I would’ve immediately seen the problems: no context, no cleanup, no graceful shutdown. I would’ve written it properly the first time.
But I’d gotten lazy. I accepted AI’s solution, shipped it, and three weeks later debugged a memory leak that never should have existed.
The fix was obvious - though for simplicity the version below still ignores visibility, panic recovery and draining in-flight work:
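Same stand-ins as above (plus `context`, `sync`, `time` and `log` from the standard library); the point is the context, the WaitGroup and the ticker cleanup:

```go
// The same worker, minus the footguns: cancellable via ctx, joinable via wg,
// with the ticker stopped on exit. Still no metrics, panic recovery or
// draining of in-flight jobs - deliberately out of scope here.
func StartWorker(ctx context.Context, wg *sync.WaitGroup, store *Store) {
	wg.Add(1)
	go func() {
		defer wg.Done()
		ticker := time.NewTicker(time.Second)
		defer ticker.Stop()
		for {
			select {
			case <-ctx.Done():
				return // graceful shutdown: caller cancels, goroutine exits
			case <-ticker.C:
				jobs, err := store.FetchPending()
				if err != nil {
					log.Printf("fetch pending jobs: %v", err)
					continue
				}
				for _, j := range jobs {
					process(j)
				}
			}
		}
	}()
}
```

The caller owns the lifecycle: cancel the context on shutdown, then `wg.Wait()` so nothing gets torn down while the loop is mid-iteration.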
But the damage wasn’t the bug. It was that I’d stopped applying expertise I already had. I’d traded judgment for convenience.
When Experience Becomes Irrelevant
Eight months in, I was staring at a k8s manifest I’d deployed a few weeks earlier. The pod was OOMing in production.
I couldn’t figure out why. Not because the problem was complex, but because I didn’t understand what I’d deployed. I’d prompted ChatGPT for a “production-ready” config and shipped whatever it gave me.
A decade of infrastructure experience, and I was reading my own deployments like a junior engineer reading someone else’s code.
That’s when it hit me: I’d let AI turn expertise into irrelevance.
A study from SBS Swiss Business School calls this “cognitive offloading” - delegating mental work to AI until your brain stops doing it. Like an unused muscle, that capacity atrophies.
The research focused on younger developers, which makes sense. They’re building expertise while using AI. But I’d already built that expertise over 13 years. I was actively eroding it.
The Hard Way Back
I cut AI off cold turkey. Disabled Copilot, blocked ChatGPT, deleted the extensions.
The first few weeks were uncomfortable. Not because I couldn’t solve problems - I could. But I’d trained myself to reach for the shortcut. The neural pathway was problem -> prompt -> solution, and now the middle step was just… gone.
I had to rebuild the habit of thinking.
Small things made the difference. Forcing myself to consider a few different approaches before writing code. Reading documentation even when I thought I knew the answer. Debugging by reasoning instead of by asking.
The turning point came from a gnarly race condition in a distributed cache. Old me would’ve described it to ChatGPT and taken whatever fix it suggested. Instead, I spent an afternoon tracing through the actual problem, understanding the specific failure mode, considering different solutions.
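Not the actual code, but the genus of bug was the classic check-then-write race, shown here in its simplest in-process form (names are illustrative, `sync` is the only import):

```go
// A check-then-write race: the lock is released between the lookup and the
// store, so two goroutines can both miss, both recompute, and the slower
// (possibly staler) write wins. Harmless in a unit test, corrupting under load.
type cache struct {
	mu   sync.Mutex
	data map[string]string
}

func newCache() *cache {
	return &cache{data: make(map[string]string)}
}

func (c *cache) GetOrCompute(key string, compute func() string) string {
	c.mu.Lock()
	v, ok := c.data[key]
	c.mu.Unlock() // lock dropped here - the race window opens
	if ok {
		return v
	}
	v = compute() // slow call; another goroutine can get here concurrently
	c.mu.Lock()
	c.data[key] = v // last writer wins, even with an out-of-date value
	c.mu.Unlock()
	return v
}
```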
I already knew how race conditions worked. I already knew distributed systems. But I’d stopped using that knowledge to think through specific problems. I’d outsourced the thinking to AI and kept only the facts.
After about seven to eight months, the sharpness came back. Not because I relearned anything - I hadn’t forgotten it. But because I’d rebuilt the cognitive muscles that apply knowledge to problems. The judgment that comes from actually thinking instead of prompting.
What I Learned By Measuring
Once recovered, I tested the productivity claim properly.
Three representative tasks, done twice: once with AI assistance, once without.
With AI:
- Task 1: 45 minutes -> found security issue in review -> 30 minutes to fix
- Task 2: 90 minutes debugging AI-generated code I barely understood
- Task 3: 60 minutes -> migration had no rollback -> 60 minutes to rewrite
 
Total: 285 minutes of activity, mediocre quality
Without AI:
- Task 1: 90 minutes, caught edge cases during implementation
- Task 2: 30 minutes debugging code I understood completely
- Task 3: 120 minutes, designed with rollback from the start
 
Total: 240 minutes of thought, high quality
The AI version felt faster. More output, more commits, more visible activity. But when you account for the rework, the debugging, the technical debt - it was slower and worse.
More importantly: the AI version left me with code I barely understood. The manual version left me with deep understanding of the system.
How I Use AI Now
The tool isn’t the problem. How you use it determines whether it helps or hurts.
Research assistant - “What’s the difference between PostgreSQL isolation levels?” Fine. It gives me a starting point, I verify against docs, I think through the implications for my specific system.
Syntax lookup - “What’s the Rust trait syntax?” Fine. This is just faster documentation.
Boilerplate generation - After I’ve written the pattern myself and understand it, AI can repeat it. But I write the first five versions myself.
Not for thinking - Architecture decisions, debugging strategy, algorithm selection, tradeoff analysis. This is where expertise matters. This is where AI makes you dumb.
The litmus test: If I can’t explain it clearly to someone else, I don’t ship it. If AI wrote it and I just verified it compiles, that’s not good enough.
To sum it up: right now, I’m calling AI my Glorified Research Assistant.
The Universal Pattern
This isn’t just about AI. It’s about any tool that lets you skip the thinking.
Copy-pasting StackOverflow answers without understanding them. Using frameworks without knowing what they do. Letting autocomplete write your code. Generating solutions instead of solving problems.
The pattern is the same: convenience that trades short-term speed for long-term competence.
You can have 13 years of experience or 13 months. Doesn’t matter. If you stop exercising judgment, it atrophies. If you stop applying your knowledge, it becomes trivia.
AI accelerates this because it’s so good at giving you something that works. Not something optimal. Not something you understand. Just something that compiles and passes basic tests.
And that’s enough to fool you into thinking you’re productive. Until you debug an incident and realize you have no idea how your own system works.
The Real Cost
The cost isn’t just code quality or productivity. It’s something deeper.
Programming is thinking made concrete. The value isn’t in the code you produce - it’s in the understanding you build while producing it. The mental models. The pattern recognition. The judgment that only comes from having made the decision yourself a hundred times.
When AI does the thinking for you, you get the code but lose the understanding. You ship features but don’t build expertise. You stay busy but stop growing.
That understanding is what separates senior engineers from prompters - it’s what makes experience valuable.
And it’s exactly what AI erodes when you use it as a replacement for thought instead of a supplement to it.
The Choice
AI is here. It’s getting better. More developers will use it.
The question isn’t whether to use AI. It’s whether to let it think for you.
You can use it as a tool that amplifies your expertise - faster documentation lookup, better syntax recall, quicker prototyping of ideas you already understand.
Or you can use it as a crutch that replaces your expertise - accepting solutions you don’t understand, shipping code you can’t explain, making decisions without judgment.
One makes you more effective. The other makes you dependent.
I spent probably a full year becoming dependent, and at least that much recovering (though I’m trying to convince myself it took just a few months to recover).
If you’re reading this and recognizing the pattern - you’re not too far gone. But you have to make the choice. Think or prompt. Build understanding or accumulate output. Exercise judgment or outsource it.
The AI train isn’t slowing down. But that doesn’t mean you have to let it drive.
Audit your next coding session: how often do you stop to think before asking AI? The answer tells you whether you’re using a tool or being used by one.