I am, by nature, a slow adopter. So it surprised me when I was among the first in my circle to pick up ChatGPT. I remember seeing a post by Arvid Kahl in early 2023. He wrote a simple prompt and was presented with a recipe for perfect pork chops. I had no idea what ChatGPT was at the time, but I knew immediately it was something I needed to get on top of.
But being early does not mean going all in. While others were handing their operating systems over to AI agents, I watched from the sidelines, thinking that sounded like an excellent way to break everything. For a long time I also held back on how much of the actual work I let it touch. It felt like cheating somehow, and I did not trust it enough to find out whether it was.
What changed recently is the scope of what I hand over. I started letting AI into larger, more central parts of my projects, pushing the boundary of what I would ask it to do. And as that boundary moved, something about my role became harder to ignore.
What I did not expect is what that shift would actually feel like from the inside.
I develop software. I used to get lost in problems. Hours would disappear in a debugging session, chasing down a stubborn bug, turning a tricky bit of logic over in my head until it gave in. That kind of deep work had a particular satisfaction to it. I am not doing much of it anymore. What I do instead is review. Every prompt. Every output. Every code block, each one produced in roughly the time it takes me to read a sentence. My job has shifted from crafting solutions to judging them, and from working on one thing at a time to coordinating four things simultaneously.
It is not easier. If anything, it is more draining. A Harvard Business Review piece from February 2026 put it plainly: AI does not reduce work, it intensifies it. I felt that sentence when I read it. The constant vigilance required to review AI output is a different kind of cognitive load than writing code yourself, but it is still cognitive load. AI creates a productivity paradox: it reduces the cost of production while raising the cost of coordination, review, and decision-making. You never fully switch off. Every output is a judgement call.
Reviewing well turns out to be its own skill too. You need to keep in mind what you are working towards, the shape of your codebase, and whether the output is plausible, made up, or actually working. It is not passive. It is a different kind of attention. We are used to reviewing pull requests, going over every line of code, judging whether it would break something when merged. It is just that we now have to do it hundreds of times a day.
There is something else I notice too. My own coding instincts feel quieter than they used to. The muscle gets less use, and the skill fades with it. The agent does not get rusty, and it does not tire between problems either. I do. It can produce code at a thousand words a minute; I can barely sustain a hundred for more than a few minutes. There is no competing with that raw output.
None of this is reversible, though. AI assistants are part of the landscape now, and the productivity they enable is real enough that opting out is no longer a serious option.
What I keep coming back to is pace. Not the pace of AI adoption across the industry, but your own pace. The tools will keep changing. What worked last month might already be outdated. In that environment, the only sustainable thing is figuring out what actually works for you and your workflow and giving yourself room to adjust. Our brains are not built for change at this velocity, and pretending otherwise does not make the fatigue go away.
Part of that, for me, is accepting what the role actually looks like now. So now I am a reviewer and editor. I am not sure that is bad, exactly, but I am still figuring out what it means to be good at it.
