
My take on AI is changing
Published 14 days ago

Less than half a year ago I was not only skeptical of AI in my field, software development, but also not very impressed by the results it produced at the time. Now, however, I rarely write code myself anymore, and I'm not really observing the problems I envisioned would arise from its increased use. But. Some concerns I had, I still have.
Around the 14th of November 2025 I had my last day at my old company, Kvist. After almost three fun years there as an SRE/DevOps person/Fullstack Developer, I wanted to try something new, and on the 2nd of January 2026 I joined the observability tool company Dash0. Not to be confused with the Android obfuscation tool DashO.
The observant reader might notice that there is a gap there of a month and a half. I went back to Spain where I used to live and got my driver's license, visited Norway where I'm from, and continued moving into my new home in Odense, Denmark — where I live now. Just a long vacation really. And I, for the first time, finished Advent of Code! It helped that it was only 12 days, and also that I didn't have anything else I needed to do.
And just so we're clear from the get-go; that em-dash (—), is written by me. I still will not let AI touch my articles. More on that later.
When I left Kvist back in mid-November, I recollect trying, really trying, to get AI to do my work. I didn't really want to debug Kubernetes issues all day (not that that was all I did), so if Claude could do it for me I'd be more than happy. I also, especially at work, am not that attached to the code that I write. I'm attached to the resulting piece of software, but not how it's made. What I'm trying to say is that I care about it being high quality and functioning well, but whether that's down to my code, the AI's code, or a library — it's not all that important to me.
But, at the end of the day, I just couldn't get it to produce results I was happy with. I recall it time and again hallucinating; writing code for the wrong library, inventing functions, variables, and so on and so forth. At the time we blamed it on the fact that we were a SvelteKit shop, and believed that it would be a lot better if we had only used React. In hindsight I think there was some truth to it, but I attribute it mostly to the immaturity of the models at the time. I also remember feeling that we were paying a lot of money for essentially very little return. A hundred US dollars a month for slop I had to fix myself seemed like a pretty stupid investment. And given that we were trying Cursor, Claude, and anything else that could help us, the subscriptions quickly racked up.
So, in the end, I just kind of stopped using it. In the month and a half I wasn't working, I don't think I touched a coding model.
On the 22nd of December I got on a transatlantic flight for the first time in my life and celebrated Christmas in Canada. I had a wonderful time, and it was nice to see some real snow again.
Throughout my time in Canada, I had the chance to talk to a lot of people in tech who were, contrary to me, very positive about AI. They had already progressed to the stage of "I can't imagine writing code without AI". Something which at the time seemed ludicrous to me. I did not waver during the celebrations though, and maintained my skeptical stance on AI — preaching the dangers of relying too much on an AI model to write all your code. Some of these concerns I still think are relevant; writing and shipping code you don't fully understand can open the door to business-logic bugs if the AI doesn't make the same assumptions or validations as you. To illustrate: when you program, you usually have (or should have) a pretty clear idea of what you want to make. You are aware of what use cases and edge cases can occur and program with this in mind. If you don't make the AI aware of these (edge) cases, it might not write the code for them — even if it's obvious to you. In the end, it probably doesn't have the same product understanding as you.
Also, I think we don't fully know the consequences of shipping so much unread code. With AI today it's a lot easier to understand code, as you can essentially use the AI as an "expert" on any piece of code, but with today's limited context windows, it's hard to keep up with larger code bases. I think we'll see the results of this in about a year. I don't necessarily think, or hope, it will be a problem, as I think the models will continue improving, but I am wary of it.
So then, coming back from Canada, naturally, I had to try out the new models. At the time, that was Anthropic's Opus 4.5. And boy, oh boy, it was a whole different game. I had joined a very pro-AI company, and the adoption was already quite high. So skills, commands, AGENTS.md and other infrastructure were already there to leverage AI. Since basically my first day at the company, I haven't really written any code "the old way" save for some Tailwind classes here and there. I wanted to stay a skeptic, and avoid having to eat my words next Christmas, but no matter what I throw at it, it always seems to produce at least some value in the output. And more often than not, it's plain correct. Not to say that it's perfect — far from it, but for me it's definitely crossed the threshold where it now saves me more work than it creates.
I joined a company making an observability platform, and as you might imagine, there are a lot of moving parts. It's a company in growth and things are happening all over the place. This means that getting hold of people and having them explain things to you also isn't always the easiest thing, but now I can ask Claude, and more often than not, it will give me a really good, and correct, answer. Plugging it into our own MCP server (this is not a sponsored post; I actually like it), I can have it correlate the code I'm trying to understand with real OpenTelemetry traces. It's truly amazing. Just today I managed to mess up a Kubernetes deploy; turns out when you use subPath with projected service account tokens, it doesn't rotate them. It probably would have taken me quite a while to figure that out myself, but I could simply say to my OpenCode instance "<pod> is failing use Dash0 MCP server to debug" and it just went ahead and fetched logs, looked through the code, and drafted up a fix. All in 15 minutes. This would work with any observability platform with MCP server support. I just used Dash0 as an example because I (obviously) use that one the most.
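For the curious, here is a minimal sketch of that subPath pitfall. All names (pod, image, audience) are hypothetical, not the actual manifest from work. A subPath mount is a one-time copy at container start, so it never sees the kubelet's symlink-swap updates when the projected token rotates; mounting the whole volume does.

```yaml
# Hypothetical pod spec illustrating the projected-token rotation pitfall.
apiVersion: v1
kind: Pod
metadata:
  name: token-demo            # hypothetical name
spec:
  containers:
    - name: app
      image: example/app:latest   # hypothetical image
      volumeMounts:
        # BROKEN: subPath copies the file once at container start.
        # The kubelet rotates the token by updating the volume via a
        # symlink swap, which subPath mounts never see, so the file
        # in the container eventually holds an expired token.
        - name: sa-token
          mountPath: /var/run/secrets/tokens/token
          subPath: token
        # WORKS: mount the whole volume instead, so rotation propagates:
        # - name: sa-token
        #   mountPath: /var/run/secrets/tokens
  volumes:
    - name: sa-token
      projected:
        sources:
          - serviceAccountToken:
              path: token
              expirationSeconds: 3600
              audience: example-audience   # hypothetical audience
```

The same caveat applies to ConfigMap and Secret volumes mounted with subPath: updates to the underlying object won't reach the container.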
I could go on; you can connect with agent-browser to have it perform ad-hoc E2E tests, have it write reusable scripts, and so many other things that just speed up not just the writing of code, but the debugging, testing, instrumenting, etc.
In general, AI for me, and I think for many people, has lowered the bar for a lot of work. Getting good instrumentation is now a prompt away; so are unit, integration, and e2e tests; debugging; documentation; refactoring; and more.
But the one problem I haven't managed to have it solve yet is reviewing all this new code. All of a sudden we're seeing this enormous influx of code. Not just at my job, but probably at yours too, and definitely in every popular open source project. And I'm not just talking about the slop. There is a lot of legitimate code too, but it's getting increasingly hard to keep up. Already pre-AI I felt that PR reviews were a large blocker.
So what is the future? Do we just not review code anymore? Have an AI review the PR combined with very strict CI pipelines? Personally, I think so. I think reviewing Vercel previews, tests and behavior will allow you to catch a lot of problems and spend way less time. Really good e2e tests I think can do a lot of the work that humans have been doing until now. Combined with a super-opinionated /review skill even more so.
But not everyone has converted to this new way of working. Maybe for the better? That remains to be seen. I still can't seem to find two reports that agree on whether AI increases or decreases productivity. But, as a thought experiment, let's assume that AI actually is very productive. I'm afraid that this will create a divide between the people using AI to develop their products and those not doing so. The ones using it will pull ahead, leaving the rest behind. And while you could say "just use AI", I think there are a lot of people who, at the current pricing (if we ignore things like Claude Pro/Max, which sell discounted usage), simply can't afford to use it.
One thing is for sure: whether or not to go all in on AI is a hot topic. /r/programming, which I used to frequent, seems to hate AI, and twitter seems to love it. Hacker News is pedantic and prefers arguing to arriving at a conclusion. I've changed camps, and for the time being am really enjoying the new way of working. I think it introduces a lot of new challenges and unsolved issues that I'm very excited to see solutions to; good worktree workflows, containerization, "async" programming (spinning up an agent in the background), security in AI, and lots more. I think we're going to have a few very exciting years coming up.
I'm wondering what will happen to the junior devs though. But that's a topic for another day.
On a final note, I want to talk about AI in art and writing. I will still not let AI write any of my articles. Check for factual errors, maybe. But with all this AI content everywhere, I think it's important to keep our human art forms alive. AI writing to me reads like it's written by a lobotomized person. It is devoid of any form of personality, flair or any of the things that make reading enjoyable. If you're writing AI generated content, please stop and think about what you're doing. Why should I bother to read your content if you can't be arsed to write it yourself? The goal of programming is to make the computer behave — that's not the goal of writing, so I won't buy the argument that "they're the same". If you want to tell me something without writing a whole article, just give me a list of bullet points.

LinkedIn and Twitter have become a shadow of themselves; riddled with slop, slop and more slop. I've taken to blocking people on twitter that pop up on my timeline if I suspect that they use some form of AI to write their posts. Come on. It's 140 characters (or should be, at least). I've also seen people use OpenClaw to interact with humans, such as the viral post of someone having it lowball people on Zillow. This is a horrible use of AI. You're bothering real humans, but hiding behind "it's not me, it's AI" as an excuse for behaving like an asshole. You are still responsible for the actions of your OpenClaw, LLM, AI, Agent, whatever you use.
Thank you for reading. Have a nice day.