Reconsidering the Software Engineering Disillusionment
About six years ago, I read an article titled “Software Engineering Disillusionment.”
I don’t think comparing software development to airplanes or buildings is very apt, because those are built through processes that don’t change much; software development, by contrast, is built on continuous iteration, so the two have very different characteristics and are hard to compare. It’s like asking a house that’s still under construction to have the functionality of Howl’s Moving Castle, a house that can move on its own.
Since the emergence of AI, my view has shifted. In 2023 I thought it wasn’t much use and was still far from replacing day-to-day development; by 2025 I had gotten used to AI-assisted development, and autocomplete had become an indispensable feature; now, more than 90% of my code is generated directly by AI. The software engineering disillusionment has swept in again in a different form.
In the past, engineers wrote bad code themselves; now, AI produces large amounts of code that looks like it works, but no one truly understands.
With the rise of AI in software development, people can roughly be divided into two camps. One believes all programming should be handed over to AI, and that maintainability, readability, and extensibility don’t matter; humans only need to be responsible for confirming intent and direction. The other believes this kind of AI slop is of no help at all.
The author of this tweet is the creator of Hono (if you’re still using Express, I strongly recommend giving Hono a try). He posted several tweets about how he’s recently encountered a lot of low-quality, obviously AI-generated pull requests, which have been very frustrating for him.
We are currently in this messy transitional period between the arrival of LLMs and their maturation.
The era of stable and mature LLMs will eventually come. Improvements in models will completely reshape what software development looks like, but we don’t know when that day will arrive. Until the singularity arrives, all we can do is go with the flow.
Caught between pain and excitement
On the one hand, I’m very optimistic about the future of AI. As model capabilities keep improving, many things that currently require human involvement will inevitably be replaced, step by step.
Whether it’s the quality of AI-generated code or the effort required to make a piece of work better, the barriers will keep getting lower.
But some recent events have also made me feel somewhat distressed. In projects or in communities, you often see people simply taking what AI wrote and posting it without even reviewing it.
That means they haven’t considered the reader’s perspective. Whether it’s code or documentation, and sometimes even in a discussion, people will paste in the conversation thread they had with Gemini verbatim. I think our minimum standard should be to have your own thoughts first, then use AI to verify or strengthen your arguments before presenting them, and to do the initial organization yourself. That, to me, is what respecting the reader looks like.
As Simon mentioned in Anti-patterns: things to avoid, if you generate thousands of lines of code with AI and open a PR without checking it yourself, you are effectively offloading the work of “verifying whether this code can be used” onto the reviewer.
The reviewer could just ask AI to write it too — so what exactly did you contribute?
A responsible PR should look like this:

- You’re confident the code works, because you’ve verified it yourself.
- The scope of the change is small enough that the reviewer won’t want to close it the moment they open it.
- The description provides enough context to explain why you made the change, rather than being AI-generated text that sounds professional but that you yourself haven’t even read.
AI has made producing code too easy, so in turn, you need to proactively prove that you put in the effort.
Including a record of manual testing, an explanation of implementation choices, or even a screenshot — all of these are ways of telling the reviewer: I read this, and this isn’t garbage I dumped on you.
Taste
Another thing that makes me uncomfortable is the part about taste and experience.
After writing code for a long time, you naturally become able to recognize certain anti-patterns — you know that this will work for now, but sooner or later it will blow up, and some fixes only take a few lines.
But the problem is that many people now are used to letting AI generate code and then using it directly, without really going back to review it (and honestly, even I do this sometimes). That creates a vicious cycle: AI uses problematic code as context, and the code it produces afterward is built on that flawed foundation, drifting further and further off course while the user remains unaware.
Here are a few concrete examples.
Calling setState inside React’s useEffect, causing unnecessary re-renders; or architectural decisions like not using a CDN and instead reading static assets directly from the filesystem.
These things work perfectly fine in local testing. They’re also fine when traffic is low. But once traffic grows and the codebase starts to balloon, you realize it’s no longer a matter of changing a few lines.
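To make the useEffect example concrete, here is a minimal sketch (the component and names are hypothetical, not from the original article). Mirroring derived data into state via an effect makes React render once with the stale value, then render again after setState; computing the value during render avoids the extra state and the extra pass entirely.

```typescript
// Hypothetical sketch of the anti-pattern and its fix.
//
// Anti-pattern: deriving state in an effect. Every change to `items`
// or `query` renders once with the stale `filtered` value, then
// setFiltered schedules a second render.
//
//   function SearchList({ items, query }: { items: string[]; query: string }) {
//     const [filtered, setFiltered] = useState<string[]>([]);
//     useEffect(() => {
//       setFiltered(items.filter((item) => item.includes(query)));
//     }, [items, query]);
//     return <ul>{filtered.map((item) => <li key={item}>{item}</li>)}</ul>;
//   }
//
// Fix: derive the value during render; no extra state, no extra render.

function filterItems(items: string[], query: string): string[] {
  return items.filter((item) => item.includes(query));
}

//   function SearchList({ items, query }: { items: string[]; query: string }) {
//     const filtered = filterItems(items, query); // computed in render
//     return <ul>{filtered.map((item) => <li key={item}>{item}</li>)}</ul>;
//   }

console.log(filterItems(["apple", "banana", "apricot"], "ap"));
// → [ 'apple', 'apricot' ]
```

The fix is a few lines, exactly as described above; the cost of missing it only shows up later, when the redundant renders multiply across a larger component tree.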
And this isn’t necessarily because the developer was lazy and didn’t check — they may simply not know where the problem is. They haven’t stepped into that trap, so they can’t see it. But these things have a big impact on software quality, and in the end, the ones who care about quality are the ones who suffer. Huli and Ryoko have also said similar things.
You could also say that my taste is built on “solid fundamentals.” When other people’s output doesn’t meet what I consider those fundamentals, that becomes one of the sources of my discomfort, and over time it even starts to affect me.
I’m still looking for answers
Just yesterday, due to human error, Claude Code leaked a source map, making it easy for people to reconstruct the original source code. Even engineers at Anthropic, who earn more than 10 million TWD a year, make mistakes (though the cause may not have been AI). Does that mean that caring about fundamentals, security, and quality is no longer important?
Coding is largely solved
— Boris Cherny
Go and baseball
AlphaGo defeated the world’s top player Lee Sedol ten years ago, and later defeated Ke Jie. Today, no human Go player can beat AI. In a sense, Go has already been “solved” — but Go hasn’t disappeared because of that. People still play, there are still professional players, and people still get excited over a brilliant move.
Baseball is the same. Objectively speaking, having 9+1 people on a field throwing, hitting, and running bases does nothing substantive for how the world operates. Most competitive sports are probably like this. But humans still do these “useless” things, and they do them with great seriousness.
Perhaps once AI solves most productive work, programming will also become an activity more like Go or baseball — not something you do because you’re indispensable, but because you care about the process itself.
The next step
Go with the flow. That may be the best answer. Before the technological singularity, there are still some things I want to try.
- Turn the taste in my head into reproducible standards
- Share the software development experience I’ve accumulated over these years, including architecture and technology choices
- Thoughts on software development in Japan
There’s one other thing that I’ve been feeling quite deeply about recently, but I don’t yet have a clearly formed idea. I’ll share it once there’s some progress.
Finally, I’d like to share a line from 100 Meters that I really like, spoken by Zaitsu:
不安は対処すべきではない。人生は常に失う可能性に満ちている。そこに命の醍醐味がある。 恐怖は不快ではない。安全は愉快ではない。不安とは君自身が君を試す時の感情だ。 栄光を前に対価を差し出さなきゃならない時、ちっぽけな細胞の寄せ集めの人生なんてくれてやればいい。
Anxiety is not something to be dealt with. Life is always full of the possibility of loss; that is where the true savor of life lies. Fear is not unpleasant. Safety is not pleasant. Anxiety is the feeling that arises when you put yourself to the test. When you must pay a price in the face of glory, you may as well hand over this insignificant life, a mere collection of tiny cells.
What do you want to do?