
When the LLM generates the app just like you asked it to, everything from here on out is going to go great. (Image courtesy Warner Bros.)
There’s a joke in software development: Now that you’ve got the first 80% done, you need to do the remaining 80%.
Of course, this applies to most things—writing and editing, creating art, PhD dissertations—but in code the fact that things don’t ship gets a lot of attention because it’s so expensive. If a PhD doesn’t land on time, that’s suffering for one person and their family. When code doesn’t land, that’s a whole team, tons of money, and a big corporate plan going “whoosh.”
So right as you see things landing, right as marketing gets the press release in order, you realize you’re going to need six more months and six months more budget. There’s no way around it.
But then, if you have a very seasoned product manager nearby, something else might happen: Scope will get cut. Features that were absolutely necessary will be axed. Things that customers expect and desperately need will be pushed out. Tears will be shed, roles will shift, but the deadline will take precedence. You thought you were watching a Miyazaki movie, where everyone gets along and learns to work together. Suddenly it’s the second half of a Scorsese film, where main characters get brutally murdered, one by one.
No one loves this part of software development. It’s the saddest part. In general, software projects get sadder as they go on. “Planning” is whiteboards and sandwiches; “launch” is sitting at a terminal and waiting for things to explode. But it has to happen.
For a while, I was really hopeful that AI might help us out here. AI is actually good at tidying up bits of code, writing boring stuff, documenting, and testing—and I think it can be really additive here. If you have a set of discrete, well-understood coding tasks, it’s great. It can write shell scripts, help with devops, assist in building REST APIs, and so forth.
But I’ve been working on something a little more complex lately. And I’ve noticed something curious: The finish line keeps moving out a little further each time I experiment. And because AI can work so fast, and do so much for me, the person pushing it out is me. It goes like this:
“Hey, this took that ugly old database format and turned it into something more legible and helped me with the migration script. Wow, that saved time! Now I just need to clean up the schema a little more. [Pause.] Of course…it would be really nice to have a better front-end to search the database.”
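To be concrete about what that first, satisfying step looks like, here is a purely hypothetical sketch in Python. The legacy format, the field order, and the schema are all invented for illustration; the real project's data dictionary is far messier:

```python
import sqlite3

# Invented legacy rows: pipe-delimited, cryptic column order,
# dates jammed together as YYYYMMDD, status as a single letter.
legacy_rows = [
    "1001|SMITH,J|19990412|A",
    "1002|DOE,R|20010930|I",
]

def migrate(rows):
    """Load legacy pipe-delimited records into a more legible schema."""
    db = sqlite3.connect(":memory:")
    db.execute(
        """CREATE TABLE people (
               id INTEGER PRIMARY KEY,
               name TEXT NOT NULL,
               joined TEXT NOT NULL,      -- ISO 8601 date
               active INTEGER NOT NULL    -- 1 = active, 0 = inactive
           )"""
    )
    for row in rows:
        pid, name, yyyymmdd, status = row.split("|")
        joined = f"{yyyymmdd[:4]}-{yyyymmdd[4:6]}-{yyyymmdd[6:]}"
        db.execute(
            "INSERT INTO people VALUES (?, ?, ?, ?)",
            (int(pid), name, joined, 1 if status == "A" else 0),
        )
    db.commit()
    return db

db = migrate(legacy_rows)
print(db.execute("SELECT name, joined FROM people WHERE active = 1").fetchall())
# → [('SMITH,J', '1999-04-12')]
```

Twenty lines, done in minutes with an LLM's help, and genuinely useful. Which is exactly what makes the next temptation so hard to resist.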
The problem is not that I then go on some wild expedition. It’s that in ten minutes, I have made real, substantive progress towards building that front-end. Is that a problem? Sounds great, right?
But it is a problem, because I’ve lost track of the goal, and when this technology is used without a really specific goal in mind, you end up with huge blobs of code and have to backtrack. And if you’re obsessive—which I am—the new, exciting problem you just created is the most important problem in the world, even though your wife is texting you after midnight reminding you that you need to be up early the next morning. And you still didn’t finish the tasks—the discrete, boring, perhaps-by-hand tasks—that you needed to finish in order to move on to the next project.
Which is why I promised someone I’d try to transform an old database for them (I’m learning how to do migration projects with Claude, basically), and now find myself several weeks in, trying to teach the AI to interpret a complex data dictionary file and perform certain, probably unnecessary, recursive and self-referential tasks.
LLMs work pretty well at spitting out code, and they serve up huge portions of whatever you ask of them, but they have absolutely no juice when it comes to cutting scope. They don’t stop and say, “No, you don’t need that.” That’s just not what this technology does. But cutting scope is absolutely necessary: not just to ship, but to simplify, prioritize, and focus on why you exist in the first place.
So there’s a real Zeno’s Paradox to this technology: Given how fast it can move and the way you can accelerate your work, the instinct is not to be done and go for a bike ride, but to do more—to finally tame the computer fully. And so I find myself perpetually almost there.
Of course, this is a test project done to learn—but I want to finish it and move on to the next thing, and so the next time I work on it, I’m going to make a brutal list, achieve those goals, and stop dabbling. Otherwise I’ll end up building an operating system on top of it, just because I can.
In general, I think the age of AI coding will produce levels of scope creep heretofore unseen. It’ll start innocently—watch for it. “Hey, we could make onboarding into a fun game.” “Hey, we can instrument more analytics hooks into every function very easily now.” “Hey, we could add a lot more personalization with this release.” They’ll be good ideas, and they’ll be awesome, and they will seem trivial to accomplish using Claude or ChatGPT, and then you will be living through the back half of Goodfellas.
Code is suddenly, for the first time, possibly, really cheap—but complexity and incompleteness are human curses, and people’s time—creators and users alike—remains expensive. LLMs don’t care, because they’re just technologies. Product managers are going to have to get in there and break hearts. Just like always.