20 Percent Slower Is a Good Start!
The key thing missing from most AI conversations is a hyper-specific focus on what these tools do well.

[Image caption: These actually speed up traffic in the grand scheme of things.]
A recent study by METR (Model Evaluation & Threat Research), an AI research nonprofit, delivered a pretty striking conclusion: “We find that when developers use AI tools, they take 19% longer than without—AI makes them slower.”
In response, Simon Willison, who has been steadily using AI to accelerate his prodigious open source output, had a long, thoughtful response. “My intuition here,” he writes, “is that this study mainly demonstrated that the learning curve on AI-assisted development is high enough that asking developers to bake it into their existing workflows reduces their performance while they climb that learning curve.”
As Willison points out, it’s really hard to measure developer productivity—but even so, he doesn’t outright dismiss the findings, either. The original study is carefully designed and worth reading and discussing. It’s an uncomfortable analysis. Perhaps this multi-trillion-dollar super-industry is all hype? A lot of people think so.
I went out to have lunch with my co-founder, Rich, and we discussed it. Our ultimate reaction—even as people who are building a company that accelerates software development with AI—was to go, “Sure, why not?” As in, sure, why wouldn’t Cursor usage slow down an engineer by around 20 percent?
Here is a list of things that I have seen slow down engineers by anywhere from 20 to 2,000 percent:
- Configuring their text editor or IDE to be smarter about their code.
- Focusing on immutable data structures.
- Migrating from React to Preact.
- Migrating from Python to Clojure.
- Making things more object-oriented.
- Switching to a different kind of Agile development.
Every single one of these endeavors was intended to radically accelerate everything, and yet each one can drag a project to a halt. This is not a novel thing to state. In 1986, Fred Brooks wrote an essay called “No Silver Bullet” arguing that no single technology or practice would deliver an order-of-magnitude improvement in software productivity; there is no magical cure for software.
In 1991, Steve Jobs’s NeXT put out a fantastically dry video where it went head to head with Sun Microsystems, demonstrating that developers could build software faster and better using its development environment—which today we know as Xcode. There are thousands of similar examples from the time before that, and thousands after.
All progress in the software development industry is measured in speed. This is because so many projects fail, and estimation is unbelievably hard. Meanwhile, LLMs are so, so fast at generating code from a prompt that it seems absolutely jaw-dropping. People aren’t just shocked—they’re shocked for months. I was! But then you find the limits, and you need to build guardrails, and craft prompts, and put in tests, and expect things to fail. I could easily see that adding up to a 20 percent slowdown. To be frank, part of me is thinking, “Only 20 percent?” I mean, I’ve started a ton of AI projects that just didn’t finish.
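To make those guardrails a little less abstract, here is a minimal sketch in Python of what “put in tests and expect things to fail” can look like: treat the model’s output as untrusted text, run it in an isolated namespace, and only accept it if a couple of known-answer checks pass. Everything here (the slugify task, the accept_generated_code helper, the canned GENERATED string) is a hypothetical stand-in, not anything from the study or from any particular tool.

```python
# A minimal guardrail sketch: treat LLM output as untrusted text, run it in an
# isolated namespace, and only accept it if known-answer tests pass.
# GENERATED is a canned stand-in for whatever your model actually returns.

GENERATED = '''
def slugify(title):
    return "-".join(title.lower().split())
'''

def accept_generated_code(source: str) -> bool:
    namespace = {}
    try:
        exec(source, namespace)          # run the generated code in isolation
        slugify = namespace["slugify"]   # the function we asked the model for
        # "Expect things to fail": a couple of known-answer checks before trusting it.
        assert slugify("Hello World") == "hello-world"
        assert slugify("  Twenty Percent Slower  ") == "twenty-percent-slower"
        return True
    except Exception as err:
        print(f"Rejected generated code: {err}")
        return False

if __name__ == "__main__":
    print("accepted" if accept_generated_code(GENERATED) else "back to the prompt")
```

A real setup would sandbox the generated code properly and run a much fuller test suite, which is exactly the kind of overhead that eats into the time the prompt saved.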
The key thing—the thing that is missing from most AI conversations—is a hyper-specific focus on what these tools do well. There are multiple forthcoming books about vibe coding, and I get why, but wow do I wish there were a book, like an old-school O’Reilly book, that was just about accelerating one part of development—say, LLM-Powered Database Schema Development in SQL. I’d want sample prompts, comparisons of different LLMs, multiple examples and tutorials, with all the caveats, common bugs and pitfalls, and GitHub access to all the code. Just 100 pages to read and then I could sit down and try it all out. I cannot tell you how happy that would make me.
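For what it’s worth, here is a rough sketch of what one page of that imaginary book might contain: a sample prompt, plus a cheap sanity check that the DDL the model hands back actually runs. The prompt, the schema, and the schema_is_valid helper are all hypothetical illustrations, not excerpts from any real book or model output.

```python
# A sketch of one page from that imaginary book: a sample schema-generation
# prompt, and a sanity check that the returned DDL actually runs in SQLite.
# The prompt and GENERATED_DDL below are hypothetical stand-ins.
import sqlite3

SAMPLE_PROMPT = """
Design a SQLite schema for a newsletter app: subscribers, issues, and
per-issue delivery status. Use explicit foreign keys and sensible indexes.
Return only the CREATE statements.
"""

# Stand-in for the model's response.
GENERATED_DDL = """
CREATE TABLE subscribers (
    id INTEGER PRIMARY KEY,
    email TEXT NOT NULL UNIQUE,
    created_at TEXT NOT NULL DEFAULT (datetime('now'))
);
CREATE TABLE issues (
    id INTEGER PRIMARY KEY,
    subject TEXT NOT NULL,
    sent_at TEXT
);
CREATE TABLE deliveries (
    issue_id INTEGER NOT NULL REFERENCES issues(id),
    subscriber_id INTEGER NOT NULL REFERENCES subscribers(id),
    status TEXT NOT NULL DEFAULT 'pending',
    PRIMARY KEY (issue_id, subscriber_id)
);
CREATE INDEX idx_deliveries_status ON deliveries(status);
"""

def schema_is_valid(ddl: str) -> bool:
    """Run the generated DDL against an in-memory database and check the tables exist."""
    try:
        with sqlite3.connect(":memory:") as conn:
            conn.executescript(ddl)
            tables = {row[0] for row in conn.execute(
                "SELECT name FROM sqlite_master WHERE type = 'table'")}
        return {"subscribers", "issues", "deliveries"} <= tables
    except sqlite3.Error as err:
        print(f"Schema rejected: {err}")
        return False

if __name__ == "__main__":
    print("schema ok:", schema_is_valid(GENERATED_DDL))
```

The specific schema doesn’t matter; the point is that a book like this would pair every prompt with a check you can actually run, along with the caveats and pitfalls around it.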
One of the hardest things to process about this new technology is that it can get you something that feels incredibly real, but it just…isn’t. Humans remain the arbiters of completeness. Now that we’ve had a taste of that sweet, sweet speed, we want all of it, all the time. I get this. But trust me: I’m deep in this stuff, and we’ve found ways to accelerate software development in really thrilling ways at work, but I also buy that AI slows things down, too. Prompts and tools will not be enough. We need to build a culture of explaining things slowly and thoroughly so that people can do good work.