The AI Future Belongs to Those Who Zoom Out
AI can’t understand why something needs to be built—but software engineers can.

Don’t just zoom out, zoom up!
“Vibe Coding Is Coming for Engineering Jobs”
Scary headlines get clicks, but this one is genuinely scary if you’re a coder. WIRED recently sounded the alarm over disappearing tech jobs and AI models supposedly replacing software engineers. Read the article, though, and the story becomes much more nuanced.
AI is really great at that initial impression; we call it the “first mile” at Aboard. Without asking a single question, it just goes to town and starts coding. But any engineer knows that writing good, quality software is hard. The minute things get even slightly complex, AI doesn’t just stumble. It can veer off completely, doubling down on the wrong path while maintaining that trademark enthusiasm.
So is your job as an engineer doomed? The short answer: Maybe. The long answer starts with understanding one of software’s most timeless truths—bugs.
Bugs—A real downer
AI doesn’t really get bugs. It’s not that it can’t detect some of them, especially the basic logic ones. It’s that AI doesn’t instinctively understand why bugs exist in the first place. Bugs aren’t solely logical failures; most of the time, they’re something very different. Most bugs are born of one key failure: the software didn’t meet an expected outcome.
Here are a few everyday examples:
- A system that charges an 80% delivery fee instead of 0.8%.
- A PTO tracker that deducts seven days for a five-day vacation, because it included Saturday and Sunday.
- A resource delete action that doesn’t ask for confirmation.
These aren’t syntax errors or exceptions. They’re violations of unspoken agreements between humans and software—what we call functional bugs. An expectation was set and the software failed to meet it. Oftentimes, the expectation is implicit. Most detailed specs won’t mention that PTO deduction logic shouldn’t include weekends (though some do).
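To make that concrete, here’s a minimal sketch of the PTO example, written in Python with a hypothetical deduct_pto function and made-up dates. There’s no syntax error and no exception; it runs cleanly and still produces the wrong answer, because the unstated rule that weekends don’t count never made it into the logic.

```python
from datetime import date, timedelta

def deduct_pto(balance_days: int, start: date, end: date) -> int:
    """Hypothetical PTO tracker: deduct a vacation from a day balance."""
    # Counts every calendar day in the range, Saturday and Sunday included.
    days_off = (end - start).days + 1
    return balance_days - days_off

# Wednesday, June 4 through Tuesday, June 10, 2025: five working days off.
start, end = date(2025, 6, 4), date(2025, 6, 10)
print(deduct_pto(20, start, end))  # 13, i.e. seven days deducted

# What the employee expected: only weekdays count toward the deduction.
weekdays_off = sum(
    1
    for i in range((end - start).days + 1)
    if (start + timedelta(days=i)).weekday() < 5
)
print(20 - weekdays_off)  # 15, i.e. five days deducted
```

Both versions run without complaint; only one of them meets the expectation, and nothing in the code itself tells you which.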
AI doesn’t really reason through these expectations. It responds to prompts by pattern-matching against everything it’s seen before. It has no internal sense of “what seems right” in the broader context. And when it misses, it misses hard. You can feed it a functional spec, but even then, it traverses the document in a dangerously linear way and slips up.
Deep reasoning isn’t actually deep reasoning
When people talk about AI “deep reasoning,” they often imagine a future where machines will not only follow instructions but intuit intent, weigh trade-offs, and adapt in real time. But today’s reality is far more limited. Apple researchers recently published a study finding that all the big AI models collapse as soon as they’re confronted with problems of even modest complexity.
At its core, what we call “deep reasoning” in AI is still a form of probabilistic synthesis. AI doesn’t reason the way humans do. It doesn’t weigh conflicting priorities, wrestle with ambiguity, or ask clarifying questions when something feels off. Instead, it generates what it’s statistically likely to say next, based on patterns in vast amounts of data.
To reason well, you often have to question the premise. You may do that in your own mind, or you may go back to others and have a conversation. It means asking the stakeholder why they want something before jumping to how to build it. The friction between people, between interpretations, and between goals is what brings clarity about what to do next. AI doesn’t do friction. It doesn’t negotiate.
To bring this down to earth: Relying on AI to “reason” through a software spec is like relying on an intern to write your will after overhearing a conversation about estate planning. It might sound plausible. It might even look polished. But there’s no guarantee it reflects what anyone actually wanted—and no process in place to find out.
Who should be worried?
Developers tend to fall into two categories:
- Those who understand why they’re building what they’re building.
- Those who just take the ticket and ship the code.
If you’re in the second camp—treating engineering like task execution, avoiding context, skipping product thinking—you’ve got company. AI should make you nervous. Because AI is very good at raw execution. It doesn’t ask “why.”
But if you’re someone who understands tradeoffs, questions intent, talks to users, shapes features, and anticipates edge cases, then your value isn’t shrinking; if anything, it should increase. The enemy that’s out for your job becomes a tool that helps you execute.
As AI continues to improve, the opportunity for software engineers isn’t to code faster. It’s to zoom out. To understand what’s being built, why it matters, and how to shape it structurally and architecturally before the first line of code is even written.
Maybe the real question isn’t whether AI can do our jobs, but what our jobs were really about in the first place. For decades, we’ve been translating human intent and motivation into workflows, specs, diagrams, and code—into tools that carry meaning and purpose. But we never arrive at those outcomes in a straight line. We drift, we veer, we course-correct. And that wandering isn’t a flaw in the process—it is the process.