Why My Co-Founder Paul Is Totally Wrong About Everything
Individuals might be seeing huge gains with AI programming, but organizations aren’t there—yet.

Rich here. Let me paraphrase Paul Ford after a Claude Code-fueled bender:
We’re cooked. It’s over. Game over. Death is coming! DEATH IS COMING! I built a dozen VST-compatible synth apps for my MacBook! While I was sleeping! I built an app for my friend’s dad during Thanksgiving dinner and deployed it from my phone. Let’s grow our own food and go off the grid. Game over!
After handing my business partner a cup of mint-infused green tea and guiding him to the couch, I explain to him that he’s right. It is transformative. But also that organizations will resist, and it will go much slower than he thinks.
Want more of this?
The Aboard Newsletter from Paul Ford and Rich Ziade: Weekly insights, emerging trends, and tips on how to navigate the world of AI, software, and your career. Every week, totally free, right in your inbox.
At first he snaps back, “But you don’t understand…!” At which point I gently nudge him to sip some tea and get a little rest.
PF PSA
Paul Ford adores technology but is utterly driven and motivated by outcomes, not libraries, frameworks, or methodologies. He will barrel through the kitchen, grab whatever’s in the fridge, and somehow concoct the most delicious soufflé you’ve ever tasted.
There are very few people like this. This is probably a good thing. He is immune to disciplinary boundaries. Programming? Writing? Art? Business? He sees them as one big blob. Mostly, he loves to make stuff and give it away.
My job as his business partner is to take the large nest of caffeinated squirrels that he calls a brain and translate the chorus of squeaks into good, clear signal that we can use to grow a business (and help clients). So that’s what I’m going to do, right now, with this newsletter, before I let him write anything else about vibe coding. Let’s translate the squeaks.
What’s Real
First, let’s be clear: Vibe coding is more than a fad. Anthropic knew exactly what they were doing when they gave Paul and other “Max” users $1,000 in credits. With the right approach and thinking, you can ship software with AI very, very quickly, in impressive quantities—good software that used to take months. In the hands of a thoughtful technologist, you can create something of real value.
But this doesn’t mean everyone can have a great app just by typing in a box. You need to understand the entire picture and all the gears that have to lock together. If you don’t, well, you can take the time to learn why things keep breaking. Either way is a good outcome, in my opinion. Figuring out why LLM-assisted coding breaks teaches you about software—and it teaches you about the limits of LLMs. Everyone should do it for this reason. But if you can already code, it’s a wild accelerant.
Organizations Aren’t Software
Paul and I have worked together for many years, mostly advising and selling services to other organizations. When we try to introduce paths for businesses to benefit from technology, we almost always have to navigate a bunch of self-imposed complexity and bureaucracy that slows everything down.
So Paul may be right about how transformative this stuff is. But he’s also hacking away in his home office, with no meetings, no sign-offs, no budget approvals—no approvals at all, actually. The tightest feedback loop in existence: One hacker building stuff, now with a legion of digital monkeys at his beck and call.
Now: Walk these wonderful products of vibe coding into any organization defending its status quo. What’s going to happen? Are they going to be excited? Are they going to say, “You’re right, let’s replace our stack of proven, expensive, documented tools with these new systems we’ve never seen before!”
No. They’ll be horrified. It’s sort of like when AI kept adding fingers to people: What looks exciting and novel at first turns out to be uncanny and unpleasant on close inspection.
Groups resist change—that’s nothing new. People get good at the old ways. They also formalize and institutionalize those ways. Now that the AI mothership has landed, entire nations of expertise are under threat. But this isn’t just about any one person digging in and defending their turf. This is about organizations as organisms resisting change.
Why? Two reasons:
First, organizational inertia. An organization’s habits are far more powerful than any individual’s. Organizations are a sort of living system, and once stable routines take hold, they develop a defense mechanism to maintain those routines. A social order forms (i.e., org charts with responsibilities) that codifies and institutionalizes those routines. The result is not only formalized habits, but habits that reinforce dependencies between people. In larger orgs, this is summed up as: “This is just the way we’ve always worked.”
When the greater good is threatened, power is exercised. Meetings are added. Key decision-makers push back and question the wisdom of moving fast with new technology. This rarely happens consciously; it’s simply framed as “risky” or “reckless” to move too quickly.
Second, a threat to collective status. An org chart is a distribution of power. When a groundbreaking technology like AI lands, it’s a threat to how things work—and how people work. More specifically, it calls into question someone’s experience and expertise. Think of expertise as a rare commodity that any individual can possess. If a new technology renders that expertise less valuable, the instinct towards self-preservation kicks in. Not only do individuals get defensive and protective, but their teams, their groups, and that entire branch of the org chart hunker down.
You’ve seen these patterns. This isn’t just your organization: This is every organization, from the grocery store to the Vatican. People insist AI is different, and will totally change everything—but they said that about every other technology, and organizations still exist. Take a breath.
Humans > Agents
When a technology lands that can disrupt your status, the best thing to do is appropriate it. Study it, embrace it, and put it behind your wall of expertise. It is now yours. Now you will decide when and if it will be leveraged more broadly in the organization.
Engineers are amazing appropriators. There’s an explosion of engineering tools powered by AI right now, and you can see the appropriation happening in real time. Tech jargon has taken hold, and curious business stakeholders have been shown the door. They may have messed with Replit six months ago, but they’ve all but given up. It’s back in the “experts’” hands again.
Why do people get excited about AI replacing employees? It’s not just because they like firing people (hopefully they don’t). It’s that they’ve been sold a lot of ideas about AI agents. Agents:
- Don’t care to get a bonus or get promoted
- Have no allegiance to any one group or person in your org
- Are inclined to do something rather than nothing
- Are unimpressed with your legacy systems
- Want you to be happy
Sounds great, right? Except that all of these qualities amount to change, and while they seem exciting, they run up against institutional inertia and threaten collective status. All those “good” qualities are “bad” for an organization trying to keep everyone on the chosen path. As a result, a lot of AI projects are getting shelved before anyone can figure out how this new technology works.
My advice to Paul—and to everyone else—is that yes, absolutely, there is immense potential in AI. It’s real. Sure, it’s overhyped, but it does things no other tech can do, and those things have a ton of business value, especially when it comes to shipping software. We’ve built a platform that seeks to take a lot of that value, put it on rails, and deliver it to organizations. Our clients are getting tools shipped much, much faster as a result—what used to take months now takes weeks.
At the same time, we’re realizing that while lots of people want to play with the new toys, they aren’t ready for the change. A lot of people come in for a pitch and spend their time telling us how our product seems interesting and they’d love to use it, but they just aren’t ready for it. And then they tell us they’ll be back in a few months.
That’s fine. We all have to learn together. But I will say: I think Paul is right in one way. The cat is coming out of the bag. The engineering team might say they need sixteen months, but then someone will go away and hack together a solution in a weekend. Engineering will say, “You can’t do that,” and list the reasons that approach will fail—but then people keep using the software they built and find ways to hide it from IT. Humans are very, very wily.
Companies resisted the internet, the web, email, updating Windows, GitHub, mobile phones, Linux, Macs—always for good reasons. Empires don’t fall in a day, but even the biggest organizations do change. This is the ideal time to take command and learn this stuff. It’s powerful, it’s comprehensible, and it really is a big part of the future, ready or not. And we’re here to help.