the aboard newsletter

The Extremely Human Last Mile

Is AI a big scam, or will our jobs be on the chopping block by 2026?

by -
Image of three hikers climbing up an incline, silhouetted by a golden sunset.

Maybe AGI is at the top?

Two contradictory AI things are rattling around in my head, and I’m trying to put them together and make sense of where we are. The first is Builder.ai declaring bankruptcy after failing to meet revenue targets. The second is a set of statements by Anthropic founder Dario Amodei predicting mass AI-driven unemployment.

Builder.ai is (was?) a little hard to describe, but their basic pitch was something like: Come to our website with your app idea, answer a bunch of AI-prompted questions, and put everything you want your app to do into a shopping cart. We’ll use our AI bot Natasha to do a lot of the prep work for your software, then match you to a developer team to finish the work, and we’ll keep using AI to speed up the whole process.

We have been checking in on Builder.ai for months, and I always found it hard to make sense of what they offered. “Our vision is that you order an app, like you would a pizza,” reads their website. “Then we produce that custom app, using the kind of assembly line used to build cars.” Metaphor-wise, that’s something. The features were also unusual—they said they “use facial recognition to check that the developer working on your code is the same one Natasha picked.” Absolutely how I like to have my app built.

From the outside, it looked like they were setting themselves up to be the one true AI-assisted future of IT consulting—accelerating everything with chatbots and self-service. It makes sense: IT consulting is a trillion-dollar industry that is absolutely ripe for AI disruption, given how reliant it is on (1) generating documentation no one reads; and (2) performing sub-standard code development and maintenance. 

Builder.ai raised a ton of money, much of it from the Qatari sovereign wealth fund and from Microsoft, and was valued at more than a billion dollars. But their plan didn’t work out. Now they are being accused of faking revenue numbers through dodgy accounting (which they deny), and they’ve filed for bankruptcy in multiple markets.

It’s a pretty shocking collapse. It’s confusing, and there are all kinds of rumors out there, but the TL;DR is: They were supposed to be the AI future of business app development, they missed their promised numbers, investors pulled their money, and now they’re bankrupt.

Meanwhile, over at Anthropic, CEO Dario Amodei is doing a bunch of press hits about how AI is coming for our jobs. Amodei told Axios that half of all “entry-level white-collar jobs” are on the block, and that AI will “spike unemployment to 10-20% in the next one to five years.” Which is a sort of ridiculous range of outcomes and durations. Unemployment is currently around 4%, so things could either be two-and-a-half times that bad in half a decade, which seems difficult but maybe manageable…or we could be in a shock AI depression after Christmas. Who knows!

For a little more context, and a little less statistics, here’s what Amodei said to Anderson Cooper on CNN:

You know, people have adapted to past technological changes. But I’ll say again: everyone I’ve talked to says this technological change looks different. It looks faster, harder to adapt to, and broader in scope. The pace of progress keeps catching people off guard.

This is true; it really has been a wild couple of years. I can’t identify any other technology that’s reached hundreds of millions of people this quickly—not even social media. Amodei again: 

And so I don’t know exactly how fast the job concerns are going to emerge. I don’t know how quickly people will be able to adapt. It’s possible that everything will turn out okay—but I think that’s too sanguine an approach. I believe we need to raise the alarm. We need to be concerned. Policymakers need to worry about it.

I like Anthropic and I see Amodei as a huge nerd (complimentary), but he’s also an AI/AGI true believer (check his website for lots of details), so you have to factor that in. He is pretty consistent in his public statements and his company’s actions. He has some sense of civic responsibility, and he seems focused on one industry at a time. In 2025, when tech leaders are either on shrooms, scanning your eyeball to create currency, talking about robot overlords, or dabbling in fascism (or all of the above?), I find him a person worth paying attention to.

Image of a man in an office wearing a snorkel and goggles, with a chart behind him showing the numbers going down so much they have to extend it with Post-it notes.
Portrait of a reasonable person.

So a reasonable person, looking at this industry and not really into all the drama, might ask: Which is it? Is AI the big scam, or is everything on the chopping block by 2026?

I think the answer is actually really simple, and it’s something I keep mumbling to myself: AI is great at the first mile because it seems human, but bad at the last mile because it’s not human.

First-mile tasks include “writing a summary,” or “looking for interesting companies in the cannabis space,” or “reviewing inbound emails and classifying them in the CRM,” or “finding the right SaaS tool for my church-membership drive,” or “listing the kind of software components my new online hat store needs”—and for all of these, LLMs often work surprisingly well. They simulate basic human skill sets with varying degrees of talent, and they’re fast, too.

But then there’s the last mile. That includes “launching the app at the same time the marketing campaign rolls out in four languages” or “finishing the five-hundred-thousand-line migration from COBOL to Java” or “completing the oral defense of your PhD thesis.” This set of skills might be within reach of AI, but I don’t buy it yet.

You can make convincing video clips with AI, but have you ever seen what it takes to make a movie? You need ideas, storyboards, script rewrites, casting, and filming—and once you’ve filmed, you need things like color grading and cutting trailers and distribution. The storyboarding step? Utterly vulnerable: a perfect AI use case. You could even make a complete film from lots of little clips to explore ideas. But the last-mile things like marketing and the red-carpet rollout? Less vulnerable.

Builder.ai did a lot of the first mile—those tools still work on its website, and they’re very trippy. I’d highly recommend checking them out while everything is still online. But when it came to actually delivering, you needed humans in the mix, and that didn’t add up to enough revenue to keep investors motivated. Something big was missing.

Meanwhile, note that Amodei is being specific—he’s not saying all jobs are going away. What’s at risk are entry-level white-collar jobs—first-mile jobs—which, to be fair, are a big part of any large, bureaucratic company. Researchers, anyone with “assistant” or “junior” in their title—that work won’t be understood as “labor” in the future so much as “features.” So an enlightened, AI-centric policy would be one that helps place young or new employees deeper in the process, not just as document-compilers, but adding more human value further along the journey.

I wish I could wave a wand to give everyone control over their own destiny, but I can’t, so all I can say is: If I were just entering the workforce right now, I’d try to figure out where the last-mile tasks were being done, and then I’d try to be as helpful as I could over there, even if it meant doing a lot of coffee runs. I would gladly be a production assistant helping to schedule conferences; I wouldn’t want to be one of 30 junior research analysts working in a big bank. I didn’t fully buy Builder.ai, and I don’t fully buy Amodei—but there’s something there, too.