AI’s Hot, Hot Mess
Karen Hao’s Empire of AI is less about computers becoming human—and more about humans being very human.

They’re sort of like monopolies, but worse.
I recently read Karen Hao’s Empire of AI: Dreams and Nightmares in Sam Altman’s OpenAI. I had a lot of little thoughts—and one big one.
The book covers the history of OpenAI, from its founding around a decade ago until now. In that sense, Empire of AI is useful, because the story of AI development is dense, confusing, and punctuated by eruptions of bizarre behavior.
Take, for example, the way that OpenAI was a not-for-profit and then suddenly presented itself as a Google competitor. Or the way so many of its employees publicly believe that computers will soon become gods and rule over us—and that we must build the god computer in order to stop it. Or the time the OpenAI board fired Sam Altman, but then he was CEO again soon after.
The AI era is a devilish, abstract, and complex subject, filled with incredibly slippery people like Altman and Elon Musk. Hao takes a lot of big swings, and they don’t all land. The first third sometimes feels more like it was written for The Internet instead of the general reader, complete with footnotes about extractive capitalism.
But after that, it really gets going. What emerges is a relatively normal story that’s less about computers becoming human, and more about humans being very human. Sam Altman is a ridiculously charismatic leader with unbelievable wealth and access to all of Silicon Valley, and he buys into all the big Valley myths: AI, blockchains, startups, and so forth. He tells everyone what they want to hear, loves dealmaking, and also wants to be, at the very least, the governor of California (and does focus groups to that end).
Altman builds an organization with an almost messianic vision: Create true intelligence with computers! With money from Musk and others, he creates a not-for-profit to ethically bring that vision into the light. Early employees connect to the vision. Things meander for a while. But then, almost shockingly, cool applications of the technology start to emerge (like ChatGPT), as does a need for more and more capital.
Rather than transfer the technology to larger firms in exchange for funding, or proactively reorganize into a more traditional company, OpenAI splits into fiefdoms—“boomers,” who believe in AI as the cure for all ills, and “doomers,” who believe AI represents enormous danger. All the normal fiefdoms of a scaling tech company develop, too. Since no one fully deals with anything, a series of uncontrolled implosions follows. Musk departs in a huff; Dario Amodei leaves to found Anthropic; the relationship with Microsoft is a continual mess in every direction; the product keeps breaking; money flies everywhere; people get fired all the time, including Altman, who gets fired and rehired. It’s a corporate tech story as old as time.
Hao also reports on lesser-known—and more grim—parts of the industry, spending time with content moderation and classification gig workers in Venezuela, Kenya, and elsewhere. These are subcontractors of subcontractors, sitting by their computers waiting for tasks to arrive so that they can make a few dollars, or pennies. This is a very special part of the book, and I could have done with ten times as much of it—I wish the stories of the Venezuelans and Kenyans had been woven into the narrative from the beginning. These gig workers trying to survive on weird dregs of work that beep into their inboxes in the middle of the night are the most likable characters in the book; you root for them, and they get let down.
One thing that stuck with me—Hao spends time with Oskarina Veronica Fuentes Anaya, a well-educated Venezuelan woman who, in the wake of her country’s economic collapse, cobbles together enough content-classification gig work to sometimes, but not always, scrape by:
It wasn’t the work itself Fuentes didn’t like. It was simply the way it was structured. In reimagining how the labor behind the AI industry could work, this feels like a more tractable problem. When I asked Fuentes what she would change, her wish list was simple: She wanted [the gig work company] Appen to be a traditional employer, to give her a full-time contract, a manager she could talk to, a consistent salary, and health care benefits. All she and other workers wanted was security, she told me, and for the company they worked so hard for to know that they existed.
In an awkwardly ironic way, that’s also what a lot of the OpenAI employees want—they have salaries and benefits, but they’re clearly beholden to the needs and desires of Altman and his small cohort. He’s not the worst CEO by any means, and in some ways, he’s even a good one. But everyone is craving some sense of order.

AI is like that as well: It’s a technology that promises to bring order, perhaps “total” order in the form of AGI that rules over humans, or at least thousands of little assistants doing your tasks for you. But everything around it is incredibly messy, the absolute opposite of how the tech industry describes itself and what it claims to stand for.
Given its need for energy and tons of GPUs, and their effect on the climate, AI is the most literal hot mess in the world. (Hao spends much of the last third of the book detailing the ecological effects of AI. I like that stuff, but it might drag a little for others.) Everything about AI has been disorderly. The products often just don’t work. Vibe coding tools fail to complete tasks, or blow away your database. Everything hallucinates. There just don’t seem to be any guardrails—and in an era when people are ripping up existing guardrails and throwing them into the (warming) ocean, we desperately need global agreements and regulatory frameworks. We’re not going to get them.
The sad part is that OpenAI anticipated this—it was founded to align AI with human values and to provide stability for the world in an age of intelligent machines. But the fact that it was founded by Elon Musk and Sam Altman should have been a clue that it would have some challenges achieving this goal. Along the way, it struck absolute product gold in the form of ChatGPT, and that particular part of its mission was lost.
The last chapter, “How the Empire Falls,” gives you a sense of where Hao lands. I land someplace a little different: I like these new technologies and I’m hopeful that they can be of utility to lots and lots of people. I want AI to get cheaper and faster and more efficient, and more reliable, even if that makes it more of a tool and less of a simulated human. I don’t know when stability returns to the world, or if it looks like how “stability” used to look—but nobody wants to spend all day thinking about Sam Altman or Donald Trump or Elon Musk while a computer makes up random facts and sets the ocean on fire.
And there was one big thing missing from the book: the users. We show up in the shadows. Hundreds of millions of people are using these products, and giving OpenAI money. Even if we’re all being duped, there are hundreds of millions of us, experiencing this technology both individually and collectively. Our careers are shifting, code is being auto-generated, and the prompt has entered the collective consciousness. There’s more to the tech industry than the harms it causes.
That said, I don’t think we’ll get to the next step under the guidance of OpenAI and Altman. They launched a hell of a thing, but people need stability to take risks. It can’t all be opportunities and AGI. We need craft and discipline, and people need to know the companies they buy from or work for. They need to see them as human. One person working on this, not surprisingly, is Tim O’Reilly. He’s been describing how an open ecosystem around AI could work—one that is not OpenAI dependent. Empires fall in lots of different ways, and their citizens keep going.