I’ve been telling everyone to check out the AI company DeepSeek for weeks. Weeks! But I didn’t expect that when people finally did pay attention to it, it would crash the economy. Or at least Nvidia’s stock price. Which is apparently “the economy” now.
Why are people and/or markets upset? Well, as we animatedly discussed on the podcast this week, it’s because DeepSeek is: (1) a Chinese AI company; (2) with a commercial product that is really cheap to use; (3) that was also comparatively cheap to make; and (4) that is openly distributed, with published research.
All of this means:
- Sanctions and restrictions aren’t working to limit competitive LLM development. (Sorry, USA.)
- You may not need as many GPU chips as predicted to make AI happen. (Sorry, Nvidia.)
- You may not be able to buy as big a moat as you thought. (Sorry, Microsoft and Google.)
- If you offer AI services you will have to compete on price. (Sorry, OpenAI and Anthropic.)
- If you’re making an open-sourced LLM at great cost to your company in order to keep everyone on their toes, you just got slapped across the face in a crowded restaurant. (Sorry, Meta.)
As the market came to these conclusions and many others, tech stocks tanked and, for a day or so, everyone ran around like wind-up toys. I guess all the hyperwealthy goofuses who were rambling about AGI in the East Room of the White House have to fly back to Silicon Valley and ask their personal chefs to prepare a big dish of crow—and then go call their nuclear reactor guy to ask about the return policy. Or not! They said they weren’t mad.
Except now they’re saying they are mad (I swear to God dumb things keep happening as I write this post) because OpenAI claims that DeepSeek used OpenAI’s models to extract knowledge. Which, given that OpenAI has generously helped itself to vast swathes of culture and is getting sued by the NYTimes for something comically similar, means that a meta-hypocrisy vortex has opened in the middle of the Pacific Ocean and is about to swallow the entire earth.
Anyway, like I said, the market is already bouncing back, because the invisible hand has an invisible brain—except that this entire technology tempest has inspired the national security apparatus of the United States (which, as of Wednesday, apparently still exists) to start “investigating” DeepSeek as a threat to whatever the United States is at this moment.
Oh and there’s also the possibility it’s all a big scam and we’re being played for fools. There are rumors of secret GPU mills, fake APIs… Who could even tell? Anything’s possible in 2025, even good things. But the research is open; we’ll see if it’s reproducible, assuming anyone can think a thought.
So…now what? Well…personally? Look, I’m very worried about the world, but at the same time, this parade of hubristic folly, compounded with geopolitical clownery, is the greatest entertainment on earth. And I also like it because it validates my priors.
It’s been clear for a while now that the general trend in AI is “these LLM things will be really fast and really cheap, and likely free or open-sourced”—we just don’t know when. At what point will you be able to train a useful LLM for a dollar and run it directly on your phone? I don’t know that date—but it just moved up.
The deeper I go into this tech, the more the AGI claims start to feel like blockchain claims: There is an assumption that everything will change, including human behavior. I’m just too damned old to buy it, and if they hatch the cosmic egg in San Francisco and give it the nuclear codes, then all to the good. But I’m an old-school nerd, and I’m excited for three things when it comes to LLMs: Cheaper, faster, and local.
Cheaper will let us try more stuff. Because if you can try a thousand approaches to the same problem—or a million—in an automated way, you’re going to find all kinds of new patterns and ideas. This glut is less exciting when generating essays to cheat on your homework, but more exciting when, before you go to bed, you ask your Roomba to write some new code to help it avoid cat toys, encouraging it to try and measure as many approaches as it wants. When you wake up, it sends you a report. Do you want a PDF emailed to you by your Roomba? To me, that is paradise.
Faster is the corollary to cheaper, and will let us see results and find new patterns more quickly. This stuff is so, so slow compared to other things a computer can do, and I’m excited that people are finding ways to make it faster to train and faster to evolve. All of my ideas for “faster” are currently terrible—like, imagine a mouse cursor that is automatically generated as a set of funny shapes based on what you’re doing, like a version of Tumblr from hell. But cheaper and faster brought us the windowing interface, the World Wide Web, and PlayStations.
(I should note, as co-founder and marketer, that the next version of Aboard is entirely about cheaper and faster for organizations. Stay tuned!)
Local means we can be sloppier with our inputs and not be constrained by bandwidth. I have a lot of emails and PDFs that would be fun to feed to an LLM to make a custom search engine. I’d love to try thousands of different prompts running overnight and then get my LLM to summarize the results. I mean, imagine the discovery process if you’re suing OpenAI? What do you use to summarize the terabytes of emails?
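That overnight prompt-sweep idea is simple enough to sketch. Here’s a minimal, hypothetical harness in Python: `run_local_model` is a stand-in for whatever local LLM you’d actually call (llama.cpp bindings, Ollama’s HTTP API, etc.—none of which are assumed here), and the “summary” is just a count rather than a real model-written report.

```python
# A toy sketch of "run thousands of prompts overnight, summarize in the morning."
# run_local_model is a placeholder -- swap in a real local LLM call.

def run_local_model(prompt: str) -> str:
    """Placeholder for a local LLM; here it just echoes the prompt."""
    return f"answer to: {prompt}"

def sweep(base_question: str, variants: list[str]) -> dict[str, str]:
    """Try every prompt variant against the same question, keep each result."""
    results = {}
    for variant in variants:
        prompt = f"{variant}\n\n{base_question}"
        results[variant] = run_local_model(prompt)
    return results

def summarize(results: dict[str, str]) -> str:
    """In real life you'd feed the results back to the model for a report;
    here we just count what was tried."""
    return f"Tried {len(results)} prompt variants overnight."

if __name__ == "__main__":
    variants = ["Answer briefly.", "Think step by step.", "Cite sources."]
    print(summarize(sweep("What is in my PDF archive?", variants)))
```

The point isn’t the code, which is trivial; it’s that once the model is local and cheap, the loop around it is just a for-loop you can leave running while you sleep.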
In a very abstract way, I think that the real power of this technology is that you can compress much of human knowledge into a big, sloppy, slightly inaccurate blob, and fit it on a USB stick. It’s much more exciting as a kind of always-on, free utility built into your computer than as a big internet service—and much more private and trustworthy. Is there a future where instead of going to the App Store, I just tell an LLM to make me the app I want? I don’t know! I don’t see the path yet, actually. But trying to get closer to that is sure interesting.
In any case, seeing a nice order-of-magnitude-or-so, utterly disruptive jump in a week gives me hope that yeah, this is less a product, or even a platform, than a…tool. Once it’s fast, cheap, and local we—and not just nerds like me—can start exploring where the limits are, and see more of where it breaks. Then we can figure out what it means to all of us, instead of having that dictated to us. I think that would be good.