Checking In on the Magic
Last year, Eric Schmidt made some dramatic claims about what AI could build. Have his predictions come true?

Watch me pull a stable global economy out of this hat!
About a year ago Eric Schmidt, the former CEO of Google, got a little too excited about AI. He was trying to explain the software development process of the future, and suggested to a class of Stanford students that, were the U.S. to block TikTok, they could then go ahead and clone it.
“Say to your LLM the following,” he offered. “‘Make me a copy of TikTok, steal all the users, steal all the music, put my preferences in it, produce this program in the next 30 seconds, release it and in one hour, if it’s not viral, do something different along the same lines.’”
“That’s the command,” he continued. “Boom, boom, boom, boom.” Later, he walked that back a little—especially the IP violation parts—and asked for the video of the talk to be taken down.
He didn’t offer a time frame to achieve this. But it’s such an extreme vision, and it was so jarring to read the first time, that I think it’s worth revisiting a year later and talking about where we are when it comes to building software with LLMs. Let’s take it bit by bit:
“Make me a copy of TikTok.” If I type the entire prompt Schmidt provided into Claude Code, it says, “I can’t help with copying TikTok’s code, stealing user data, or infringing on music copyrights. If you’re interested in building a short-form video platform as a learning project, I can help you create an original implementation with proper licensing and ethical data practices.” Good job, Claude! Keeping me honest.
If I type in “make me a short-form video platform,” Claude goes to work, and after a while it produces a single HTML file that promises to let me upload a video to my browser. It’s a social network of one. It also doesn’t work; I click the “Upload Video” button and nothing happens. So it built the wrong thing, for the wrong cohort, at the wrong platform layer, and it fails to do the one thing required of it, but other than that, it’s very interesting.
Anyway—it’s conceivable that with a few hours or maybe a day of work, I could produce a really basic copy of TikTok that works for, say, a dozen people. Or not! That’s AI-assisted coding. We’ve made the most progress towards the Schmidtian vision there, but it still requires skill and intervention to build even a simple app—although less every day. (Check out Aboard!)
“Steal all the users.” From a practical POV, the infrastructure to fulfill this using LLMs doesn’t exist. Asking an LLM to hack into TikTok and “steal” all the user data, given the time allowed, wouldn’t work, and you’d be directly breaking the law. The same goes for spinning up thousands of user accounts and spidering the userbase.
Let’s enter a grayer area. Say that capability existed. An LLM would need to know where to look, what Discords to log into, and how to gain access to the data—perhaps by negotiation with some other bot in a virtual marketplace. Then it would need to confirm the user base is valid, download the data, load it into a database, and invite them to the app without hitting spam filters. Pure sci-fi—none of that works.
“Steal all the music.” You run into a lot of the same issues as above. From where? I suppose we’re back to spinning up thousands of accounts to download the music from TikTok itself. Or you could pay a tool like Suno to churn out lots of glossy, AI-generated, TikTok-friendly garbage music. It would be expensive. Or you could make a system to automatically hire thousands of musicians to make music for you and pay them fairly. That might even be your easiest path. No one would ever do that.
“Put my preferences in it.” Here, a lot has been done. LLMs are developing smarter profiles, which can be unsettling when suddenly, in the middle of a narrative, the LLM starts talking about your preferences and hobbies.
For example, last week I asked ChatGPT to explain the Many-Worlds interpretation of quantum physics and it said, “Given chaos and compounding effects, five years later that small difference could snowball into you living in another city, meeting different people, or never discovering modular synths.” (Emphasis mine.) It’s actually very eerie when it switches from the generic second-person to dropping in little details about my life. But anyway.
“Produce this program in the next 30 seconds.” We’re nowhere near that. The latest version of Replit sometimes debugs itself for three hours. In general, an idea has taken hold that the way to make AI better is to always use more AI, but my guess is that we’ll see an increasing understanding that LLMs are good for getting things going, summarizing things, and looking for issues and bugs, but that the state of “finished” is going to involve a human operator for a long, long time. Then again, a “long, long time” in the current milieu could be, like, six months.
“Release it.” AI is surprisingly good at devops—it has really helped me configure servers. Self-releasing software at this level of complexity is a ways off. It’s better at suggesting deployment strategies than it is at actually deploying.
“In one hour, if it’s not viral…” I suppose an LLM could interpret this command to mean it should set up logging and watch its own traffic, but good luck with that.
“…Do something different along the same lines.” This is what I think we’re calling “agentic loops”—the idea that you could keep iterating on an idea and having the LLM build relatively large things. But given the above, that’s nowhere near working.
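For the curious, the loop Schmidt is gesturing at is easy to sketch in code. What follows is a toy Python version with hypothetical stand-in functions—`build_app`, `measure_traffic`, and `vary` are placeholders of my own invention, because the real build-and-measure steps are exactly the parts that don’t exist yet:

```python
# A toy sketch of an "agentic loop": build, measure, and if it's not
# viral, "do something different along the same lines." Every function
# here is a hypothetical stand-in, not a real agent framework.

def build_app(idea: str) -> str:
    """Stand-in for an LLM generating a working app from a prompt."""
    return f"app implementing: {idea}"

def measure_traffic(app: str) -> int:
    """Stand-in for the agent watching its own traffic metrics."""
    return len(app)  # placeholder signal, not real analytics

def vary(idea: str, attempt: int) -> str:
    """Stand-in for generating 'something different along the same lines.'"""
    return f"{idea} (variation {attempt})"

def agentic_loop(idea: str, viral_threshold: int, max_attempts: int = 5) -> str:
    """Iterate: build the app, check traffic, pivot if it isn't viral."""
    app = build_app(idea)
    for attempt in range(1, max_attempts + 1):
        if measure_traffic(app) >= viral_threshold:
            return app  # "viral" by our placeholder metric
        idea = vary(idea, attempt)
        app = build_app(idea)
    return app  # out of attempts; ship whatever we have
```

The control flow is trivial; the hard part is that each stand-in function hides an unsolved problem, which is the whole point of the list above.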
So where have we landed? What’s tricky about AI—and where I have a lot of sympathy for Schmidt—is that he probably saw some demos of coding assistants, extrapolated what he saw to full-on hyper-mega-scale (the scale where he lives and works), and then went, welp, that’s the direction we’re headed.
That’s what the “hype cycle” really represents—the human tendency to extrapolate without concern for existing norms, social friction, or technological challenges. For the last year, we’ve been living in the shadow of this extrapolation, but now we’re seeing the limits and we can start to work against them.
I have faith in the tech industry here. We’re not immune to hype at all, but when the hype starts to fade, we enjoy making incremental progress and discovering all the new things computers can do. So while we’re not near the future promised—and I don’t know if we’ll ever get to that future—we are somewhere very different than we were a year ago, and the next year will probably be even weirder. Onward we go!