
Hegel in hyperspace.
This weekend I saw someone online talking about the philosopher Hegel, and I thought to myself, What’s all that about? I opened up Wikipedia, and I will say: Georg Wilhelm Friedrich Hegel has one of the most forbidding faces I’ve ever seen. The gravity of his countenance! But also his wiki page is enormous—countless sections and footnotes. I know Hegel is very important, but I had to make dinner and get the laundry done. All I wanted was a taste. A little Hegel dabble.
So—you know where this is going—I used ChatGPT. What is the dumbest question I could ask, the most humiliating way of learning? I thought for a while and typed, “How would Hegel interpret Star Wars?”
I won’t quote from the reply, because you can do it yourself, and reading other people’s AI-generated text is about as interesting as listening to people describe their dreams. But it wrote me an essay that explained how the Republic was Thesis, the Empire Antithesis, and the Rebellion and New Republic represented Synthesis. It went on a tangent about how Anakin/Darth Vader represents a dialectical journey to self-awareness. Okay!
Then I tried a few other philosophers. It produced a dialogue between Socrates and Yoda; told me that Marx would have loved the Ewoks (anti-colonial resistance), that Hobbes would have liked the Empire, and that Popper would say the Sith are the enemies of the Open Society.
This all sounded reasonable, in a goofy way. My thesis about AI is that translating from one big idea space you know less well (in my case, philosophy) into another you know better (Star Wars has been part of my life since I was four years old) is a very fast, rough-and-tumble way to learn.
To test this hypothesis, I asked ChatGPT to do a similar exercise: Design an operating system for the Death Star. I know a lot about operating systems—way more than I know about philosophy.
It suggested “a Unix-like environment for administrative tasks,” either Linux or an IBM mainframe system like z/OS, and Kubernetes to orchestrate the necessary cloud. It also suggested the name “DS-OS.” It wasn’t creative enough to invent a whole new kind of computing with faster-than-light fiber optics and helpful droids for sysadmins. It didn’t really design an OS so much as describe what’s out there now: it fell back on what it knows about operating systems and translated that into what it knows about Death Stars. It’s not thinking; it’s just converting.
So what am I actually doing when I translate Hegel into Star Wars? I’m not having a conversation, no matter how much it looks like that. I’m querying an online database and asking it to smush together a blob of Hegel-adjacent numbers with a blob of Star Wars-adjacent numbers, then render that as text.
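If you want a feel for that smushing, here’s a toy sketch in Python. This is emphatically not how GPT works inside; real models learn embeddings with thousands of dimensions and decode them with a giant neural network. The three-number vectors below are invented purely for illustration. But the basic move is the same: concepts become points in a shared space, and “translation” is just asking which points land near each other.

```python
# A toy, hand-made "idea space." Not how GPT actually works; real models
# learn embeddings with thousands of dimensions. These made-up vectors
# score each concept on three invented axes: [conflict, resolution, power].
from math import sqrt

concepts = {
    # Hegel-adjacent blobs of numbers
    "thesis":     (0.1, 0.2, 0.7),
    "antithesis": (0.9, 0.1, 0.6),
    "synthesis":  (0.3, 0.9, 0.4),
    # Star Wars-adjacent blobs of numbers
    "republic":   (0.2, 0.3, 0.6),
    "empire":     (0.8, 0.1, 0.9),
    "rebellion":  (0.7, 0.8, 0.2),
}

def cosine(a, b):
    """Similarity between two concepts; 1.0 means pointing the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

# "Translate" each Hegel concept into the nearest Star Wars concept.
star_wars = ["republic", "empire", "rebellion"]
for hegel in ["thesis", "antithesis", "synthesis"]:
    match = max(star_wars, key=lambda sw: cosine(concepts[hegel], concepts[sw]))
    print(f"{hegel} -> {match}")
```

Run it and the toy arrives at the same mapping ChatGPT gave me: thesis lands on the Republic, antithesis on the Empire, synthesis on the Rebellion. Not because anything in there understands Hegel or Star Wars, but because the points sit where I put them. Converting, not thinking.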
And I would argue that I am learning from that output, in the same way that one learns by hanging out with nerdy friends, or by reading online message boards while drinking. I now know that there are key Hegelian concepts—Thesis, Antithesis, Synthesis—that are wrapped up in a dialectical process, and that these concepts can map to big sociopolitical structures like governments, empires, Death Stars, Skywalkers. Let’s be clear: That’s all I know. And the computer doesn’t “know” anything; it’s just taking stochastic swings at the pitches I throw. But—and this is critical—I now know more about what I don’t know about Hegel.
I’ve been thinking a lot about our conversation with Clay Shirky on the podcast this week. Clay talked about how ambiguous the relationship between AI and education is. Students are using ChatGPT so much that they can’t conceive of a world in which they don’t use it—they’re getting cut off from other ways of learning. The future of AI in education is incredibly unclear, somewhere between “this should be in every classroom” and “we must ban this forever.”
It’s like that everywhere. People on Bluesky want to hurl AI into the sun; people on LinkedIn want to let it run the world. I increasingly feel trapped between these cohorts, because the fact is, I engaged with philosophy in a way that appealed to me for an hour on a Sunday afternoon. I’m now emotionally ready to read the Hegel Wikipedia page, and I’ll probably do that on the train ride home. Maybe in 2030 I’ll be ready to read some Hegel.
I had Claude explain some basic electrical concepts to me last week (low voltage, don’t worry!), and when I ran it past my wife, who actually understands electricity, she confirmed it was okay. And, of course, the code I write with AI is real and runs well, and the code artifacts we’re generating at work are real and run well. The computer doesn’t care that a computer wrote the code, and increasingly, neither do I. I don’t know what the “AI center” looks like, but I hope we can eventually find it.