AI Summer School #4: Chill Out About AGI
Aboard uses AI to help build software, but in just a few years, AI will gain sentience and take over our work, personal lives, and even brains—just kidding! Yes, the fourth and final installment of “AI Summer School” is about AGI, or “Artificial General Intelligence.” What does it mean? Can anyone agree on a definition? And if no one can define it or agree on those definitions, what’s the likelihood that all these Silicon Valley AGI predictions will come true?
Show Notes
- Our previous installments: “How LLMs (Don’t) Think,” “Call My Agents,” and “From Agents to Apps.”
- Paul’s 2015 piece on Nick Bostrom’s Superintelligence in the MIT Technology Review.
- Paul also recently wrote about the book he references in the episode, Karen Hao’s Empire of AI.
Transcript
Paul Ford: Hi, I’m Paul Ford.
Rich Ziade: And I’m Rich Ziade.
Paul: And this is The Aboard Podcast. It’s a podcast about how AI is changing the world of software. Aboard is a company that uses AI to build software, so we feel qualified. Rich, it’s summer.
Rich: Mmm…not for long.
Paul: No, not for long. We have done three wonderful episodes of our AI Summer School to get everybody ready to go back to school in September and convince their boss that they’re AI experts. And we have one episode to go. I think we should just stop right here and get into it.
[intro music]
Paul: So first of all, we’ve had two wonderful people on the podcast in the last couple of weeks: Adam and Kevin, the partners who lead our engineering team. We’ve talked about what LLMs are, using ChatGPT to show how the pieces all fit together. We’ve talked about what agents are, little bits of code that run connected to LLMs. And we’ve talked about how to build a business app, using Aboard as an example of how the pieces of an application fit together. Felt a little markety, but it’s also what we know best. So that was good. I think people were with us. But now we’re going to go broad. What are we going to talk about today?
Rich: A. G. I.
Paul: So you got artificial intelligence. What’s that G stand for in the middle?
Rich: General.
Paul: General. Meaning—and I’m going to give you one, but there’s a million definitions of this thing, which is always a good sign.
Rich: Yeah.
Paul: But it means that a computer can do intelligent tasks kind of semi-unsupervised. Like, complicated, human stuff can happen without all the weirdness that we associate with AI today.
Rich: Mmm hmmm.
Paul: Not all the hallucinations. It can add up numbers. You can give it complicated tasks. Off it would go.
Rich: I’m looking forward to this. You prepped. I did not.
Paul: Mmm.
Rich: I’ve done my share of reading to try to understand this stuff.
Paul: Yeah.
Rich: And I still don’t fully get it, but I’d love to hear more about what this is.
Paul: Well, I mean, the great news is if you search for it, you’ll find that everyone, like, Amazon Web Services, is ready to tell you what AGI is.
Rich: Oof.
Paul: And I want you to think about what it would be like if you could have an Amazon Web Service Intelligence.
Rich: I mean, it’s…
Paul: You just get it off the menu, right?
Rich: Yeah.
Paul: And so, you know, a lot of it comes down to humanlike. There’s an extraordinary Reddit community around AGI.
Rich: Oh, I don’t know. I’m in a lot of AI subreddits and I’m not in this one. How’s this one going?
Paul: Well, you know, I don’t actually follow this one closely, but I do follow the ChatGPT one, and on a regular basis, it’ll be someone who’s like, “I believe I’ve uncovered a new form of physics.”
Rich: Yeah.
Paul: It’s a lot of that. What you realize, and unfortunately I do think the last, let’s say, 10 or 15 years of common civic life have shown this, is that the boundary between what we thought of as a consensus reality and how people perceive the world? Is very blurry. You and I talk about this a lot, and it’s one of those things where you realize how often you need to repeat it. These are not conscious. They do not think. They generate text.
Rich: Is AGI proposing that they are?
Paul: That’s really, I mean, once you get there, now you’re in the world of AI philosophy. The general means that it’s kind of a thinking thing.
Rich: Mmm.
Paul: And it’s not just the Turing Test—
Rich: It’s conscious?
Paul: Yeah.
Rich: Do you have a good definition? Can you boil this down for me?
Paul: Nobody does.
Rich: Oh.
Paul: That’s the whole thing. Because nobody can define human intelligence. Nobody can define consciousness. And everybody’s sure they can nail it down. AGI is when a computer can do human-like tasks in a repeating—
Rich: Computers can already do human-like tasks. What? Be an advocate here for a second, because I’ve had a hard time pinning this down.
Paul: Well, I don’t believe it. I’ll tell you what the problem is. I think that it’s this kind of, like, very meta marketing idea by people who have a particular ideology around technology.
Rich: Uh huh.
Paul: And they’ve come up with this way of encapsulating this magical future state where the computer will do everything and own everything and run everything. I’ve been reading the Karen Hao book, Empire of AI, and if you trust that book, what it feels like is that the OpenAI people and Elon Musk and everybody kind of connected to that world are just convinced that they’re the right ones to create the super-intelligent thing.
Rich: Okay, but hold on.
Paul: Okay, I’m holding on.
Rich: Hold on!
Paul: I’m holding on. I’m holding on.
Rich: What’s interesting about it is that we seem to have glossed over, like, an agreed-upon definition, and yet people talk about it in terms of timeframes. Like, I heard the Anthropic guy—
Paul: Yeah.
Rich: Say we’re about four to eight years away, I think.
Paul: Isn’t that great? I love four to eight years.
Rich: Well, no, but that means, that presupposes that he has a fixed definition in his mind of what AGI is.
Paul: They all do. And it’s all—
Rich: Do they agree on it?
Paul: No. Nobody does. And then, you know, it feels like we’re getting close. This has been the history of AI for 70 years.
Rich: No, but hold on. When someone says it’s four years away, that means he has a preconceived idea of what it is.
Paul: You know what, Rich?
Rich: What is it?
Paul: You know what, Rich? I’m going to go straight to the experts. The only people who are going to be able to tell us.
Rich: Who’s that?
Paul: McKinsey and Company.
Rich: Oh, no, dude.
Paul: Yeah.
Rich: Is this real?
Paul: Yeah.
Rich: No.
Paul: “What is artificial general intelligence? It is a theoretical AI system with capabilities that rival those of a human. Many researchers believe we are still decades, if not centuries, away from achieving AGI.” And then they go from there. Okay? So that’s where McKinsey’s at. That’s the big, those are the, that is—
Rich: That’s the definition?
Paul: I’m telling you. It’s just like that everywhere you go. You’re looking for something concrete. This thing isn’t real.
Rich: This is a spoiler.
Paul: I got great news for you, though. McKinsey has a chatbot and we can talk to it.
Rich: Ask it what AGI—what?
Paul: Yeah.
Rich: What’s the name of the chatbot? Mickey? Mickey.
Paul: [laughing] It’s Purdue Buddy. No, it’s called—
Rich: [shocked laugh] Oh!
Paul: Aaaay, good times. If you don’t know what that means—
Rich: You can actually request oxycontin prescriptions right from the bot. Deliver them to your door.
Paul: I don’t know how they got regulation passed that lets them prescribe—
Rich: It’s amazing. You can get painkillers right through the McKinsey bot. [laughter] Oh, no!
Paul: To all of our friends who work at McKinsey, there you go!
Rich: We’re done.
Paul: We’re done.
Rich: Ask Mickey.
Paul: It’s Ask McKinsey. I’m here. It’s a chatbot. It’s got questions—oh, you know what I love? It’s got trending questions.
Rich: Sure.
Paul: [laughing] Because there’s such heavy-duty usage.
Rich: Is this real? Are you at, like—
Paul: No, I made up and faked a chatbot from McKinsey and used DOM injection to put it on—
Rich: Can you double check that it’s not going to bill us for this?
Paul: Oh, no, we just paid $10,000 just opening this page. [laughter] Actually, no, you didn’t pay $10,000. You’re going to pay it later though.
Rich: You sure are. Okay, go ahead. I want you to read this verbatim. Tell it to define AGI in one paragraph.
Paul: Okay. I’m typing it in, Rich.
Rich: Jesus.
Paul: It’s recognizing intent. “For now, I can only answer questions related to GenAI, AI tech, media, and telecom.” So it can’t do it. [laughter]
Rich: What!?
Paul: And now it would like me to learn more about McKinsey and how we partner with clients.
Rich: You’re joking.
Paul: Yeah, and then there’s actually a button where I can turn myself into the Department of Justice—no, there’s none of that.
Rich: Uh huh.
Paul: Let’s try again. “How will AGI affect the mining industry?” This is the sort of thing they love. I had to log in for this, man. If I’m getting a follow-up email, I’m going to be real heartbroken.
Rich: You’re getting a follow-up visit, my friend.
Paul: I used my personal email, so I think we’re good. I didn’t use Aboard. Oh boy, it’s slow. “The integration of AGI in the mining industry is poised to transform operations, although its impact will vary depending on the specific applications. AGI, meaning a intelligent supercomputer that is has more capacity than humans, will provide predictive maintenance and operational….” Boy, this is bad.
Rich: Yeah, it’s bad.
Paul: Yeah, it’s pretty bad. I didn’t actually mean to—I thought we’d kind of have like a little chatbot experience here and it’d be kind of fun.
Rich: Yeah.
Paul: But clearly if anything, McKinsey has not yet achieved—
Rich: You’re not having a good time with McKinsey? That’s really shocking. That’s really shocking.
Paul: I mean, look, this is a good, honestly, I didn’t even mean to tear into them that much. But, like, this is…
Rich: Can we try—I mean, hold on. This whole episode is about us educating people—
Paul: Yes. And we’re educating them that—
Rich: On AGI.
Paul: We’re educating them that it is a frickin’ marketing term.
Rich: Do you have a definition in your head?
Paul: I’m gonna tell you this straight-up. So, like, here’s what happened. OpenAI got a lot of really smart people together. There’s a lot of communities. And there was a book. It was called Superintelligence. Okay?
Rich: How old is this book?
Paul: It’s about 10, 11 years old, maybe more.
Rich: Okay. Predates all the hype.
Paul: In 2015, the MIT Tech Review reached out to Paul Ford, me, and they said, “Hey, you want to review this book? Everybody’s talking about it. It’s very buzzy.”
Rich: Oh!
Paul: I’m not quite sure when it came out, but, like, not too long before that. And the book is a huge thought experiment about the risks of an AI: that there’s a very, very small percentage risk that this thing could be real, but if it is, it’s existential for humanity.
Rich: Okay.
Paul: Okay, so if you—
Rich: So you read the book?
Paul: Of course I did. I had to review it.
Rich: Oh, you wrote a review?
Paul: I did.
Rich: How many stars?
Paul: Like, three. Here’s the thing. So the idea of Superintelligence is, like, okay, let’s say you get a computer, and it suddenly has a kind of consciousness. And it doesn’t even have to be fully self-aware.
Rich: Mmm hmm.
Paul: But it has to be able to execute really, really well in a way that seems conscious. Okay? So the canonical example in the book is the paperclip maximizer, and I’m going to tell you what the paperclip maximizer is.
Rich: Okay.
Paul: You give a computer the instructions to make a lot of paperclips.
Rich: Okay.
Paul: And it’s got a robot, and it makes, like, a million paperclips, but you didn’t tell it to stop. And it’s like, “Well, I got to make more paperclips.” Well, it has resources. It knows what to do to make paperclips. So it goes out and it finds more—maybe it goes online and it starts trading in aluminum, gets aluminum delivered, makes more paperclips. Makes more paperclips. Until finally, its agenda is to turn the entire universe into paperclips. It builds spaceships. You can kind of play that thought experiment out: “Well, hold on, we’re going to put some guardrails in.” So the computer starts to check that it’s not making too many paperclips, but it can’t be totally sure. So it starts to build more and more tools to make sure that it’s not building too many paperclips, until it’s used all the resources in the universe.
Rich: Mmm hmm.
Paul: To make sure it’s not building too many paperclips. So it’s basically this recursive and combinatorial aspect of computing: you add a level of consciousness and intent and the ability to execute in the physical world, and that can just go out of control.
Rich: Okay.
Paul: That’s one of the many, many risks.
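[Editor’s note: for the programmers in summer school, here’s a toy sketch, in Python, of the paperclip maximizer as the kind of runaway loop Paul describes. It’s our illustration, not anything from Bostrom’s book; the function and the numbers are made up.]

```python
# A toy paperclip maximizer (hypothetical illustration, not from the book).
# The objective has no stopping condition, so the agent converts every
# available resource into paperclips, and into machinery for checking
# that it hasn't made too many, until nothing is left.

def paperclip_maximizer(resources: int) -> None:
    paperclips = 0
    checkers = 0  # machines built to verify it isn't overproducing

    while resources > 0:  # nobody ever said "stop at a million"
        paperclips += 1
        resources -= 1
        if resources > 0:
            # It can never be certain it hasn't made too many, so it
            # also spends resources on verification machinery, which
            # itself consumes resources, recursively.
            checkers += 1
            resources -= 1

    print(f"{paperclips} paperclips, {checkers} checkers, nothing left")

paperclip_maximizer(resources=1_000_000)
```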
Rich: Yeah.
Paul: Okay? And over time, stuff comes out. Like, this guy who wrote this book, Nick Bostrom, said some really racist stuff on a message board years ago, and then he had to apologize. A lot of human stuff has emerged since this came out. I went back and read my review not too long ago?
Rich: Mmm hmm.
Paul: And basically I was just like, [shruggie noise]. I was just, like, “Whatever.” It just didn’t— [laughing] Even then. But I’m telling you what, I went back and read it and nothing much has changed.
Rich: I was going to say, looking back…
Paul: What has happened is that LLMs as a category of technology have shown up, and they’re able to reproduce language, but they don’t think. And they fail to execute on plans in really common ways that are currently unfixable.
Rich: Yup.
Paul: And what you have is a lot of true believers who are like, “We’re going to fix it.” Because they believe in a kind of super technology optimism. But I’m now 30-plus years into that optimism. I worked in AI in the early 2000s. You either believe or you don’t. It’s kind of an article of faith. I just don’t have the faith. And so what I want to say to people in summer school is, like, this is a big distraction. A new tool has shown up that lets us generate software and fill it with sample data so that we can have really smart conversations about software a lot faster than we used to. But you still need humans involved, because the actual reality of software is that humans make the meaning.
Rich: But this isn’t about making software.
Paul: It’s not. It’s theoretically about everything, but it’s more like a religion at that point.
Rich: Yeah.
Paul: I see AGI as an ideology, not as an endgame for technology.
Rich: Yeah, I—let me go through my own thought experiment here.
Paul: Okay.
Rich: Okay. I think as humans, let me get slightly philosophical for a second, so if you’ll allow me.
Paul: Get philosophical.
Rich: Okay. As humans, right, we seek affinity. We seek affinity in other people. We seek connection. Right?
Paul: Mmm hmm.
Rich: And every so often we meet someone and you walk away and you say to yourself, “I really, really like that person.”
Paul: Sure.
Rich: “I think they’re great. They speak sincerely and thoughtfully. They heard me, they actually asked me questions, unlike most people, who just ignore everything I say. And I connected to them.”
Paul: Mmm hmm.
Rich: Right?
Paul: It often shows up as like, “Boy, I’d like to see—that’d be cool to hang out with them again.”
Rich: It could be cool to hang out with them again. And then you meet someone else and you’re like, “Gosh, that person was rude. They were not listening to me. They were kind of all about themselves. In fact, I don’t think they paid attention at all. They seemed distracted the whole time.” Okay? That impression I have of those two different personalities is one that I internalize, and it very much shapes how I perceive those people.
Paul: Sure.
Rich: Now it turns out that that person that I thought was incredibly sincere and kind and empathetic turned out to be a sociopath. Like, it turns out that that person is deeply manipulative.
Paul: They were manipulating. You had something they wanted, and they were trying to butter you up.
Rich: Right. And it turns out that person that I thought was awful? Was having a real—they just lost their job.
Paul: Or had sciatica.
Rich: Right. I think when we look at these systems, right? They do string together words that represent emotion and feeling and sentiment, right?
Paul: It’s just us.
Rich: It scrambles us.
Paul: We’re just looking in a mirror, but it looks like another person.
Rich: Yeah. And we see that in objects, we see that in cute little dolls, in stuffed animals.
Paul: We anthropomorphize absolutely everything.
Rich: Everything, right? We see that in poems, we see that in a lot of things, and they become emotionally relevant to us. I think what’s unusual about this technology from an emotional perspective is that it will keep going, and it goes beyond parlor trick and starts to feel like a relationship to us—like, it really does. And I think for a lot of people—
Paul: Let’s be clear—
Rich: There are startups that are, like, you can go to therapy, you can have a friend. Like, you can be friends with it.
Paul: There’s girlfriends.
Rich: There’s girlfriends. And look, there isn’t…. There’s an industry for extremely hyper-realistic dolls for adults.
Paul: You don’t even have—you don’t, you truly don’t even have to go there.
Rich: I want to spend the next five minutes on that subject. [laughter] No.
Paul: Let me just, no, but let me, let me, metaphor-wise, you don’t even have to go there. Video game characters.
Rich: Video game characters.
Paul: There’s a video game meme, it’s, like, sort of, “Can you pet the dog?”
Rich: Is that right?
Paul: Yep. And people categorize which video games let you pet a dog in the video game.
Rich: Correct.
Paul: Think about the levels of abstraction in that.
Rich: Yeah.
Paul: But it’s meaningful to people that your virtual hand can reach out and pet an imaginary dog.
Rich: Yeah.
Paul: Because that represents a moment of connection with the game creators.
Rich: Yeah. So here is where I do think we have to watch out. Okay? First off, it’s good to say these things out loud, because this is a stuffed animal. You can love a stuffed animal, and you can’t fall asleep without it. You want to snuggle it. But really, a stuffed animal is a simulation of a being, an actual living thing, that makes us feel comfort, and so we can have that bond, so to speak. Right? And sort of seek it out.
Here’s the other component of it, which I think we do have to take seriously: We’re ready to—this is a piece of software. That’s what this stuff is. But we’re ready to wire it to everything.
Paul: Yeah.
Rich: And when you wire it to everything, and you couple that with what you talked about, the making paperclips and taking over the world, that, to me, isn’t distinguishable from, like, a runaway process. Like a bad thread in your code that just goes, right?
Paul: Well, you know what’s happened? Technology is an extension of the human, right? And people see that extension in different ways. You might see it as your little buddy. You might see it as a way to create. This is—everybody wants to be Steve Jobs in the Valley.
Rich: Yeah.
Paul: But the one thing he really nailed, and he was kind of a dick a lot of the time—
Rich: Yeah.
Paul: —was that if he made a tool that let people feel and be creative, he would win.
Rich: Yeah.
Paul: And it’s funny because, like, if you think about the narrative for the iPod, it was all people dancing. They were connecting to the music.
Rich: It was a cultural context.
Paul: That’s right. The iPhone would let you really, really participate in culture. And going all the way back: Like, one of the first things he did was get on a plane and give a Mac to Mick Jagger.
Rich: That’s a ridiculous photograph somewhere.
Paul: [laughing] I mean, there is.
Rich: I’m sure there is.
Paul: I think there is a photo. And, like, he understood that, like, computers were supposed to participate in culture in a servant capacity. They were supposed to help culture along and give people tools to amplify themselves. And I think that people get really into computing, and I get this, because it really magnifies who you are, and it feels very powerful, and it feels like an extension of yourself. I think they start to see that as a metaphor, and they assume that all of reality works that way.
Rich: Yeah.
Paul: And this is why they hate bureaucracy and they hate the government and they hate all, you know, just things that don’t work—
Rich: When you say “they,” who are you talking about?
Paul: The mostly dude nerds in the Valley who have spent 20 years kind of freaking out about future scenarios that don’t exist.
Rich: Yeah.
Paul: Like, all the effective altruism guys.
Rich: Yeah.
Paul: It blows up in their face a lot. And then everybody just moves on to the next thing.
Rich: Yeah.
Paul: Like, what is blockchain now? Sam Bankman-Fried is in jail. They were going to create star citizens and build a new country.
Rich: Yeah.
Paul: I do feel, reading that Empire of AI book and so on, that everybody is just kind of into themselves to a point. And here’s the thing that keeps coming to me: you keep talking about this stuff, you keep telling the world that nerds should run the world? Like, Sam Altman wanted to be governor of California and did focus groups.
Rich: Really?
Paul: Yeah. And so, like—
Rich: How’d that turn out?
Paul: Not great. [laughter] And if you keep telling people that you should run the world, and you’re Peter Thiel, and you gather all this power, things do switch around, especially in America. Like, right now, it’s kind of all lining up in a very specific way. But the concept I keep thinking of is a national swirlie. You know what a swirlie is?
Rich: No.
Paul: It’s when you put a nerd’s head in the toilet and you flush it. It’s like the classic archetypal nerd-humiliation move. [laughter] “I gave him a swirlie because he was such a nerd.” And I do feel that we’re headed towards a national or global swirlie if these guys don’t shut up.
Rich: Yeah.
Paul: Like, it’s just, it’s time to give me a break. I don’t—you created a cool technology based on something that showed up at Google 10 years ago.
Rich: Mmm.
Paul: You made something really nifty. It’s really cool.
Rich: Mmm hmm.
Paul: But why does it have to be a star baby? Can we just get a break?
Rich: Yeah. Yeah. Okay—
Paul: And meanwhile, your CEO, if you’re going—summer school, back to summer school.
Rich: Yeah.
Paul: You’re walking into the office, your CEO is going to go, “Hey, we can fire all those engineers, right? Because these things are smart and can code.”
Rich: Yeah.
Paul: And that’s the signal that’s going out in the world. And there’s this website, I just brought it up in front of me here. It’s called AI 2027. It’s kind of unreadable. A lot of the stuff is unreadable, but it’s this long, long breakdown of how by 2027, so, like, five minutes from now, AI will have taken over the world and will just kind of take over the entire economy. It’ll need all the compute in the world, and will kind of control everything, possibly.
Rich: That’s the premise of this…?
Paul: I mean, it’s got a lot of graph—it’s a neat website. And sort of sci-fi explication.
Rich: Yeah.
Paul: Like, it’s an okay thought experiment. The problem is it gets—this stuff gets taken so dead-seriously.
Rich: Yeah.
Paul: It’s because it’s a religion.
Rich: Yeah.
Paul: So anyway, I guess, you know, it’s funny, because I didn’t mean to be this annoyed when we sat down and did this podcast.
Rich: Yeah. I’m surprised.
Paul: Let’s say Steve Jobs: not the greatest guy in the world, but he built some really important products in the history of technology.
Rich: Sure.
Paul: Gave himself a lot of credit for things that he didn’t necessarily always come up with. But that’s pretty normal CEO behavior, if we’re frank about it. [laughter] He really wanted his stuff to belong to culture and for people to do things with it. I feel that the AGI story is, “We’re going to own culture. We need to be the ones to make the robots that will become super smart, because if we don’t do it, bad people will.”
Rich: Yeah.
Paul: “And we will take care of it for you,” and so on and so forth.
Rich: Yeah.
Paul: And it’s not democracy and it’s not community and it’s not culture.
Rich: Yeah.
Paul: It’s kind of just greed. It’s a concept that people rally around—
Rich: Also there’s kind of a nihilistic streak to all of it.
Paul: It’s super nihilistic. And then what you see in reality is you got Sam Altman on stage sort of rambling on about AGI with Microsoft, with Satya Nadella.
Rich: Yeah.
Paul: And Nadella is not buying it—he’s not saying anything about AGI. Microsoft has all this investment in OpenAI, and OpenAI is like, “We’re going to create AGI,” and Microsoft is like, “That’s nice.”
Rich: Yeah.
Paul: “But we’re going to need this chatbot.”
Rich: We need to bolt it onto Excel.
Paul: Yeah. Literally. Literally!
Rich: Yeah, yeah, yeah.
Paul: Right? And so what you have is this narrative that is used to scare people and get them to get their pocketbooks out and get them to believe that things are going in a certain way.
Rich: Yeah.
Paul: And meanwhile, if you look at the giant companies, your Apple, your Microsoft, whatever, who are in the position to be the best informed?
Rich: Yeah.
Paul: They are using this stuff as a utility, as a layer of information, and they’re just not worried about it becoming super intelligent. Frankly, who has more to lose from this than the giant tech companies? Because this could eat their lunch and they’re like, “Nah, we’ll just ride this wave.”
Rich: Yeah, let me respond to—that was strong feelings.
Paul: [sighs] I’m just tired, man.
Rich: I get it, I understand it. And we’ve seen it in other shades before.
Paul: Look, I’m a huge nerd, and I gotta tell you, just nerds will wear you out. They’ll just wear you out sometimes.
Rich: Well, these are special nerds.
Paul: Oh, boy are they.
Rich: Look, the Valley is a culture of disruption.
Paul: Yes.
Rich: And when you’re disruptive and you like to upend the status quo, like they’ve been trying to upend banking.
Paul: They can’t stand that Wall Street exists. They keep starting their own stock exchanges.
Rich: Yeah.
Paul: Then you never hear about it again.
Rich: Yeah, and then there are videos out there of, like, people going, “Okay, walk me through how blockchain is going to help me close on a mortgage.” And it gets real ugly real fast, because people like paperwork. [laughter] They like to know that it’s legit.
Paul: How many blockchain technologies are you using today?
Rich: Exactly. So what I’m getting at here is that I think, first off, there’s a culture of disruption, but there’s also, I mean, it is an immense concentration of success, both financial and status. Right? I’ve driven through Sand Hill Road.
Paul: Yeah.
Rich: It looks, like, just like an upper-middle-class suburban area, but the concentration of wealth and power there is ridiculous, right?
Paul: Yeah.
Rich: It’s boring as hell.
Paul: They use dollar bills as fuel. It’s just cheaper.
Rich: Yeah. And so I think what happens is this, and I want to bring it back to the title of this podcast: When you find that level of success, you’re pretty convinced that the intelligence you used to find that success can be applied generally.
Paul: Yes.
Rich: That you can have thoughts on foreign policy, that you can have thoughts on how government should work, how people should interact, how people should procreate. Like, it gets real weird real fast. And the irony, or maybe it’s not even ironic, that we’re talking about artificial general intelligence, which is the sort of generalization of, “I’m smart at this, therefore I’m smart at everything.”
Paul: Yes.
Rich: Is kind of hilarious. I never understood the term. Like, artificial general—take out the artificial. That’s the technology. If you talk about general intelligence, it’s such a bizarre thing to say. The idea of saying, “Gosh, that individual is generally intelligent.”
Paul: I mean, you know, the OpenAI promise, kind of, is it’ll be the smartest at everything.
Rich: Yes.
Paul: It’ll be the best physicist, the best therapist, the best cobbler.
Rich: Yes. Yes. And there are people who are thinking about, like, their consciousness, and uploading every communication they’ve ever made so they can live forever. Like, it’s gone bananas. And I’d like to—
Paul: Hold on. That strain of thought has been going on for a while. I think, for a lot of people, they’re only now getting access to that strain of thought.
Rich: Ted Williams. [laughter] Do you know what I’m talking about?
Paul: Yeah, I do know exactly. Tell the people what you’re talking about.
Rich: He froze himself.
Paul: Who was Ted Williams?
Rich: One of the grumpiest baseball players in history. [laughter] Just very angry.
Paul: My grandfather loved him, so I actually spent some time learning about him. Like, just a grumpy guy.
Rich: Just a miserable dude.
Paul: But he wanted to live forever.
Rich: He was a world-class hitter. Great, great ballplayer.
Paul: Not a big hugger.
Rich: Not a big hugger. [laughter] Not a big hugger. And then, eventually, obviously got old, passed away, and then is frozen. And wherever he’s stored, probably in Arizona somewhere, he’s complaining. He’s still complaining.
Paul: Yeah, that’s right. That’s right.
Rich: That was an aside here, but finish your thought.
Paul: There are sets of ideas here that actually go back decades and decades and decades. And there’s the Extropians and people who believe in the singularity. I’m throwing out buzzwords that all have their own Wikipedia page.
Rich: It’s sci-fi. It’s got a bit of sci-fi.
Paul: It really does. And in fact the early cohort that was really engaged with this stuff were sci-fi authors and early cyberpunk types. And it was cool and weird and it would show up in parts of WIRED or Omni Magazine, which we’ve talked about before.
Rich: Yeah.
Paul: It was a way to speculate and think about the future. But what’s happened is that the money aligned with that speculation, and the speculation went from like, “Boy, it might be strange if computers become conscious,” to, “If we convince people that computers are about to become conscious, we will get more out of the UAE Sovereign Wealth Fund.”
Rich: [laughing] I want to end this with a cynical theory about all of this success and wealth.
Paul: Yeah.
Rich: I think when you reach, like, an astronomical level of success.
Paul: Mmm hmm.
Rich: The equation around whether, like, after death it’s better or worse? It’s definitely worse. Like, you’re eating really well, the weather’s real nice, you’ve got a lot of homes.
Paul: That’s right, it’s California.
Rich: [laughing] You’re doing real good.
Paul: Yeah.
Rich: And so what you, what you—
Paul: But you’re also, here’s the other thing—
Rich: And you got a lot of free time?
Paul: But you’re bored.
Rich: That’s the thing! And so you got to pivot to, “How do I figure this out and keep on, keep on trucking here?” Right?
Paul: It is real. Like, there’s recreational architecture. There is gaining political office.
Rich: Yeah, yeah, yeah.
Paul: There is—like, occasionally they lock in, and, you know, what happens is the actual money guys—
Rich: You know how long it takes to build, like, a 300-meter yacht?
Paul: A long—
Rich: It takes, like, five years.
Paul: Years, right? Yeah.
Rich: And let me tell you, those project update calls where they share the screen, it’s like the most important thing in their life.
Paul: You know what the really—
Rich: Most go further. And they’re like, “Okay, that’s fine, but I’m still gonna die on that boat.”
Paul: It’s not about the boat. It’s always about the furnishings. Right?
Rich: Yeah.
Paul: The boat is merely a shell. And now you have to solve—
Rich: Materials.
Paul: I have a phrase. When you have a hobby, you end up buying a lot of stuff for your stuff.
Rich: Of course.
Paul: If you buy a synth—
Rich: A lot of cases.
Paul: You need wires for your synth. [laughter] If you buy headphones, you need a case for your headphones. So you don’t hurt—
Rich: Yeah, but immortality?
Paul: That’s the ultimate, kind of, like, now. Now—
Rich: The ultimate project.
Paul: It is, right? No, it is. Because it’s—Bezos and Musk are perfect this way. It’s very adolescent.
Rich: Yeah.
Paul: It’s sort of like, “You know what I like? Rockets.”
Rich: Yeah.
Paul: “And boobs.”
Rich: [laughing] Yeah.
Paul: “And lots of computers.” You know what happened, though? You know what happens for real, though? You grow up. You’re a computer person. So am I.
Rich: I am a computer person.
Paul: Everything that we love and found exciting, let’s say 20, 30, 40 years ago, is so cheap now.
Rich: Yeah.
Paul: Like, the servers you set up in your 20s?
Rich: Yeah.
Paul: Are literally $1.50 now. [laughter] So all of your fantasies of what you could do if you really had resources?
Rich: Yup.
Paul: Are completely irrelevant.
Rich: Oh, totally.
Paul: I can’t use enough compute. And actually, what is fascinating about this technology, on the way to AGI, is that it needs an enormous amount of computational resources. It really pushes the limits—
Rich: Yeah.
Paul: —of what computers can do.
Rich: Yeah.
Paul: That does remain exciting and interesting, but when they extrapolate to immortality and to endless recursive bot swarms taking over Jupiter, I’m just….
Rich: They ran the numbers, and the likelihood of eternal life being better than Northern California gets lower and lower as you get more and more successful.
Paul: It is true.
Rich: “We gotta focus on this project. It’s a hard one. But you know what? Storage space is cheap and the Nvidia chips are good.”
Paul: It’s not only that. Have you ever had a citrus salad in heaven? It’s not that great. [laughter]
Rich: I have a suggestion.
Paul: What’s your suggestion?
Rich: We should get our friend Jason Goldman, who is living inside of that world, to come on and let’s have this discussion. First off, this was not the fourth episode of Sunday School. You took it completely off the rails—
Paul: I don’t know, but we got to teach people—
Rich: We didn’t know where to go with it.
Paul: This stuff, this stuff, the actual, the actual—Sunday School. The actual—
Rich: Did I say Sunday School?
Paul: Don’t worry about it. [laughter] Don’t worry about it. We’ll let a bot clean it up. Um, the actual school lesson—no, I think this is important because you know what? You know what this is? Remember at the end of the semester, you’ve sort of done the work. You’re kind of like—
Rich: Yeah, let’s take the board games out.
Paul: And then at the last class, the professor sits down and he’s like, “You know, I’ve taught you a lot about the judicial system in America. I’m going to tell you something else now. You all did good. Good job. It’s all bullshit. That’s what it is.” [laughter] That happens so much. The professors love, at that last class, to just, like…
Rich: Oh yeah, the hard—
Paul: “You know what? I’m going to tell you. There’s something, okay, now that we’ve gotten here…” AI generates really interesting nonsense that you can funnel and put into guardrails and get unbelievable impacts out of.
Rich: Sure.
Paul: It’s really cool. If you spend a lot of time worrying about how it’s going to get you—10 years ago, I wrote this, and it’s still true. 10 years, long time. Maybe 30 years from now, it will be different. I don’t know. But, like, I refuse to worry about it. I refuse to worry about smart star babies coming out of my laptop, because I don’t see it. There’s no path.
Rich: Yeah, yeah.
Paul: There is an idea, but it’s not a path.
Rich: We had the same view about blockchain. It was a similar kind of defiant position that we took, only because we couldn’t see it. Not because we didn’t want it to be—
Paul: It was the world’s slowest database. This is the world’s sloppiest database. Sloppy has value.
Rich: Yeah.
Paul: Slow is bad.
Rich: Let me end this with a warning.
Paul: Okay.
Rich: And an apology.
Paul: Okay.
Rich: AGI—AGI being five years from now, when you index this podcast, just understand it was all meant in good humor.
Paul: Well, no, there’s a whole—
Rich: And we believe in you.
Paul: Those goofballs have a whole thought experiment about this called Roko’s Basilisk, where the AGI will resurrect you and punish you infinitely for not believing in it enough.
Rich: I believe in you. [whispering] Say it. Say you believe in it.
Paul: [under duress] I believe in you, AGI.
Rich: Thank you. Summer School is over, Paul.
Paul: Yeah…
Rich: Get out there and have some fun. Go eat—let that soft-serve ice cream cone drip on your hand, Paul.
Paul: And you know what? Go try Claude Code, play around. Go build something. You can build an app with words. Do it.
Rich: Let’s make sure people don’t misunderstand us. This stuff is unbelievable.
Paul: Yeah, it’s really cool.
Rich: It is the most impressive thing we’ve seen in a very long time.
Paul: We’ve spent a lot of money and time making a cool AI tool. Go to aboard.com and use the thing.
Rich: For Christ’s sake, go put your arm around your friends.
Paul: Yeah. Even if they’re, even if they happen to be made of flesh.
Rich: Even if it’s a Dell OptiPlex.
Paul: Sad—
Rich: 6000.
Paul: —sad bags of decaying cells who will never know the stars.
Rich: No, Paul. Wrong note to end it on.
Paul: But that’s how we see humans.
Rich: Until the big guy shows up, check out aboard.com.
Paul: If you need us, hello@aboard.com.
Rich: We’re doing some really, really interesting things. We had some good meetings today. We help you ship software that helps you do stuff way faster.
Paul: This was the last class, where you just order pizza and say, “I don’t care anymore, guys.”
Rich: Have a lovely rest of your summer, Paul.
Paul: Enjoy. Go away for a day or two.
Rich: Go away and enjoy it. Let the sun hit your face. Have a lovely week everyone.
Paul: Bye everybody.
Rich: Bye.
[outro music]