AI Summer School #1: How LLMs (Don’t) Think
Feel a little lost when it comes to AI? It’s time to go back to school—Aboard’s AI Summer School! Over the month of August, Paul, Rich, and a few special guests will break down the basics of LLMs, agents, AGI, and a host of other AI-related topics. In the first installment, they discuss how LLMs think—or rather, don’t think—and compare the major players on the AI scene right now.
Show Notes
- Rich calls Paul from Lebanon: “The View from the Lebanese Tech Scene.”
- Treat yourself to this compilation of Sesame Street’s pinball counting segments.
- On last week’s episode, a discussion about how AI doesn’t “think” structurally.
Transcript
Paul Ford: Hi, I’m Paul Ford.
Rich Ziade: And I’m Rich Ziade.
Paul: And this is The Aboard Podcast, brought to you by aboard.com. Check it out. We’ll talk about that more in a minute.
Rich: Yes.
Paul: It is a podcast about how AI is changing the world of software. How the world of software is changing the world of AI. All those things. All of it. But you know what, Rich? It’s summer. You have any summer plans left? You traveled a little bit.
Rich: Traveled a little bit. We just did an episode on Lebanon, which you should go check out.
Paul: I just went to Ireland.
Rich: You went to Ireland, and there’s another month left. We might sneak somewhere. I don’t know. I don’t know.
Paul: But it’s August, and so we thought we’d record a few episodes, which we’re going to call AI Summer School.
Rich: Cool.
Paul: Yeah! Summer school is always where the best kids went to learn the most.
Rich: No.
Paul: [laughing] Not at all.
Rich: I went to summer school.
Paul: Did you really?
Rich: I played hooky a lot in high school.
Paul: Mmm. I was close, actually.
Rich: I got, like, a 40 in tennis.
Paul: Did you really?
Rich: I was like, “This is dumb. I don’t want to hit a ball with a racket.” And they’re like, “Well, you failed. You can’t graduate, so you got to summer school.” So I went to summer school.
Paul: You know, I got a C in pottery in college.
Rich: That makes sense.
Paul: Yeah…
Rich: Makes a lot of sense.
Paul: I just couldn’t get the pots right.
Rich: Yeah, no, you went to class. I was just cutting class.
Paul: No, I was a disaster in high school, to be frank.
Rich: Okay, cool.
Paul: Anyway, that’s not why we’re here today. [laughing] We’re….
Rich: We’re going to teach you the basics of AI.
Paul: The very, very basics.
Rich: Which no one bothers to do anymore. Everybody’s like, “Just use the thing!” Let’s learn how this stuff works.
Paul: We’re going to go to the very, very basics, and then we’ll talk about how agents work, and we’re going to talk about how apps are built, and then we’re going to talk about what AGI is, over the month of August. So just like, when you—
Rich: Four classes.
Paul: This is just a nice, nice little refresher. Going to keep these short. So we better get started.
Rich: Let’s go.
Paul: All right, let’s go.
[intro music]
Paul: So you ever watch Sesame Street as a kid?
Rich: Dude, I learned to speak English on Sesame Street. I came to America when I was five, and—
Paul: You had just Arabic? A little French, maybe?
Rich: Arabic and French.
Paul: Yeah.
Rich: Forgot the French.
Paul: [laughing] That’s too bad.
Rich: It’s too bad.
Paul: You still have a lot of the Arabic.
Rich: Didn’t learn from my parents, because they didn’t speak English.
Paul: Yeah.
Rich: So they just put me on the TV, and I’d watch Sesame Street and Electric Company on public television.
Paul: [singing the Sesame Street pinball song] 1, 2, 3, 4, 5. It’s so good.
Rich: All of it. And I learned English in a year. Like, it took less than a year. So yes—
Paul: Big Bird—
Rich: Long-winded answer.
Paul: And do you remember the Count?
Rich: Yes. He counted.
Paul: That was really good. Really good.
Rich: Yes.
Paul: Had an accent and—Count’s a great character, okay? I’m going to take the Count’s method and I’m going to teach you the difference between classic computing and AI real quickly.
Rich: Okay.
Paul: Okay? Now the first part of this, I’m going to ask you to pretend to be an old-fashioned computer, like a Commodore 64.
Rich: Okay.
Paul: Okay? And let’s say I wanted you to use a Commodore 64 to count from one to ten. How would you do that?
Rich: n=1, n=n + 1, and then you put it in a loop, and when it gets to 10, you break out of it.
Paul: You write a little tiny program.
Rich: Tiny little program. Five lines.
Paul: Five lines. And it would say, “Okay, I got 1—”
Rich: Unless it was Clojure. Then I’d write 30, 40 lines.
Paul: They don’t have that on the Commodore 64. [laughter] No, but it’s sort of—for those who don’t know, Clojure is very, like, high-end, very thinky programming language. But that’s it, right? Like, a little bit of one number in memory, and then you add one to it and you spit that out and you do that a couple times—
Rich: Print it. Yeah, it was a way to like—
Paul: Calculators work that way.
Rich: Yeah.
Paul: Like, this is just how, when we think about computers and what computers do, we think about that. They’re sort of glorified calculators.
Rich: We hand them specific instructions.
Paul: And then they give us back sequences and lists.
Rich: Outputs of some sort.
Paul: Data structures.
Rich: Yeah.
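For anyone following along at home, the five-line program Rich describes looks something like this in Python (a sketch standing in for the Commodore 64’s BASIC):

```python
# Classic computing: explicit, step-by-step instructions.
numbers = []
n = 1
while True:
    numbers.append(n)
    if n == 10:
        break      # break out of the loop when we reach 10
    n = n + 1
print(numbers)     # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```

Same instructions in, same output out, every time: that’s the “glorified calculator” model.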
Paul: Now I’m going to be an LLM. I’ll be the LLM.
Rich: Okay.
Paul: And I want you to ask me the same question.
Rich: Count from 1 to 10. Mr. LLM.
Paul: First of all, great question. Thank you, really appreciate it. Second of all—and now I’m going to think for a minute.
Rich: Okay.
Paul: Okay, he said “count 1 to 10.” I broke those up. Now to…1 and 10. 1 and 10 are numbers. Count implies a sequence. What usually happens when somebody asks for a sequence with numbers in it? I usually kind of go—maybe 2 would be a good number after 1. I’ll start with 1. I’ll go to 2. What happens after 2? 3, 4, 5. Okay, I got them all at once. 3, 4, 5. And I got a 9 over here. Maybe I’ll put that 9 in there, and the 7. I’m missing something. I’m missing something. Let me go to this other layer. Okay. Okay. And then mmm, aaaah…11? Nope, nope, nope. No, I don’t know why, but nope. And……1, 2, 3, 4, 5, 6, 7, 8, 9, 10.
Rich: Okay. Very different.
Paul: Yes.
Rich: What’s happening there?
Paul: Well, the way LLMs work, they’re basically databases. They’re big storage facilities. Okay? And then you—
Rich: Of what?
Paul: What they do is they take all sorts of stuff. They search the whole web, all kinds of data, every book, every whatever they can find.
Rich: They digest it all.
Paul: They break it up into little tiny pieces, and they look at where things are related to other things. So when—
Rich: It’s vast, what you’re saying.
Paul: Vast—
Rich: Billions and billions of pieces.
Paul: But they squeeze it all together in this very compressed way, so that you can search it quickly.
Rich: Okay.
Paul: So when it, when you ask it to count from 1 to 10, it doesn’t count from 1 to 10, because it can’t think. What happens is, statistically, it’s pretty likely that when somebody says count from 1 to 10, that the next most likely output would be counting from 1 to 10. It would be the numbers from 1 to 10, right?
Rich: Yeah.
Paul: Those are the tokens that make sense—
Rich: Mmm hmm.
Paul: —when somebody asks for that counting to happen.
Rich: Mmm hmm.
Paul: Okay, so now, that’s a weird question for an LLM, but you might say, “Hey, help me with this resume.” You might say, “Write me a story about a dog.” And what it does is it says, [robotic repetition] “Write. Me. A. Story. About. A. Dog.” And it breaks those up—
Rich: Mmm hmm.
Paul: —and it says, “When people ask this, when I usually get these, what’s the thing that usually follows?”
Rich: Yeah.
Paul: Well, it’s a story about a dog, and it’s not quite that simple. It doesn’t—but essentially, it’s like, imagine the world was a jigsaw puzzle.
Rich: Mmm hmm.
Paul: And you took all the pieces and you kind of, like, kept track of where they were in relation to each other.
Rich: Mmm hmm.
Paul: And you put that into a giant bucket, like, jigsaw puzzle pieces connected with little strings.
Rich: Yeah.
Paul: And then you pull them out and they’re still, they’re kind of out of order. And then you sort of, like, start to generate as much order as you can.
Rich: Mmm hmm.
Paul: And a lot of jigsaw puzzle pieces have gone missing, but hopefully we’ll be able to fill in the blanks. And maybe when I ask you for a picture of a whale, it’ll make you a whale, but it might put pants on it.
Rich: Yeah.
Paul: Okay?
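A toy version of the statistics Paul is acting out, assuming a tiny ten-word corpus (real LLMs work on subword tokens over vast datasets, with far richer machinery than simple next-word counts):

```python
from collections import Counter, defaultdict

# Toy corpus standing in for "every book, every whatever they can find."
corpus = "one two three four five six seven eight nine ten".split()

# Count which word follows which: a crude stand-in for an LLM's
# statistical sense of "what usually comes next."
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

# Generate by repeatedly picking the most likely next token.
out = ["one"]
while out[-1] in follows:
    out.append(follows[out[-1]].most_common(1)[0][0])
print(" ".join(out))  # one two three four five six seven eight nine ten
```

It never “counts” anything: it just emits the sequence that most often followed the previous token in what it absorbed.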
Rich: So just want to call out a couple of things. What you’re essentially saying is that all AI is doing, if we focus on words for a second, is assembling words together. It’s stringing words along out of a giant, massive bucket of options.
Paul: Well, that’s LLMs, which is what everybody talks about. LLM stands for Large Language Model.
Rich: Yeah.
Paul: There are other kinds of ways to do this process.
Rich: It’s sophisticated recall. When you are mimicking what an LLM is doing, you are essentially rummaging through the bin. It’s a recall machine, which highlights something I think everyone should internalize: it’s a recall machine that is not doing any sort of thinking. People have really treated these things, this incredible breakthrough, as kind of human, because it sounds human—it speaks human. It compliments you. It’s very flattering, sometimes. If you call it out on a mistake, it apologizes. So it feels very human. But what it’s really doing is just stringing stuff together.
Paul: You know what’s interesting, and I’ve never thought about it this way before, but you just jogged this in my brain. When it types back to you, you chat and it types or talks, it does it in a sequence, right? That sequence is not actually quite as organic as it seems, because it’s going through all these different layers of perception and kind of sort of assembling things for you.
Rich: Yeah.
Paul: But it comes out in a linear way, much like you and I are talking and taking turns talking.
Rich: Yeah.
Paul: So it seems very human. But have you ever seen anybody draw a picture? Like, just like a video of somebody, like, drawing an image, right?
Rich: Yeah.
Paul: Compare that to the way that the images get produced. When you ask it to draw a picture, it goes from that, like weird, blurry…
Rich: Yeah, it sort of sands it down.
Paul: That has nothing to do with how humans actually draw.
Rich: Correct.
Paul: So it’s, you realize how artificial it is while it’s doing it.
Rich: Correct.
Paul: And it doesn’t feel like it’s maybe as smart.
Rich: Yeah.
Paul: It feels like it’s a little more just a computer program.
Rich: Yeah. It’s not top-down.
Paul: That’s right.
Rich: It’s left to right. And that, we’ve talked about this in countless episodes when we were talking about code.
Paul: Yeah.
Rich: That it can’t do anything, really, of any substance or real depth because it’s not thinking top-down.
Paul: That’s right.
Rich: Which is architectural or structural. Instead it’s just kind of going, right? And they’re trying to make it pause and step out of it, they’re trying to simulate sort of stepping back and thinking big picture.
Paul: And that’s actually worth talking about for a sec. Reasoning is this big concept, right?
Rich: Yeah.
Paul: As far as I can tell, and this is where like there’s a lot of stuff going on, so everything I say is going to be a little too boiled down.
Rich: Yeah.
Paul: But what reasoning really does is it kind of creates a larger space. So it actually adds ambiguity to the process. So it’s like, “Hey, wait a minute, they asked me to identify really good places to visit in Ireland. So…what do I know about Ireland?”
Rich: Yeah.
Paul: It sort of like starts to ask itself these very broad questions.
Rich: Yeah.
Paul: And then it bullets them out. And what it’s doing, just as, like, everybody, I think, listening to this knows what a prompt is.
Rich: Yeah.
Paul: It’s kind of expanding and adding a little bit of softness to the edges of the prompt.
Rich: Yeah. I don’t fully understand reasoning. I’d like to study up on it a bit.
Paul: No, it’s really kind of that, like, it’s just adding more text to itself.
Rich: My read is it’s just doubling back over on its output.
Paul: That’s what it’s doing.
Rich: Yeah.
Paul: But it often adds ambiguity. If you look at reasoning steps?
Rich: Yeah.
Paul: It’ll be like, “You know, maybe I don’t know.” [laughter] And it’ll just kind of, like, start to—
Rich: Searching, yeah.
Paul: Yes, because what that does is it expands the kind of space that it’s searching and it makes it look through more stuff.
Rich: Yeah—so what you’re saying is it’s a fancy database.
Paul: It’s a weird database. Very complicated, very—
Rich: It’s very sophisticated.
Paul: Deep math going on.
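A hypothetical sketch of what Rich calls “doubling back over its output”: the model appends broad questions of its own to the prompt, widening the space it then searches. All of the wording below is invented for illustration:

```python
# Hypothetical: "reasoning" as the model adding more text to itself
# before producing a final answer.
prompt = "Identify really good places to visit in Ireland."

# Broad self-questions that expand (and soften) the original prompt.
self_questions = [
    "What do I know about Ireland?",
    "What do travelers usually ask about?",
    "Maybe I don't know; what should I check?",
]

# The expanded prompt is just the original plus the model's own text.
expanded = prompt + "\n" + "\n".join("- " + q for q in self_questions)
print(expanded)
```

The “thinking” is more text: a bigger, fuzzier prompt that pulls in more of the database.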
Rich: This wasn’t a dumb mistake that we’re here now.
Paul: No, no. In fact, a lot of these technologies have been going on for 70 plus years.
Rich: Yeah.
Paul: It’s just—
Rich: There’s a handful of big breakthroughs a few years ago.
Paul: That’s right. Google sort of, like, started to figure stuff out. And then you also had this thing, which is why Nvidia is one of the biggest companies in the world, if not the biggest today, which is it turns out that this problem really aligns well to breaking things up into zillions of little tiny programs—
Rich: Yeah.
Paul: —that run and kind of report back. And that’s how you build these databases and process all this stuff.
Rich: Yeah.
Paul: And that turns out to align incredibly well to 3D chips.
Rich: GPUs.
Paul: Because what was the point of making 3D chips do zillions of calculations per second, as opposed to a typical CPU, which is really fast but is only doing relatively few things at once?
Rich: Yeah.
Paul: I mean, actually answer it. Like, what’s the point of a 3D chip?
Rich: I mean, it’s processing sophisticated, realistic, photorealistic imagery.
Paul: It’s making all those polygons, right? Like, it’s…
Rich: At once.
Paul: Yeah.
Rich: I think that’s the thing. It’s not as narrowly focused, but rather, it’s running a billion processes at once to do it.
Paul: And that’s how the PlayStation, it’s like…
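The contrast Paul and Rich are circling can be sketched in plain Python; this is the programming model, not actual GPU code:

```python
# CPU style: one fast core walks the elements in order.
a = [1.0] * 8
b = [2.0] * 8

out_serial = []
for i in range(len(a)):
    out_serial.append(a[i] + b[i])

# GPU style: write one tiny kernel, conceptually launched once per
# element, with all launches running at the same time.
def kernel(i):
    return a[i] + b[i]

out_parallel = [kernel(i) for i in range(len(a))]  # parallel in spirit
print(out_parallel == out_serial)  # True
```

Shading polygons, adding numbers, and crunching the math inside an LLM all fit this “same tiny program, zillions of elements” shape, which is why GPUs ended up at the center of AI.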
Rich: I mean, Nvidia is a brilliant company. It was actually taking off pre-AI and pre-crypto.
Paul: Yeah.
Rich: Like, it jumped on crypto and then it jumped on AI. But boy, that’s arguably one of the luckiest companies in the world. No one said, “Can I make an Nvidia chip talk back to me like a human?”
Paul: No.
Rich: No one said that, right? Like, it’s credit to them for continuing to pivot to where things are. But it’s funny.
Paul: Call of Duty was supposed to just be awesome.
Rich: It is kind of awesome.
Paul: It is, right? Just like…
Rich: My son wants to play it.
Paul: Mmm. Not yet.
Rich: And I keep saying…
Paul: Not yet.
Rich: There’s a World War II one, but it’s just as bloody. Turns out blood hasn’t changed much.
Paul: No, it really hasn’t. You go back even further, I think.
Rich: [laughing] Yeah.
Paul: No, so, like, Nvidia, just incredibly lucky, but also just this fundamental technology that has kind of been around for a while.
Rich: Yeah.
Paul: And I remember talking to somebody 20 years ago and they were doing some physics stuff and they were trying to figure out how to be smarter and smarter about 3D chips. Like, everybody’s known that there’s been this wacky power out there.
Rich: Yeah.
Paul: One thing that a lot of non-nerds might not know is that it’s hard to program these things. They don’t work like—
Rich: Yeah.
Paul: So that first, that Commodore 64 model where we wrote the little BASIC program and it ran? Doing that so that it runs hundreds of thousands of programs at once, and they all report back—
Rich: Sophisticated stuff.
Paul: Things like memory management and storing data and so on, they just don’t work the same way.
Rich: I mean, with games, a lot of programmers, to go into game programming, have to have a prerequisite of deep math. Otherwise it’s hard.
Paul: Yeah.
Rich: I mean, you could do like, you know, you could design games, but to do the heavy-duty stuff, math was always deep in the mix in all of this. Right? And here we are again.
Paul: Yes.
Rich: AI—very, very computationally complex. It’s just life. I mean, these things are doing a lot under the hood.
Paul: Let me, let me ask you a question. I’m looking at a company like OpenAI.
Rich: Mmm hmm.
Paul: Big company.
Rich: Sure.
Paul: Worth a lot of money now. It’s got ChatGPT.
Rich: Yeah.
Paul: And really, it does stuff. People use it all day. I’m not going to explain what it does. We can maybe use it a little bit later and we’ll talk about that.
Rich: OK.
Paul: But like, OpenAI and all these other companies are hiring these AI people, and they’re paying them millions of dollars.
Rich: Yeah.
Paul: Why?
Rich: I have a couple of theories about it. One is, I think we are still riding the initial innovation of a few years ago, and so much money rushed to this breakthrough that it’s starting to normalize. I mean, sometimes I open Perplexity or Claude or OpenAI and I kind of get the same results. And so there’s this frenzied rush right now to, like, have the next breakthrough.
Paul: And they’re poaching each other’s people.
Rich: They’re poaching each other’s people. It is above my pay grade to really conclude that there is another breakthrough. Like, I’m skeptical there is. I think we’re going to be riding and refining this one for a long time, no matter how many PhDs you throw at it. I think—
Paul: Well, let me throw, let me give you a statement that’s out in the news and then you give me some feedback, which is OpenAI just announced that a special research group has been able to pass a really, really complicated math competition.
Rich: Yeah.
Paul: Like, they’ve just, I think it was, like, four out of five of these unbelievably hard problems.
Rich: Yeah.
Paul: Using generalized reasoning, not specific stuff. You won’t get that in the next version. They had actual mathematicians who’d won the contest before evaluate the answers, and it did a really good job.
Rich: Okay.
Paul: So, like, tell me, when I say that, like, what do you think?
Rich: I think that…
Paul: Because you just told me that there’s no, like, it’s not going to make a big jump.
Rich: That doesn’t shock me. I’ll tell you why. I have to assume that some sort of prerequisite, like, deep mathematical knowledge was fed into the LLM to take that test.
Paul: I think that the idea is that the larger ChatGPT-5 corpus that it’s ingested, it’s just got a lot of math in it and they can kind of…
Rich: So it’s, it’s strike—I mean, it’s impressive, but it does strike me as kind of more of the same. I think if—
Paul: So, actually, to put a point on that, the model that we talked about earlier when I was pretending to be an LLM?
Rich: Yeah.
Paul: It’s still doing that.
Rich: Yeah!
Paul: Like, it’s not thinking like a person.
Rich: Yeah. I mean, look, Silicon Valley is always looking for the next god to worship. Right? And the truth is, good God. We just met this god. Like, we’re just getting to know him. I see these as iterative improvements. Like, it’s getting faster. It’s getting a little bit better. They’re heading off a lot of the weirdness. So people are pointing to hallucinations, yes. But I think we’re still going to iterate on this for a while.
Paul: It is a little weird because we’re in this age of things that 10 years ago just would have looked like miracles. Now we’re starting to take them for granted.
Rich: Yeah.
Paul: And so, like, we have a product. Okay, this is not the marketing part of the podcast, but our product will do a thing. It will assemble actual software for you in about five minutes.
Rich: Yeah.
Paul: That looks a lot like normal software. It still needs humans involved.
Rich: Yeah. Yeah.
Paul: But like, I’ll give it a good prompt. I’m doing, like, one a day on LinkedIn.
Rich: Yeah.
Paul: And it will build me something that used to take four or five months in about five minutes.
Rich: Yeah.
Paul: Might be the wrong thing. [laughing] But it’ll build it. And everyone is already taking it for granted. So, Rich, what’s OpenAI? What do you think, when we—let’s compare these companies just a little bit. What’s OpenAI?
Rich: General-purpose prompting, I think, is the way people perceive it.
Paul: It’s a big one.
Rich: ChatGPT. Most people probably know ChatGPT more than they know OpenAI.
Paul: It does images.
Rich: It’s a consumer tool. It’s an all-purpose—it’s like an all-purpose Windex cleaner. All surfaces.
Paul: It kind of wants to be the AI operating system.
Rich: Yeah.
Paul: It’s going to kind of run—okay, contrast that, and just sort of, I’m actually just asking for you to riff on it a little bit. I’ll give some input, too. But so ChatGPT, that is Samuel Altman, that is, like, that’s the original.
Rich: It has an Apple vibe, which is like, “We’re going to insinuate ourselves into your kitchen.”
Paul: Okay. Now, Anthropic, founded by ex-OpenAI people, ex-ChatGPT types.
Rich: Yep.
Paul: What do you think of when you think of Anthropic?
Rich: Less sort of consumer Q&A, plan my vacation—though it’ll do it.
Paul: Mmm hmm.
Rich: I think they’ve all gobbled up the same stuff, they all have the same appetite for knowledge, right? But I think Anthropic has sort of taken the ground around technical output, code. Claude—which is the Anthropic version of ChatGPT, essentially their product—is really preferred by a lot of engineers. Research, deeper research-type stuff. They’re all gunning for each other, by the way, to be very clear.
Paul: Anthropic tends to be behind ChatGPT on the technology front.
Rich: Yeah.
Paul: But ahead of ChatGPT on the usability front.
Rich: Yeah, they’ve productized more, which means you can create more context, you can sort of bucket them, they call them projects.
Paul: Artifacts.
Rich: You can decorate the prompt, so it’s like, “You are an insurance adjuster.” So anytime I use that prompt, it already has that.
Paul: What’s confusing is that they all borrow from each other. So ChatGPT has this now too.
Rich: They all do this. They all do this. I honestly can’t distinguish them anymore. I’m more and more convinced that the winner is going to have to think about user interface, and how that prompt is, like, one shortcut away from wherever I am.
Paul: Mmm hmm.
Rich: Like, whether I rub my phone the wrong way or the right way or whatever you want, getting to the prompt is going to win. And that’s why I think players like Google and Apple, they are eventually going to—installing an app is one of the most expensive things in the history of technology. If you can get a product in front of people—
Paul: It’s the hardest thing in the world.
Rich: I watched the new Samsung Fold—it’s very impressive, it’s a foldable phone and all that. But the thing that struck me was that there’s a button just for Gemini. Good luck getting a grandma to install an app, versus one button that means you get AI.
Paul: Yeah, Apple’s sort of gonna bake it into Siri.
Rich: It’s already baked in. Perplexity.
Paul: What is Perplexity?
Rich: I don’t know a lot about Perplexity. Perplexity is sort of—
Paul: I’ll explain.
Rich: It’s very web-centric to me. You could kind of shop on it.
Paul: I think they’ve got their own browser going.
Rich: They’ve got their own browser called Comet that just came out, yeah.
Paul: And so the—Perplexity is way more kind of, like, I would almost say more document-oriented. Like, it sort of, it helps you with research, it searches the web for you. That was sort of their big differentiator.
Rich: Yeah.
Paul: So the thing is they’re all starting to blur and converge. It’s mostly kind of attitudes.
Rich: Yeah.
Paul: So ChatGPT is like, “We’re going to completely reinvent the world around ourselves.”
Rich: Yeah.
Paul: “And you’re going to use this chat interface, it’s going to fit here.” Anthropic is like, “You are going to do all the same things, but it’ll just be a little cozier, and you’ll be able to generate some more outputs.”
Rich: Yeah.
Paul: Perplexity is like, “You’re at a business with work to do and you need to do a bunch of research? We’re going to get in there for you.”
Rich: I think they also have, like, a shopping product?
Paul: Yeah, that’s right.
Rich: “I’ll show you an array of refrigerator images.”
Paul: They’re the ones that kind of are like, “Hey, the web, we’re going to really integrate with the web.” All right, so let me tell you about Llama. Llama is an interesting other—so this is Facebook. Facebook creates Llama. Llama is open source. You can download most of it or the whole thing, it’s got confusing licensing and so on.
Rich: It’s open source.
Paul: That’s right. You can get the whole—you can just get the database.
Rich: Yeah.
Paul: And you can do stuff with it. There’s a lot of products like this out in the world where instead of trying to compete with these big platforms, they’re like—
Rich: Grab your own.
Paul: Yeah, we’ll just—”Go ahead. We’ll just cut the legs out from under you.” You can also get a hosted version and so on and so forth. This is more in the raw materials part of the world, right?
Rich: Yeah.
Paul: So what I want to do is just sort of run through these, because they come up a lot. So you’ve got platforms. You’ve got things that integrate with the web. You have experiences. And then you have these kind of, like, open-source players who are just kind of in there to drive a wedge in—
Rich: Yeah.
Paul: —and make sure that they have their territory.
Rich: Yeah, and I also think that, you know, I don’t buy that it’s altruistic. I think it’s…
Paul: No!
Rich: I think it’s Facebook knowing that they are in people’s lives in very intense ways through WhatsApp and Instagram and whatnot, and that AI is going to be in the mix.
Paul: So I want to take one more minute and I’m going to ask ChatGPT to plan me a vacation in Paris. And we’re going to talk about what it does and says, and if we’ve learned anything, okay?
Rich: Let’s do it.
Paul: Plan me…I went into the ChatGPT app, and I’m going to type those words. “Plan me a Paris vacation.” Hit return. Now it’s got all these little icons when you do this, right? It’s got a little globe for search the web.
Rich: Yeah.
Paul: It’s got a research icon. It lets you switch versions. ChatGPT has all these versions. It’ll wear you out.
Rich: Yeah. I don’t think anybody changes them.
Paul: I do, I do, but we’re going to just keep it basic.
Rich: Well, you do. Yeah.
Paul: We’re going to just hit return. Okay. And I’ll give you a quick summary. I’m not going to read everything it says because that’ll wear you out. “Here is a structured—bold—Paris vacation plan. And—bold—that balances culture, food, leisure, and a few lesser-known gems. It assumes six full days on the ground—” Notice, it assumed that. “—excluding travel days, and is designed for a curious, culturally inclined traveler.” Oh, that’s very flattering.
Rich: Mmm!
Paul: So it gave me, okay, and it’s like, I got emojis, I got hotel recommendations, I got an itinerary overview. Wants to send me to Notre Dame and observe the reconstruction.
Rich: It’s incredibly impressive.
Paul: This is pretty impressive. Now let me make a few observations. So what’s just happened? It’s still typing. “Day five, modern art.” It wants me to go to the Pompidou Centre or the Musée Picasso. Day six, “markets and chill.” Got food tips. It is still going as I’m talking. It’ll go for like another minute.
Rich: Yeah. Yeah.
Paul: Okay. So I asked it to count from 1 to 10 when I was doing it, right? What is happening here? Now that we’ve talked about it, what’s happening?
Rich: It’s not that different than what you described before, which is you kicked off the string. A lot of people don’t realize that the words you typed in are the beginning of a to-be-continued episode. It’s essentially saying, “Write me an itinerary for a trip to Paris.” And it’s like, okay, the AI didn’t just lean back and say, “Oh, Paul’s going to Paris.” It literally said, “What happens next?” And what’s happening is its incredibly sophisticated predictive model is laying out that whole stream—it’s a stream of consciousness, but it’s a structured stream of consciousness.
Paul: It has absorbed every web page from every travel agency.
Rich: Its knowledge is vast.
Paul: That’s right. It has—
Rich: You could have it write this about every—you could pick a small town in Albania and it’ll do a—
Paul: Not as good, though.
Rich: Right, because there’s less information.
Paul: So what’s happened? Paris is—”plan me a Paris vacation,” I chose that because it’s one of the most predictable things a guy in New York City in middle age could say.
Rich: Boy, is it.
Paul: “Plan me a Paris vacation.”
Rich: Yeah.
Paul: And so, because millions and millions of people think, “plan me a Paris vacation,” and have put variations on it out there—and because there’s a huge market for Paris vacations?
Rich: It has a deep background.
Paul: It’s able, it has absorbed so much information—
Rich: Yeah. Correct.
Paul: And what it has done is, it has chewed that all up and put it in a statistical model, so it knows the sequence—
Rich: Yeah.
Paul: That usually gets sort of offered up when somebody says, “plan me a vacation.”
Rich: Correct.
Paul: And it may also have classic computer smarts in here that’s like, “Oh, hey, go over to the vacation module. They said Paris.”
Rich: Yeah, yeah.
Paul: Right? So there’s all sorts of things going on, but it’s not radically different than count from 1 to 10.
Rich: It’s not. It’s just the sequence… Imagine that every step of the way, it peers out at many billions of options.
Paul: And it’s got these internal prompts like, I’m going to read the very, very end.
Rich: Yeah.
Paul: It says, “Would you like this in a printable PDF format, or want help customizing it by interest?”
Rich: Yeah, I mean, this is good usability, right? I mean, it’s—
Paul: But what that means is that somewhere in there, there’s a system prompt that says, “Make sure to offer multiple output options.”
Rich: Yeah.
Paul: “And make sure to, if they have multiple interests, flag that and let them request something further.”
Rich: Yeah.
Paul: So they put that into the system, and then every time they evaluate, they add just a little bit of context and color, and then they give you back something that looks even more personalized and concierge-style.
Rich: Yeah.
Paul: But all they’re doing, it’s not psychic, it’s not thinking. It’s taking more and more tokens and smushing more and more tokens together and kind of reassembling the jigsaw puzzle in a way that feels more and more personal.
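The system-prompt idea Paul is describing can be sketched as a message list; the wording of the system line below is invented for illustration, not OpenAI’s actual prompt:

```python
# Hypothetical message structure: the user never sees the system
# message, but every reply is generated with it in context.
messages = [
    {
        "role": "system",
        "content": (
            "Make sure to offer multiple output options. "
            "If the user has multiple interests, flag that and "
            "let them request something further."
        ),
    },
    {"role": "user", "content": "Plan me a Paris vacation."},
]

# The model completes the whole conversation at once, which is why
# the answer ends with an offer like "Would you like this in a
# printable PDF format?"
roles = [m["role"] for m in messages]
print(roles)  # ['system', 'user']
```

That hidden first message is the “little bit of context and color” that makes the output feel concierge-style.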
Rich: Yeah. Look, the tone of this podcast is like, “Check it out. It’s not that big of a deal.” This is a big deal. This would have taken you two days to put together.
Paul: What’s tricky—I mean, at least, right?
Rich: At least.
Paul: What’s tricky is that we’re in such a culturally bizarre zone. Somebody the other day came and visited and they’re like, “You know, it’s either the most evil thing that’s ever happened or it’s a god.”
Rich: Yeah.
Paul: That’s how people talk about it.
Rich: Yeah, yeah, yeah, yeah.
Paul: And the nerds in the room are like, “Eh, it’s a weird database, but boy, is it cool. It can do all sorts of stuff.”
Rich: Yeah. Exactly.
Paul: And I really would advocate for everybody, this is the end of the first part of Summer School. See, it’s a fun summer school. We’re having a good time.
Rich: Ah, it’s the best summer school.
Paul: Just paper airplanes flying all over the room. Everybody can get up and—you can go to the bathroom right now. Nobody’s gonna say a word.
Rich: You don’t need a pass.
Paul: No, not at all. Make yourself at home, right here at the school. But I think, like, if there’s one thing to take away, it’s that this is just more software. It’s very weird software. Right? If you told somebody in the 50s about databases or web browsers, they would have said, “What? I can get—you’re going to have an encyclopedia in a box on a TV?”
Rich: Yeah.
Paul: “And it’ll have every article about every town in the world.”
Rich: Yeah.
Paul: “That’s the wildest thing I ever heard. You’ve made my dreams come true.” Right?
Rich: Yeah.
Paul: It’s one of those technologies. It is transformative at that level.
Rich: It is.
Paul: But when you learn how it works, it’s just more software.
Rich: I also think when you learn how it works, you use it a lot better. You get really good at it.
Paul: You do, because you realize that you’re manipulating a vector space instead of talking to a person.
Rich: A basic cursory understanding means you can be a more sophisticated user of this thing.
Paul: Yeah.
Rich: Because that’s, I mean, it’s a little bit of a ruse. It’s deceptive to think you don’t have to know anything. Just type in the box. And the truth is, you do hit a wall.
Paul: You really, really hit a wall. So now, up next, we’re going to talk about how agents work, because that’s the other big thing that comes up a lot.
Rich: Mmm! 007.
Paul: Yeah. So tune in next week. Hopefully we’ll have a special guest all lined up to give us some information.
Rich: Great.
Paul: And I wanted to let people know, Richard. I wanted to let people know something very important, which is you can find us on YouTube.
Rich: Yes. Subscribe and like.
Paul: You can find us on LinkedIn. I’ve been trying to post a new prompt every day showing how our product works. Speaking of our product, you can visit aboard.com and Rich, what’s there?
Rich: Aboard.com is our AI play. We do something quite different. We don’t blurt out sequences of words. We actually use a pretty sophisticated platform to build starting points for sophisticated apps, like, really serious business apps. It does something very different. We should talk about Kiro sometime down the road. Kiro is Amazon’s new thing. It turns out it’s hard to ship software off of a prompt. It’s really hard because it’s just, there’s more thinking and planning that needs to happen.
Paul: You can definitely write code off of a prompt.
Rich: You can write code off of a prompt. But our platform actually does a lot of the planning and thinking and architecting and then turns it into software. Check it out. There is a prompt box. It does something very different. It doesn’t take a second. It takes a few minutes. But it’s worth trying. And we’re starting to build software for customers.
Paul: If you want to reach us, hello@aboard.com, we’re glad to talk to you. And we’re talking to lots of people. We got a couple clients going. We’re sort of starting—
Rich: Off we go.
Paul: Starting to hum. It’s been, after a year and a half of talking about something coming, it’s here!
Rich: It’s out in the wild.
Paul: And, yeah. So we’re here to make friends. Got an office in New York City, and we’re glad to talk. All right, so let’s—good Summer School. Let’s go—
Rich: School’s out.
Paul: Gonna go smoke a cigar or, sorry, cigarette.
Rich: That’s weird. Teenagers smoking fat cigars is weird.
Paul: It is. It’s a cigarette. I don’t know where I was going.
Rich: You meant cigarette.
Paul: Yeah, yeah, but that’s bad, too. Don’t smoke.
Rich: Don’t smoke.
Paul: No, it’s summer.
Rich: Don’t vape either.
Paul: Don’t vape. Go have a healthy fruit juice.
Rich: Get a smoothie.
Paul: Yes, a smoothie. Okay, friends, we’ll talk to you.
Rich: Have a great week.
Paul: Bye!
[outro music]