A New AI Acronym

October 8, 2024  ·  25 min 58 sec

You’re a business stakeholder trying to evaluate AI tools for your organization. How should you assess them—and how should you measure the value of their outputs? On this week’s Reqless, Paul pitches Rich an acronym for this very task: TRACE. Transparent, Repeatable, Actionable, Clear, Efficient. How can these metrics help someone understand these tools before letting them into their org, and help them calculate the potential return on investment? Plus: Paul and Rich discuss a recent interview with MIT economist Daron Acemoglu on just how many jobs he’s calculated will be eliminated by AI in the next decade. 

Show Notes

Transcript

Paul: Hi, I’m Paul Ford.

Rich: And I’m Rich Ziade.

Paul: And this is Reqless, R-E-Q-L-E-S-S, the podcast about how AI is changing the world of software—and other stuff besides.

Rich: Yes!

[intro music]

Paul: Rich, how are you doing?

Rich: I’m doing okay.

Paul: Okay, let’s talk for one minute. I don’t know if there have ever been more elephants in the room.

Rich: And they’re ridiculous elephants, too.

Paul: Yeah, I mean, you and I—so we’ve been having, professionally, things are going well. We had a great event. We’re rolling out some new stuff. We’re talking to lots of people.

Rich: Yup.

Paul: We have this nice podcast where we talk about AI and software. Has lots of listeners. They get in touch. How’s the world going?

Rich: Well, let’s date ourselves, just so we can help the staff at the Library of Congress—

Paul: Yes.

Rich: —when they archive this thing. It is early October 2024.

Paul: Right.

Rich: There is a pivotal American presidential election coming up in a month. The Middle East is essentially on fire right now. There’s serious—I’m Lebanese. And we have a team in Lebanon. And so we’re actually affected by everything that’s going on. Paul, by proxy, you’re a very kind Irish co-founder—Irish-American co-founder—

Paul: Yeah.

Rich: Who is very close to a lot of what’s happening just by virtue of having all these Lebanese—Lebanese tend to attract Lebanese, Paul, I don’t know if you’ve noticed that. [laughing]

Paul: Yeah. There’s always another sort of Ziade-like person, but it’s great.

Rich: There’s a lot going on, and it’s a—

Paul: I love our Lebanese sort of friends and family here.

Rich: Yes.

Paul: So anyway, I’m calling it out, honestly, just because I feel that we’re about to go and talk about the way the world’s changing with AI, and…I… [laughing]

Rich: Well, we just went to—there’s also a guilt aspect, and I think we’re going to say something universal here.

Paul: Yeah.

Rich: Like, we just complained to each other that the service was slow at lunch.

Paul: Yeah. Oh yeah.

Rich: In that context, it feels insane.

Paul: And literally, while that’s going on, you’re getting text messages from people who are like, I’m not sure if I should stay in my house. Like, it’s a lot right now.

Rich: It’s a lot right now.

Paul: And I just wanted to call it out because we’re going to keep—this is, the thing we can control is our relationship with software, and we’re going to talk about that. I know you and I are both, we’ll just, we’re maxing out our donations. We’re doing the best we can.

Rich: Doing the best we can.

Paul: And if you’re in the same boat, just kind of know we’re all here together. Like, it’s a hectic moment. And…

Rich: Yeah.

Paul: But at the same time, it’s a weird moment, too, because this other thing is happening, in the world of AI and so on, that is just huge and big and positive and negative, and it’s—God, I really miss living in a time where I could just make this the only focus of my interests, right?

Rich: Yeah. Yeah. And hopefully the world will calm down.

Paul: Well, it’s all got to fit together in the end, right?

Rich: It does.

Paul: So here we are.

Rich: Yes.

Paul: We’re in a world—I mean, who’s one of the big guys in AI?

Rich: Who?

Paul: Elon Musk. He just has some big event. He’s doing, he’s trying to—

Rich: Yeah.

Paul: You know, he’s competing with Grok against OpenAI.

Rich: Yeah, yeah, yeah.

Paul: So it’s all, it’s all fitting together in a piece.

Rich: Yeah. Okay.

Paul: So anyway, there we go. Hope everybody stays safe. So I’m going to completely change the tone. You ready?

Rich: Do it.

Paul: I am writing—are you ready? Are you ready for some exciting news?

Rich: Oh! Paul Ford’s writing! That’s all you gotta say.

Paul: I’m writing a white paper about how AI is changing software. It’ll probably be called “Reqless”.

Rich: Okay, I want to just say out loud: There are white papers that are essentially the paper equivalent, or the PDF equivalent, of white noise to help you fall asleep.

Paul: Yeah.

Rich: And then there is the Paul Ford white paper.

Paul: No, no. We, like—lots of illustrations and shenanigans, and they’re full of frolic.

Rich: As Paul Ford is known to do.

Paul: Yeah.

Rich: You know?

Paul: You can’t explain anything by making it boring.

Rich: You can’t. And you might as well have a little fun. It’s a wild time in the technology world, and let’s explore it. I mean, you’ve written long pieces for magazines—

Paul: So many times.

Rich: Exploring tech and culture and everything, and this is another moment.

Paul: So, anyway, we’re gonna be working on it probably for, like, a month.

Rich: How much are you gonna charge?

Paul: Oh no, we’re going to give it away for free, once we get your email and your role on the website.

Rich: Put an asterisk— [laughing]

Paul: Yeah.

Rich: Put an asterisk on free!

Paul: Yeah.

Rich: Because that’s where we—

Paul: You’re going to sacrifice your privacy, but you’re going to get something with a lot of colors that explains how AI is changing software.

Rich: That’s called the internet.

Paul: Somewhere before Thanksgiving. Let’s, let’s—

Rich: Okay.

Paul: One of the weird things about writing about this field is it’s moving so, so quickly. So writing about code, you’re looking at The Mythical Man-Month, written in the seventies, right? Like, there were—

Rich: There’s 30 years of history.

Paul: And 30 years of analysis, and 30 years of this works, and that doesn’t work, and there’s systems like Agile and so on. None of that exists. We are—the dumb metaphor about the plane being built while it flies?

Rich: Yeah.

Paul: Is very accurate here, because we literally, they launch the tools, and then people figure out what they’re good at.

Rich: They launch them in a—and we appreciate this, because we value interfaces, and I don’t mean interfaces just as in human interfaces, but also machine interfaces. They launched it in such a way where you could play.

Paul: I mean, that was ChatGPT with chat, and it was like 3.5 or 3.—

Rich: Yeah, yeah.

Paul: And so we’re in this wild moment, and I realize, as we go talk to people

Rich: Mmm hmm.

Paul: One of the things that’s really tricky is AI is getting into organizations as a big new thing.

Rich: Yes?

Paul: We talked to Noah Brier on the podcast, and he told us that AI is getting into the organizations through him as a consultant. He’s making things and doing a lot of work that normally would have cost millions of dollars, you know, for thousands of dollars, using tools.

Rich: Correct. And I think people who would love to get the thing done but don’t have the budget or the time are just, they’re just asking the question. “Well, now that AI is here, can I just get it now faster and cheaper?” Like they’re asking the question. They don’t really know if they can. And humans tend to extract maximum money out of other humans.

Paul: It’s kind of like everybody has, like, what you call the innovation budget and it’s still there, right?

Rich: Yeah. And it’s like, “You know that thing I’ve been wanting forever? Can I now have it?”

Paul: Yeah.

Rich: Because there’s this incredible leap in output and productivity.

Paul: And so it’s a lot of marketing, and it’s hard to measure. And I was thinking for a while, and I was like, and I’m going to workshop this with you. You’re going to beat it up here in the podcast.

Rich: I love beating it up.

Paul: Let me switch into consulting mode.

Rich: Go!

Paul: Okay? You have a lot of opportunities for using different kinds of AI tools, and good for you.

Rich: Yeah.

Paul: But how are you going to discuss the value that they bring to your organization and whether they are working or not? And now, look, obviously there’s like, “I needed something that could draw a picture of a squirrel.” Like, wow, that was value. Good. Okay? But—

Rich: If that’s your thing!

Paul: If that’s your thing. Right? But for the broader business case, how are you going to understand what these things are before you let them into your org, so that you can then kind of calculate what the return on investment is going to be?

Rich: It’s hard.

Paul: It is. And there’s very little—there’s a lot of like, kind of thinking about this, but there’s very little that’s kind of good rule-of-thumb kind of stuff.

Rich: Yeah.

Paul: I’m going to give you an acronym. It’s a word. You ready?

Rich: [excited whisper] I love acronyms.

Paul: I know, I know. This is, the inner consultant is coming out.

Rich: Go.

Paul: TRACE. T-R-A-C-E.

Rich: Beautiful word.

Paul: How do we define the value of AI outputs?

Rich: Mmm.

Paul: Okay? So I’m going to start with the first one.

Rich: Meaning, is this AI tool worth it?

Paul: Yes, that’s right. Is this worth it?

Rich: So I’m an executive, I’m a decision maker, I’m a—

Paul: I need a baseline criteria for, like, is this quality or not?

Rich: Ah. Okay. All right.

Paul: Before I can even start to work with it.

Rich: Okay. I got the framing. Go.

Paul: Okay, so a consultant has shown up trying to sell you a new AI thing. And I want you to say the word, I want you to try TRACE. So “T” is for transparent. Okay?

Rich: Okay. What does that mean?

Paul: Meaning I know where the output came from and what prompts were used to create it. It’s not just dropped in as a black box. It’s not magical. We are learning how LLMs work and how they can be used in an organization.

Rich: Mmm hmm.

Paul: So I can tell you, any piece of content I create, I should actually be able to cite the AI. This whole thing where we’re hiding what people are doing?

Rich: Yeah.

Paul: Is not cool.

Rich: Right.

Paul: Like, we have to stop that. We need to—so I think, one, it has to be possible for you to communicate what the robot’s doing and how you are using it.

Rich: Utterly reasonable.

Paul: And that should be really clear. That should be the footnote at the bottom of the slide. I’m going to give you an “R”. You ready for R?

Rich: Go.

Paul: Repeatable. Meaning I can use variations on a prompt multiple times and achieve a similar result each time. Because what I’m seeing is people come in with the magic and they’ve prompted the hell out of the magic and they found the one thing that looks great, but then you kind of don’t know what to do to get it to happen again.

Rich: Mmm hmm.

Paul: So make the picture, write the story, and so on. And so there’s, that’s the whole category of people known as prompt engineers…

Rich: Meaning, will the output be somewhat predictable?

Paul: Yeah, because what’s the use of the tool if you can’t use it the same way?

Rich: Yeah.

Paul: Like, I wanted to summarize a text or I wanted to X, Y, Z.

Rich: A challenging requirement.

Paul: It is, and it’s not a challenging requirement if you are only using the base cases for these things. Like, draw me a picture in the style of, or whatever. But as we get into it, you’re using them, you’ll do one prompt and then you’ll take the output of that and you’ll feed it to the next prompt and so on.

Rich: Yeah.

Paul: And so if you’re going to build pipelines with this stuff, it actually kind of has to work—it’s not deterministic in the same way that code, classic code is.

Rich: Sure.

Paul: But it has to—

Rich: It’s gotta be somewhat predictable and stable.

Paul: Otherwise you’re just at the mercy of the product.

Rich: Sure.

Paul: And it’ll just stick a goose in the picture every time. Okay, “A” out of TRACE. We’re halfway through. Actionable. The output can be used, modified, shared, transformed in ways that create value.

Rich: Mmm.

Paul: So now—and again, I want to get us past magic tricks. I’ll give you an example. ChatGPT can’t really make good diagrams. It makes them as images.

Rich: Yeah.

Paul: But you know who does? Claude. Claude by Anthropic. It’ll use these text-format outputs. And so if I wanted to make an org chart, if that was my job, org chart maker or code diagrammer, I’m going to go over to Claude.

Rich: Interesting.

Paul: There’s a variety of outputs that I can teach it about, and I’ve had better success there.

Rich: And I think what this is highlighting is sometimes you want to take the output and finish the work.

Paul: Correct. Well, I need to bring it into my visualization tool—

Rich: Whatever tool.

Paul: Exactly. I don’t want it, it’s going to take me 100 times as long to get it to “finish” that as it would for me to just take what it got together quickly. So: Actionable. And you can really evaluate these different tools this way because they create different kinds of outputs.

Rich: Yeah.
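
A quick illustration of what Paul is getting at: below is a rough Python sketch of asking for a diagram as text (Mermaid source) instead of an image. It’s hypothetical throughout: the prompt, the expected output, and the call_llm helper are stand-ins, not a specific vendor’s API.

```python
# A rough sketch of the "Actionable" idea: ask the model for a diagram as
# text (here, Mermaid source) instead of an image, so the output can be
# pasted into a visualization tool and edited by hand.
# call_llm is a hypothetical stand-in for whatever model or API you use;
# it returns a canned response here so the sketch runs as-is.

def call_llm(prompt: str) -> str:
    return (
        "graph TD\n"
        "  Lead[Design Lead] --> A[Product Designer]\n"
        "  Lead --> B[Brand Designer]\n"
    )

org_chart_prompt = (
    "Produce an org chart for a three-person design team as a Mermaid "
    "flowchart. Return only the Mermaid source, with no extra prose."
)

mermaid_source = call_llm(org_chart_prompt)
print(mermaid_source)  # paste into any Mermaid-compatible diagram tool
```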

Paul: The code might be more actionable in one, the images might be more actionable, the text might be better in the other, et cetera. Clear. The output is high quality, easily understood, and can be shared with other people or executed as code. You see this growth happening in something like ChatGPT, where the bullet points are a lot better than they used to be.

Rich: Mmm.

Paul: Right? The code that comes out of Claude is really clean for the most part, especially if you’re having to do front-end components.

Rich: Right. It’s not just that it works, but it’s also readable.

Paul: Well, often when you’re—because I’m using this stuff more and more, the clarity starts to fade as you keep re-prompting.

Rich: Mmm.

Paul: It’ll sort of, you’ll be like, fix this bug, and then it’ll introduce five other bugs. And now you actually—so you have to make an objective choice as to whether you’re gonna spend a half hour debugging, keep asking it, or start over.

Rich: Mmm!

Paul: And I’m finding that starting over, but now building a more reproducible prompt pipeline, is a little bit better.

Rich: Interesting.

Paul: So—

Rich: I like the term, is that a term that’s out there, prompt pipeline?

Paul: No, but we’ll go with that.

Rich: I think it’s an interesting term because it speaks to the work that you still have to do.

Paul: This goes back to my thing, which is these are not magic working gnomes, they’re translation. You give it a prompt and it translates it to one thing and then you take that output and you translate it to another.

Rich: Mmm hmm.

Paul: And the more you can break that down, the easier it is to understand what it’s doing and the more reproducible.

Rich: Yeah.
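
If it helps to picture the “prompt pipeline” Paul is describing, here is a minimal sketch, assuming a generic call_llm helper. The function names and prompts are illustrative, not anything from the episode.

```python
# A minimal sketch of a "prompt pipeline": small, repeatable steps where
# each prompt's output feeds the next, instead of one giant prompt.
# call_llm is a hypothetical stand-in for whatever model or API you use;
# it echoes the prompt here so the sketch runs as-is.

def call_llm(prompt: str) -> str:
    return f"[model output for: {prompt[:60]}...]"

def summarize(text: str) -> str:
    return call_llm(f"Summarize the following notes in three bullet points:\n\n{text}")

def extract_action_items(summary: str) -> str:
    return call_llm(f"List the concrete action items implied by this summary:\n\n{summary}")

def pipeline(notes: str) -> str:
    # Breaking the work into steps keeps each one inspectable (Transparent)
    # and easier to rerun with a similar result (Repeatable).
    summary = summarize(notes)
    return extract_action_items(summary)

print(pipeline("Raw meeting notes go here."))
```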

Paul: All right, last letter. Efficient. I can achieve this goal more quickly using an LLM than using other means. And I feel that this is the great curse of all new technologies. Blockchain is a perfect example, where it was, like, “You’re going to be able to buy a candy bar by using 200 billion kilojoules of energy, and it’ll only take six weeks.”

Rich: Yeah, yeah.

Paul: And you’re like, “But I was just, I can do that, I have a dollar in my pocket.”

Rich: Is it better?

Paul: Is it better? Right? And is it better—and sort of, those tie into all the other ones. But can I really make a case that I can do more with this thing?

Rich: Yeah.

Paul: And not that it’s just easier or lets me do more—because it’s also, you’ve got the kind of, “but such big portions” problem with a lot of this stuff.

Rich: Yeah. Yeah.

Paul: I find when I am using it with laziness that it produces all kinds of stuff that I don’t even read. Like, you know, it’s like I don’t look at the code and I’m just like, “Does it run? Does it run?”

Rich: Yeah.

Paul: And that, long-term, if a human is not interpreting, evaluating, and acting on the outputs, you’re just making a big mess for yourself later. Like AI-written code is still going to be legacy code in the future.

Rich: Right.

Paul: It’s not—

Rich: You better understand it, is what you’re saying.

Paul: Yeah! You should make sure the documentation actually lines up with the code, etcetera.

Rich: Totally.

Paul: And those things fall under efficiency. So that is my consulting acronym.

Rich: Yeah.

Paul: You didn’t beat me up too hard on the way.

Rich: No. Look, I think it’s making a lot of sense.

Paul: How would you measure AI value? Okay, we got this whole new world. I’ve seen you. You’ve actually walked this tool into organizations, demoed it, people are leaned in.

Rich: Yeah.

Paul: You sell to C-level people, have for years and years.

Rich: Yeah. Yeah.

Paul: So how are you gonna prove that there’s value here, that they can’t get some other way?

Rich: When I think about value, I think about it in the context of the stakeholder or the consumer of the tool. Meaning, if I don’t care about cars, I don’t see the inherent value of a $150,000 supercar versus a $50,000 SUV. Like, I don’t see it.

Paul: I don’t think those numbers—I think you care about cars so little that I think, I think everything’s like five times more expensive than you’re thinking.

Rich: Yeah, maybe.

Paul: Yeah, okay.

Rich: Like, an SUV is, like, $50K. I think you can get one for $40K, I think.

Paul: Okay.

Rich: And then there are cars that are $200K, right?

Paul: Yeah.

Rich: And for me, I get it. It goes faster. That doesn’t do it for me. The paint is very bright colored, and that doesn’t do it for me.

Paul: You live in Brooklyn, so you can go 14 miles an hour and the paint all gets ruined.

Rich: Yeah, exactly. So I think when you look at it through the lens of who is going to benefit from it?

Paul: Mmm hmm.

Rich: Value has very different meanings.

Paul: Okay.

Rich: I think as an engineer or practitioner, value is efficiency, reducing errors, better documentation, something that really augments my own core skills.

Paul: Sure.

Rich: And making me feel good.

Paul: Should make me look good.

Rich: Make me look good. I’m in an organization. Not—everyone’s working for someone, most of the time. [laughing]

Paul: Right.

Rich: So that is a particular lens on value.

Paul: And I would say the tools like Cursor or the built-in code generation in Claude, those are demonstrating that kind of value today. People are using them.

Rich: That’s right. They’re using them.

Paul: We’re using them to write code—

Rich: Yes.

Paul: —that supports our AI software-development tool, Aboard.

Rich: Now, I am someone who has more—I am a technologist. I do consider myself a technologist, sort of a platform thinker. But I am someone who interfaces mostly with non-engineers.

Paul: Business.

Rich: Business. And for business, value is looked at through—there is an instantaneous currency exchange that happens with anyone you’re talking to in business.

Paul: Mmm hmm.

Rich: And that is: Does it add value to my business?

Paul: Right.

Rich: They don’t care about documentation. They don’t care about the fact that it’s creating automated scripts. They just don’t—it’s not that they don’t care. It’s that they are too far removed from it. So what you have, and you mentioned—

Paul: Let’s actually, like, quantify that. It could make them more money now. It could allow them to spend less money today. Or it could cut future risk.

Rich: That’s right. Those are pretty much the drivers, right? And so what they’re hearing right now, it’s like there’s an odd smell in the room. That’s about where they’re at. Meaning, they just see it as, like, “Hey, I’m hearing a lot about AI. Does that mean I can finally shave that 4% off of my cost so my margins go up?” That’s literally how they think. Or more like this: “I’ve had enough customer service people complain about our tools, and I’ve kept brushing them off because it’s too expensive. Is it now not so expensive?” That’s about it.

Paul: Yeah, it’s like a currency trader hearing, like, big news about the Indonesian rupiah. Right? Like, just sort of like, “Whoa, I haven’t thought about that guy in years.”

Rich: Yeah. And so when I hear you talk, I think what’s interesting about your acronym, and I think it’s worth sharing, this is a draft.

Paul: Yeah, yeah. No. I’m pitching it to you today.

Rich: If I put that acronym in a table and I put in two columns on each of those letters?

Paul: Mmm hmm.

Rich: One is practitioner.

Paul: Mmm hmm.

Rich: And the value they see in it, in that facet.

Paul: Mmm hmm.

Rich: And then another is the business stakeholder. And business stakeholders, there’s a few that actually really light up for business stakeholders, and there are a few that really light up for practitioners.

Paul: Let’s split and slightly mischaracterize these two groups. I think that the practitioner, it’s almost, at this point today, a kind of play. Like, I will play with this thing until it gives me the outputs I want. And I’m willing to invest that time. For the stakeholder, it’s like, is this reproducible so that it can drive one of those three kinds of values?

Rich: Reproducible—is one of your letters.

Paul: Yeah.

Rich: Is reproducible. Hugely important for a business stakeholder, obviously.

Paul: Yup.

Rich: Because if you can’t, then it’s a toy. It’s not impressive. Business stakeholders see crazy things out of engineering.

Paul: This is the most vulnerable this technology is, because a large organizational stakeholder does not want that much serendipity in their outputs. Whether it’s text—

Rich: Or none. They want to minimize it. I mean, they love that there’s, like, it seems to be spitting out output that used to cost me a lot more money. We’ve gone from like $10 a line of code to maybe three.

Paul: Yeah.

Rich: I don’t know. I can’t tell. But in the end, I am not going to expose my business and bet my business, frankly, on code that spit out of a machine. I do need a human being. I got to yell at somebody.

Paul: I need a senior engineer and analyst to, yeah.

Rich: And this is a recurring theme, I think, in the last few podcasts we’ve recorded, which is that this demands everyone up their skills so they understand what these machines are spitting out.

Paul: The hard part is that the actual fundamental technology is still kind of a black box to a lot of people who are working in it.

Rich: That’s right.

Paul: This is truly one of the hardest things to understand.

Rich: Yeah. And do you have to understand how they’re doing what they’re doing? I actually don’t know if you do. I think it’s good to have the concepts.

Paul: I mean, I don’t understand how my TV works in every detail when I turn it on.

Rich: Exactly. Here’s what I think you do have to understand. When you ask it to spit out a React-based to-do list manager?

Paul: Mmm hmm.

Rich: Understand the code it spit out, and…

Paul: Be able to compare the value of that code to the value of equivalent code created, maybe it takes 100 times longer, by a human being.

Rich: And what you have to worry about is that I think there has been a trend for a lot of engineers to pick up frameworks—it becomes the hot thing—and not really understand why or how they work. Look, the really solid engineers, the architect-level people understand it, but for many, they’re just happy that it all glued together and it worked.

Paul: I mean—

Rich: The thing, the machine’s humming.

Paul: Before the robot wrote the code for you, you would go to Stack Overflow and copy and paste the code.

Rich: Let me ask you a question.

Paul: Okay.

Rich: And this might ruin us as a podcast.

Paul: Okay.

Rich: There are a lot of people who think Tailwind, the CSS framework—

Paul: Yeah.

Rich: Is great.

Paul: Yeah.

Rich: Do you think—what percentage of those people, which is, frankly, I think it’s one, Tailwind is everywhere.

Paul: Yes.

Rich: Like, now in web development. How many of those people know why it’s great?

Paul: An extreme minority.

Rich: Okay. You said that, not me.

Paul: Mmm hmm.

Rich: I was gonna say—

Paul: No, look—

Rich: —a significant minority. You said an extreme minority. But you know what? You get trained on it and you’re like, well, I guess this is what I use.

Paul: Well, and it works.

Rich: And it works.

Paul: You change the numbers in the classes and you add the little bits of text, and you—

Rich: It’s got some clever tricks to it.

Paul: And you read its manual, and you live inside of its world. Frameworks actually give you, like, a cultural and social context. You can go to the forum. Everybody’s able to answer your question.

Rich: Not only answer your question. They’re fans of the thing.

Paul: They love it. And it abstracts away all the legacy stuff that is much more confusing. Right? So I have very mixed feelings about Tailwind and React and all this because I came up on the original web.

Rich: Yeah.

Paul: And so then these new platforms show up and they sort of blew it up. And I think AI is doing that now, too.

Rich: That comfort in being able to use the framework but not fully understand it? You can kind of get by.

Paul: Mmm hmm.

Rich: But with AI, you can use the output and not understand it. And I think it’s going to be far more dangerous. And I think businesses, like, the customers of that code are not going to stomach it.

Paul: Oh, you’ve created instant legacy code.

Rich: It’s wild. It’s not only, it’s like—

Paul: Because the developer—

Rich: Instant, potentially hallucinatory legacy code.

Paul: Also the developer who wrote it is gone, right? They wrote it two minutes ago, somewhere on a server.

Rich: They quit.

Paul: They quit, yeah.

Rich: They immediately quit their job.

Paul: Yeah, and so now you have—

Rich: That’s a good sort of metaphor.

Paul: AI employees offer very bad retention. Right? Like, they don’t stick around.

Rich: They answer your question, and then they say, “I quit.” Every AI prompt should end with, “I quit. This is a hostile work environment.” [laughing]

Paul: That’s exactly right. We’re gonna get there, don’t worry. So, look, let’s close this. I’m gonna tell you an interesting analysis. There’s a professor at MIT. Okay? Good school.

Rich: Pretty good.

Paul: Daron Acemoglu.

Rich: Mmm hmm.

Paul: Who says, look, he’s at MIT, he’s taking AI seriously.

Rich: Sure.

Paul: You’re rolling your eyes, but it’s real.

Rich: Eyes rolling.

Paul: And he’s like, look, in the next decade, how many jobs—how many jobs do you think he is predicting will be taken over by AI?

Rich: I have no idea. Millions.

Paul: 5%. He’s like, only 5% are really vulnerable here.

Rich: Wow. Does he explain?

Paul: First of all, he’s like, okay, let’s say I’m right. Then the way that we’ve invested makes this kind of a bubble. I’m like, 5% is a lot of percentage of jobs, personally, like, especially—

Rich: I mean, that’s a recession.

Paul: Yeah. Especially if it’s engineering jobs.

Rich: Yeah. If unemployment goes from 3% to 9% or whatever, you know, the math doesn’t work that way, it’s not the only—5% of a particular sector’s jobs. Okay. Does he explain why?

Paul: So he sees that there’s just, like, too much excitement in the C-suite, and he just sort of did a big analysis. He just doesn’t see it landing.

Rich: Interesting.

Paul: Like, he’s like, when you bring this to the economy and you look at the actual state of the technology in the next ten years…

Rich: Yeah.

Paul: It’s—stuff like we’re talking about? Sure. Vulnerable. Things are going to change, but like this idea that, uh, in ten years you have AGI and 80% of people will be, you know, relaxing at the beach while robots work for them.

Rich: Yeah, yeah.

Paul: Is banana cakes.

Rich: It’s banana cakes.

Paul: And yet that is what is being expressed. That is—when you drive around San Francisco, it’s just sort of AI billboard after AI billboard.

Rich: Sure.

Paul: And so I just sort of thought I would throw that out. The article doesn’t go into supreme detail, but here we are.

Rich: Yeah.

Paul: And that’s worth thinking about. Now, again, I would say 5% is utterly worth a conversation.

Rich: Yeah. And I can see where he’s coming from. Look, man, we come from the world of enterprise software, which is a corner of software, but I’ll just tell you outright, even these spectacular tools are not going to be able to keep up with the crazy requirements of any particular business. It’s just very specific. It’s very dialogue-driven. It’s a lot of problem solving.

Paul: Well, this is why you can’t just install Salesforce and have a functioning sales team.

Rich: Takes like six months.

Paul: Yeah, this is why enterprise software exists. It’s because you had to bring culture to the template.

Rich: There are incredibly specific needs that require you to listen to them and then translate them into tools. And we are very far away from that. Now, that doesn’t mean you… like, the phrase I’ve been sticking to these days is, AI is real good at the easy parts. A lot of the rote, basic building-block stuff? Skip it. Focus on the rest of the thing.

Paul: What this guy is really saying is, look, be rational. 5% in 10 years is a pretty big deal. But it’s almost to a point where everybody’s saying that so much change is coming that it’s starting to become like a pump-and-dump scam. Like, we all gotta calm down here.

Rich: Yeah. Which, I mean, look, the Valley is so happy to have a new…

Paul: Shiny toy.

Rich: New shiny toy, right? Speaking of shiny toys!

Paul: Oh, right, we forgot to mention our AI-powered software startup. Rich, I come to you with a problem, I need to, I have a lot of people who go out on service calls. I have a lot of people who, we’re trying to manage complicated data in our organization. I have a collection of weird hats that I want to catalog because my CEO is a huge hat collector. What do I do?

Rich: You call Aboard.

Paul: Aboard! You don’t call us, you send us an email. Listen, we’re modern.

Rich: [laughing] Send an email to Aboard. We are a platform that generates an absolutely amazing starting point for what you need. And then we bring product specialists in to finish it off.

Paul: That’s our secret technology.

Rich: Our secret tech is people.

Paul: Yeah.

Rich: But we use AI to skip a lot of the steps, get you there much more quickly, have a prototype, working software in your hands in weeks.

Paul: I mean, it’s what we used to do and charge, you know, like 5, 10 times more, so.

Rich: Yeah. Prototype in days, working software in weeks, final delivery in months. I’m going to start using that as a phrase.

Paul: There we go. Check out aboard.com, my friends. And if you want to get in touch, send an email to hello@aboard.com. We read it, goes straight to us.

Rich: Take care of yourselves. Have a wonderful week.

Paul: Bye.

[outro music]