Why do we try to explain tech concepts and processes with metaphors—and why do we choose the metaphors we use? On this week’s podcast, Paul and Rich get philosophical, kicking off the conversation with an article about how the human is not like a computer, and travel through the history of personal computing to our present AI moment. Plus: How exactly should you handle the idealists in your organization? 


E33

The Dangers of Metaphors in Tech

Paul Ford: Hi, I’m Paul Ford, the co-founder of Aboard.

Rich Ziade: And I’m Rich Ziade, the other co-founder of Aboard—and CEO.

Paul: Very good. Rich, I’m going to tell you what Aboard is really quickly. I know you know, but I think it’s good for you to hear it. Aboard is a data-management platform for everyone. So if you’ve ever used a spreadsheet to make a list, whether it’s a shopping list or, like, almost software that you use inside of your organization, because when you go into companies, people have got spreadsheets everywhere that kind of secretly run the place. Aboard is a tool for turning that into actual software. Invite people, move cards around, take that data, and make it into something that you want to use all day, rather than something you have to use all day. You can pick it up right now and save web links for it. Or you could be like the organizations we’re talking to and turn it into software that runs really important parts of your company. That’s what it’s for.

Rich: Check it out. Aboard.com.

Paul: It’s free. Free to try. All right, so, hello@aboard.com, if you want to reach us, that’s enough marketing for today. Let’s actually record the Aboard Podcast.

[intro music]

Paul: This is going to be a Paul Ford special, which is, I have a big idea, and let’s see if we can actually get somewhere with it.

Rich: Or shift-delete.

Paul: [laughing] Yeah, that does happen, too. So, look, I picked up and read an article in, it’s a publication called Aeon, A-E-O-N.co, and the article is called “The empty brain. Your brain does not process information, retrieve knowledge, or store memories. In short, your brain is not a computer.” So, look, let me, this is like out of the world of cognitive science and neuroscience and so on. This is not a wacky idea. We’ve known for a while that brains don’t really work like computers. It’s not like something you can switch on and off. They’re kind of always kind of firing. It’s like this sort of wild, blobby mess of signals just going every which way. And they’re super interconnected. But people really want to believe that computers can become intelligent like human brains.

Rich: They do.

Paul: Let me ask you, why do you think that is? Why do you, why do you think that tech, our industry, a really significant number of people in our industry, have been convinced for a long time that it’s only a matter of time before the computer becomes a person?

Rich: I mean, I think if you, I’m gonna get a little lofty for a brief moment.

Paul: Okay.

Rich: I think when you look at invention for the last couple hundred years, a lot of it is about, you know, sort of extending our own faculties and our own capabilities, right? Like the invention of the camera is this enhancement on how we take in visual images. The, the invention of the telephone is a, it is taking our ability to communicate through language and extending it anywhere instead of just, you know, in the room.

Paul: Mmm hmm.

Rich: So I do feel like when we think about innovation, and when we think about invention, we are thinking about extending how humans live and extending our own potential, right? Like flying. It’s like, yeah, I’m not elegant like an eagle. I can’t just flap my wings and fly. But gosh darn it, if you buy tickets to Phoenix on a Delta flight, you’ll fly, right? [laughter] So I think. I think the, the ultimate end in that is to as closely as possible, replicate the, just the magical ability of intelligence and of cognitive thinking. That is the final frontier in a lot of ways for a lot of people, because it’s still so mysterious. I, for my own reasons, I’ve gotten to know various neurologists and people in medicine that focus on the brain, and they love telling you how little they know. [laughing] It’s like, you don’t understand. We don’t know anything.

Paul: How do they see the brain? How do you think a neurologist or a neuroscientist looks at the brain?

Rich: I think there are some, I mean, you started this podcast by saying that it’s not like that, but there are some things that are like that, and that is, data inputs actually exist. Like, those are scientifically proven. There are optic nerves, there are auditory nerves, there are nerves on your skin. There are thousands of taste buds on your tongue. Those things actually—

Paul: [overlapping] Like, your eyes are not video cameras, right? Like, they’re not. It’s…

Rich: They’re not.

Paul: It doesn’t work that way.

Rich: But there are optic nerves that run like USB cables to this CPU, to this center, this command center, right?

Paul: Mmm hmm.

Rich: Now, once you get into the command center, it looks like pudding. [laughing]

Paul: Right.

Rich: Which is where things get kind of tricky.

Paul: I just saw, I just went and got glasses, and they did that, they do a photo of the inside of your eye.

Rich: Yeah, yeah, yeah.

Paul: Yeah, I mean, it’s, whoo boy.

Rich: Yes. Yeah. And so, and so I think about all they know, and I am no neuroscientist by any means, is that it is a massive web of neurons with electrical signals traveling through it. That’s about it. Like, they also know, not for any other reason than the fact that they poked at it, that the front of the brain is emotion, and then there’s, like, the left side is motor skill. And the only reason they know that is because they poked at it with a stick. [laughing]

Paul: Yeah. It’s literally like a soldering iron. Like, why doesn’t that cir—what’s that circuit do?

Rich: [overlapping] Yeah. Beyond that? It’s, look, you know, it is, it is very mysterious. It’s probably one of the most mysterious corners of medicine, obviously, like, for all the reasons, right? Like, they can’t fix certain things about the brain. They don’t know a lot. They’re terrified of it. Neurosurgeons are weirdos because they’re the only ones willing to go in. It’s a whole world.

Paul: So this article is written by someone named Robert Epstein, who used to be the editor in chief of Psychology Today. I wanted to mention that. And also—

Rich: Okay…

Paul: He has a spot in here, relatively towards the top, where he lists, citing another writer who is an artificial intelligence expert, the metaphors that people have used for intelligence over the last 2,000 years, and my favorite is, I’ll just read it to you: “The invention of hydraulic engineering in the third century BCE led to the popularity of a hydraulic model of human intelligence.” So, like dams, right? “The idea that the flow of different fluids in the body, the humors, accounted for both our physical and mental functioning. The hydraulic metaphor persisted for more than 1,600 years, handicapping medical practice all the while.” So here’s why this all really resonates—

Rich: [overlapping] Not a good theory, overall.

Paul: Well, but it worked for, it worked for, like, the, but the big engineering challenges of that time were getting water into the city, right? [laughter] And getting—

Rich: How do you make that leap to, like, artificial intelligence? Like, whatever. Okay, okay.

Paul: I think that’s what humans do, because after that, everybody got into, like, springs and gears, right?

Rich: Yeah.

Paul: Like, you know, the railway shows up. What I’m going for here is, and this is, it’s really kind of meta, but I think it’s relevant, which is technology, when it lands—technology is physical. Like, a chip is a physical thing. It’s made out of, it’s got a silicon base—

Rich: Yep.

Paul: —and it’s doped with different chemicals, and wires go through it, and, gee whiz. A hammer is a physical thing. But humans just project onto physical things. We sort of bring the world in as if it was us. We’re like, oh, okay. If you look at our industry, it’s so powered by metaphor—and I’ll give you an example. Computers were a really big deal, but ultimately a relatively small industry. They were, like, kind of compared to, you know, other industries, they weren’t even that big a deal. Nothing compared to oil and gas in the early eighties. And then Xerox got together and was like, let’s make the future of computing. And they kind of did. And they came up with bum, bum bum: A metaphor. Literally the windowing metaphor. It was the desktop metaphor. I’m still, I’m looking at it right now as I’m talking to you. I can take a file that doesn’t exist, and I can drag it into a trash can that doesn’t exist.

Rich: Or put it in a folder.

Paul: Or put it in a folder. That’s right. And that actually turns out you can unlock trillions and trillions of dollars of value and change hundreds of millions of lives with those little metaphors.

Rich: Yes.

Paul: The more the metaphor makes sense to people, the more people look at it and they recognize it because it looks like and relates to something else that is part of their day-to-day life, the more likely you are to find success. And I think that that is one thing that’s kind of true. Like, I think you get the windowing interface and so on and so forth.

And I think we’re in this very interesting moment, and this is sort of what I’m driving towards, even though it’s taking me a minute to get there. We’re in this very interesting moment where we have, okay, people have been saying that computers are going to think and talk and be human, literally, since the 1950s, when they were a bunch of switches. Like, they just, they get mixed up. Everybody gets mixed up. Everybody thinks that you can kind of go from machine to human. We’re just, like, two steps away. Just the processors have to get a little bit faster, et cetera, et cetera.

So that’s been going on forever. Okay, maybe we will have artificial general intelligence one day. Maybe we won’t. Right now, I don’t think there is some obvious next step that’s going to get us there, but other people do.

Okay, so just put that aside for a sec. We’re in this really interesting moment where everybody’s focused on that kind of metaphor. They’re focused on like, ah, the human is a computer, right? And this piece sort of breaks down the reasons the human is not, like, the way our brains and, you know, our amygdala do not work like a large language model, and kind of all that stuff. It’s kind of what you’d expect.

But the metaphors that we’re picking right now, using all these new technologies, it feels very, very, like, unfounded. And I kind of want to just, here’s—let me just throw it out. We’ve chosen chat because it is the interface that makes the most sense for interacting with one of these weird databases. Why are we doing that? Where do we go from there? We have a strategy at Aboard where we’re going to use AI to create data. What are the other things that we’re going to do as a society? How would you unlock this? Because we’re hurting for metaphor. People want AI because it’s the new exciting thing. But then they’re like, they actually don’t have that problem. They have other problems.

Rich: Let’s talk about metaphor for a split-second here, because I think its ability to catch on is so powerful. Because imagine Xerox PARC had said, you know what? Let’s not replicate a digital desktop—

Paul: Yeah.

Rich: And folders and files. Let’s do something else. We have graphics, and they create a solar system on your screen, and they call files data units, and they just don’t bother with it all, right? And then what you end up with is this much steeper hill to climb for most people. Most will bail, right? And the truth is, you can see where metaphors have limited utility, meaning they just don’t bother with them. The command line, the terminal console, the scripts that you write to do certain things, those are highly specialized, and no one bothers. I mean, they call them packages. They do call them containers, sometimes containerization. They still use words, but those are not for the masses.

What happened with chat is that the whole world is talking to the rest of the world on their phones. And then they added a machine that seems to be just as chatty as Aunt Louise. And it’s like, whoa. And Aunt Louise seems to know everything. If she doesn’t know everything, she acts like she does.

I’ve said this many times on this podcast and in other podcasts. The progress of technology is the elimination of steps. If you can eliminate steps, then you’ve advanced. And AI was a huge, it was like a long jump leap in terms of steps, because I didn’t install anything. I didn’t have to learn some new thing to get an answer. It was using something that not only have we been conditioned to use, which is chat, for the last 30 years, but it’s using it in a way that’s—like it’s in WhatsApp. Like, WhatsApp, Facebook added, like, Jake Gyllenhaal’s face and named him Sid in WhatsApp, I don’t know if you know this.

Paul: Yeah.

Rich: They took celebrity faces and they named them something else, and next thing you know, I’m asking them questions.

Paul: Right.

Rich: This is what Facebook understands probably as well as anyone: like, how do I get even the person with a fifth-grade education, a kid, to understand how to use this thing? And so the power of that is immeasurable, really, in terms of its ability to take hold.

Paul: You want to know another thing that’s fascinating in our industry? I’m going to go in a slightly different direction, which is we’ve been talking about, like, okay, yeah, I think this is right. I think that, like, the windowing interface within the desktop, the desktop metaphor, the chat metaphor, they’re familiar. That’s all we talk about, metaphors, as if they have a magic quality.

Rich: Paul, just to punctuate, you know, what word nobody ever uses, like, in common language? Portal.

Paul: No.

Rich: You know how many years they tried to use the word portal? Because all the nerds kept reading books, sci-fi books, and they’re like, “I’m gonna give you a portal.” And everybody’s like, “Okay, thank you, portal man.” And no one knows what, and it died.

Paul: They did build that, there was that video portal between Dublin and New York City, and people kept exposing themselves. [laughter] And, like, the Dublin people kept holding up pictures of 9/11.

Rich: [overlapping] I mean, end the podcast right there.

Paul: [laughing] Yeah, it was—

Rich: That sums it up.

Paul: No, no, well, where you went is exactly where I was going, which is that there’s a whole bunch of technologies, when you are running a technology organization, that only engineers care about. And I almost want to name them as, like, anti-metaphorical. Right?

Rich: Yeah.

Paul: I’ll give you like, true hypertext, deep functional programming. You know, people like Ted Nelson, who was one of the earliest developers of hypertext, spent 30 years, 40 years telling everyone that they were doing it wrong, and that they weren’t doing it in the true way of real hypertext.

Rich: Always.

Paul: Right? And I was very, when I was in my twenties, I had a book by him, I was very enthralled. I just thought that was very exciting, that like, you know—

Rich: This exacting vision of the future.

Paul: We just need to organize the world along kind of library-science lines and use these data structures and then educate everyone with, like, a master’s-degree-level understanding of technology, and then we will have the utopia that digital life should bring. And I mean, you know, Alan Kay, who was one of the creators of the desktop metaphor, was always frustrated, because what Apple took away was the metaphor. And they didn’t bring object-oriented programming, which he thought would revolutionize the world.

And I have to say, I do think, this is the moment, reading this article, and I’m watching this cycle play out over and over again, which is, and I just want to, it doesn’t have a name, right? You could call it like a hype cycle or whatever, but it doesn’t really have a name, which is there’s this very big abstract idea that shows up. It’ll be like, large language models, okay? So, like, suddenly neural networks are growing up and you can do new things with them that you couldn’t before, because the computers are faster, and we figured out that 3D cards from Nvidia are really good at making databases of all human knowledge that are kind of lossy. Whoa. Who knew, right?

Rich: Mmm hmm.

Paul: And then Sam Altman is like, yeah, go ahead, launch it. And suddenly the world blows up, okay? And meanwhile, you’ve got all these nerds over here making LLM pronouncements, and then they get really obsessed with all these sort of, like, very specific ideas about artificial intelligence coming down the pike, and here we go. And sort of like, an orthodoxy emerges around what the technology is for. And then people come and are, like, so excited by this new sort of metaphor, they come and chat with it, right?

So you have the, like, the technology purists on one side, you have the interface on the other, and then they’re kind of at war for the next 30 years. [laughter] I think we’re going to be hearing from people about AGI and what true AI is and what you should do with this technology for the next 30 years. And then chat will get thrown, you know, it’ll be chat, and then it’ll be something else for the next 30 years, and this will be this big tension. And I don’t, we saw with the web, like, there’s one true web, and it has to be, one year it would be accessible, and the next year it’d be sort of data-driven or semantic or whatever.

Rich: Yeah, yeah.

Paul: And meanwhile, Google just kept adding, like, more video playing functionality to the browser.

Rich: Okay, so this is what I think you’re saying. I do appreciate the idealists. Like, a lot of the invention comes from them—we have some of it, by the way. We’re crazy. We have been picking away at the same scab for three years now at Aboard, because we have this perpetual frustration with how the world seems to work, and we’re pretty convinced that a lot needs to change, right?

And eventually, I think the tension is between idealism and, frankly, adoption and commercialism, or whatever you want to call it. Google became a company and then that company started looking at charts and graphs and kept saying, wow, people really like moving pictures. Let’s keep going with that, right? It starts as an idealistic sort of view of things. I don’t know the Google founders, like, when they were in the garage, but very often it’s like, there’s usually very lofty statements about it, right? Like, a computer on every desk, that was Bill Gates’s…that’s not a business goal.

Paul: Mmm hmm.

Rich: That is like, this is going to be something that is going to be in every house. Right?

Paul: Right.

Rich: That idealism, and what it represents, that’s the seed of it, right? And then what happens is commercial motivations, frankly, Steve Ballmer shows up. The tension is Steve Ballmer shows up and he’s like, “Welp, sales. It’s time for sales.”

Paul: [Steve Ballmer shouting voice] SALES!

Rich: Sales, right? And then that, now the things you do are less about reaching some utopian state and more about, frankly, conquering the textile industry and then conquering the auto industry with software, and then con—like, it becomes much more pragmatic, much more goal-driven, much more money-driven. You can say it out loud here. But there’s always a handful of people, and you see that, like, with some of the people at OpenAI. Like, Altman is effective because he’s got a foot in each world: one in commercial and one in idealism.

Paul: Mmm hmm.

Rich: He tends to interchange them. But, like, you always have to just see him with his team, and there’s always a couple of weirdos who can’t believe we’re burning the world down. Like, they’re just convinced that this is bigger and we don’t understand it. We’ve got to hang back a bit.

Paul: Right.

Rich: That is, to me, a very classic tension. That idealism versus that pragmatism, that’s frankly driven by business and money. Right? Like, that’s what it is. Like, Altman will tell you, it’s, you know, this is going to change how people live their lives. We’re going to find cures. And then he goes to, like, Qatar to get a trillion dollars so he can open a chip factory. Like, that’s… Eventually, it boils down: What’s real here? I don’t know. I snicker at the idealism.

Paul: I actually figured out what I was trying to go for. If you go into a big organization, a big organization with a lot of impact, you are going to find a group of people who truly are aligned with the mission. Like, they see the technology as its own thing, and they actually see it as truly representing what it means to be human. They think that they are building the future of humanity.

Rich: They feel that this is a loftier purpose.

Paul: So many of the best engineers I’ve worked with really do believe that.

Rich: 100%. I’ve hired a couple. I could not get them to hit a date, and they were brilliant, but they felt that they were on a mission, that there had to be purpose to what they were doing, and they couldn’t do anything else.

Paul: And they believe that the kind of, that the computer has a soul.

Rich: Yes.

Paul: They really do. So there’s always a group of those people, and you actually, you need them because they’re motivated and they’re brilliant, and they’ve let the metaphor of the technology take over their lives, even though that’s not the way the world or your brain actually works.

Rich: Absolutely, absolutely. It’s the mad scientist, right?

Paul: In the same organization, right? You have people who, and I think this is where I’ve ended up and where you’ve ended up, you look out and you talk to people, and you talk to individuals, and you talk to smart people at businesses, and you realize that that entire world that everybody lives in, that, that system of belief about technology and change, 1/10th of 1% of it can leak out into the general population. That’s all they can handle.

Rich: Yes.

Paul: And they can handle chatting with ChatGPT and asking it to write their essays. But when they see, when they look inside and they see the religion, they’re like, what the hell is that? I just wanted it to do math for me.

Rich: Yeah. I don’t know. I don’t want to brush off the idealists, because I don’t know if we’d be here without them. Even as ridiculous—

Paul: I was one!

Rich: [laughing] You’re not anymore?

Paul: I’m not, and I’ll tell you why. It’s not that—

Rich: You hang on to some things, Paul. I’ve heard you say, look, I want this to be a business. We’re talking about Aboard. But I do want this to be something that’s useful and has utility in positive ways in the world. I’ve heard you say that. Maybe that’s your negotiation, your negotiated statement. [laughing]

Paul: No, no, I believe you have to do good work or otherwise, why bother? Right?

Rich: Yeah. Yeah.

Paul: So there’s, there’s that. But it’s not that. I used to really believe, like, you know, the semantic web will change humanity, but what I believe now is a little different, which is my understanding of humanity was deeply off. Humans can’t handle much more than they’re handling today. The idea that you’re going to transform them with your vision of technology—and I’ve come close. Like, I got to tell millions of people about new technology over the course of my career.

But people go out and they look for tools that they can use. They’re actually not looking for a transformative experience. And so the only real option you have if you’re in a business is you have this one group of people who might be true believers, and then you have the entire world on the other side. And if you’re a leader, you’re the filter. You are the individual who is responsible for taking what the true believers do, disappointing them every day of their lives, and trying to empower a bunch of people who really can’t handle much more, so that you can give them a couple tools, a couple ideas, and then see what happens next.

Rich: I agree. I agree. I mean, this is, this is a classic tension. I’ve had many engineering leaders just look at me with sometimes disappointment, oftentimes disappointment, occasional disgust. And not because they thought what I was doing was wrong, but they saw its purpose as very limited.

Paul: You fraudulent sellout.

Rich: Exactly. Exactly.

Paul: That’s life. There it is.

Rich: Yeah. No, I think we should keep going with this conversation. I think a good pivot on it, by the way, is that the one thing we can take away from the idealists is they never want to do harm. They always see it as a very positive force in the world.

Paul: They’re great! We love them!

Rich: Yeah. And then what happens is, I think one of the sad things that’s happened with a lot of technologies is that they’re used to do harm because of commercial motivations. And that’s just a fact of life in the world.

Paul: Even the best-intentioned stuff, like, it’s almost impossible to win. But you can, you can, like, again, another conversation for another day. All right, thank you for indulging me when I go meta. I know we’re for the enterprise.

Rich: No, this was fun. This is very philosophical.

Paul: People in the enterprise like big thoughts, too.

Rich: Yeah, yeah. We’d love to hear your thoughts on this subject. Give your idealist a hug when you see them.

Paul: That’s right.

Rich: They need a hug. And hit us up at hello@aboard.com. Thanks for listening.

Paul: Oh, you know what you do with your idealist? You schedule an hour for them to talk about important new technologies and tools.

Rich: And just nod slowly.

Paul: Mmm hmm. Mmm hmm. Don’t look at your phone.

Rich: Don’t look at your phone.

Paul: Just listen.

Rich: Exactly.

Paul: All right, we’re just listening. Hello@aboard.com. Thank you, everyone.

Rich: Have a lovely week. Bye bye.
