Robots Take Over, Get Regulated

October 29, 2024  ·  20 min 22 sec

The Biden administration recently put out its first-ever National Security Memorandum on Artificial Intelligence, so on this week’s Reqless, Paul and Rich unpack the memo and discuss what it might mean for the U.S. government’s future attitudes towards AI. Plus: They talk about recent developments with Anthropic’s Claude that let it take direct control of your computer.

Show Notes

Transcript

Paul Ford: Hello, I’m Paul Ford.

Rich Ziade: And I’m Rich Ziade.

Paul: And you’re listening to Reqless—R-E-Q-L-E-S-S—the podcast about how AI is changing the world of software. Brought to you by Aboard.

[intro music]

Paul: All right, Rich, we’ll talk about Aboard in a minute. We can market later. There’s a lot going on in the world.

Rich: Nooooo! I want to market nooooow!

Paul: Nope, we’re gonna, we’re gonna market, we’re gonna market later.

Rich: Okay, fine.

Paul: [laughing] Okay. All right, so there’s two things I want to talk about today, and I’ll tell you what they are.

Rich: Okay.

Paul: One is some new and exciting features in the world of this wacky technology that is either destroying or saving the world.

Rich: Great. Looking forward to hearing about that.

Paul: And the other is that there’s a lot of new regulation and government stuff happening around this space.

Rich: Oh!

Paul: Biden just came out with a big announcement about how it’s all going to go with AI and what we’re going to do. So we should talk about that, because it seems like the regulatory frameworks are starting to show up.

Rich: Are you sure it was Biden, or…

Paul: Don’t do it. Don’t do it.

Rich: 3D graphics of Biden?

Paul: I’m already in a bad mood, so don’t even.

Rich: Okay, fine.

Paul: Okay. So first thing is, and I just think we need to talk about it. So Claude, you know Claude?

Rich: I do know Claude. This is one of the AI products. One of the big, fast-growing ones.

Paul: That’s right.

Rich: It’s part of Anthropic.

Paul: Anthropic. So it’s, you know, Anthropic is kind of like the slightly quieter, maybe a little nerdier, OpenAI. But it is big. It’s a beast.

Rich: It’s one of the big guys.

Paul: And so, you know, I’ve said this a billion times. I’ll say it again. One of the best places, if you are a technologist, to just kind of understand what the non-research bleeding edge is, the practitioner’s bleeding edge, is Simon Willison’s Weblog. SimonWillison.net.

Rich: He’s all over this stuff.

Paul: Well, he is a…

Rich: Prolific.

Paul: He’s an open-source person who’s like, wow, this is really allowing me to accelerate the process by which I make things.

Rich: Yes.

Paul: So he writes about it and summarizes it. Always very useful. In his October 23rd post, he quoted another writer, Alex Albert, on what you can do with Claude now. Alex Albert wrote this prompt, and it kind of works. I’m just going to read the prompt.

Rich: Okay.

Paul: Okay? Go to data.gov, find an interesting recent data set, and download it. Install sklearn with Bash tool. Write a Python file to split the data into train and test and make a classifier for it. You may need to inspect the data and/or iterate if this goes poorly at first, but don’t get discouraged. Come up with some way to visualize the results of your classifier in the browser. Okay? So what’s changed here is two things. I mean, first of all, that is a wacky thing to say to your computer.

Rich: Don’t be discouraged?

Paul: [laughing] Yes. So we’re there, right? We know a little bit about that.

Rich: Well, I think people, we’re still processing the fact that it matters.

Paul: That’s right. Because what that actually does is make the tokens that get generated more likely to correlate with the situations where somebody said, “Don’t get discouraged.”

Rich: Yeah.

Paul: Like, it’s not that you’re actually telling the computer not to get discouraged, you’re just asking it to emulate what a less-discouraged programmer would do.

Rich: Right.

Paul: So anyway, this kind of works. And the big difference here is, it’s out of the sandbox. It’s operating on virtual machines. It’s operating on the computer directly.
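
For anyone following along at home, here’s a rough sketch of the kind of Python script that prompt asks Claude to produce: split a downloaded dataset into train and test sets and fit a classifier. The file name, the “label” column, and the choice of model are all hypothetical, and it assumes the feature columns are already numeric.

```python
# A minimal sketch of the train/test-split-and-classify step from the prompt.
# "dataset.csv" and the "label" column are stand-ins for whatever Claude
# actually pulls down from data.gov; real data would likely need cleaning.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

df = pd.read_csv("dataset.csv")        # hypothetical downloaded file
X = df.drop(columns=["label"])         # hypothetical features (assumed numeric)
y = df["label"]                        # hypothetical target column

# Split the data into train and test sets, then fit a classifier.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)
clf = RandomForestClassifier(random_state=0)
clf.fit(X_train, y_train)

print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```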

Rich: That’s big. Yeah.

Paul: Okay, so that is taking it home, because these tools have been trying to become operating systems.

Rich: Yeah.

Paul: And now they’re saying, how about you let the large language model run your computer for you?

Rich: It’s not just in a code editor or IDE environment. It’s doing all sorts of things.

Paul: They’ve been very sandboxed. And to take a direct action with an AI usually would mean that you’d have it write some code.

Rich: Yeah.

Paul: And then you would go ahead and run the code.

Rich: Yes.

Paul: Okay? Or it would run in the browser, but it wouldn’t affect the world.

Rich: Right.

Paul: Here we are having side effects on the world with our AI.

Rich: It’s like a programmer using a browser to get all the information and stuff they need, writing scripts, and just going out and about.

Paul: There have been attempts to do this before and sort of fine lines, but for one of the very large LLM vendors to say, “This is now something that can talk to an operating system and do the things that you do with computers.”

Rich: Yeah.

Paul: And then for this to be the interface, where you write the paragraph. Because you could say, like he does, go to data.gov and find some interesting stuff. But you could also say, “Let’s go find some census data sets,” and sort of start to iterate that way.

Rich: Mmm hmm.

Paul: This is another inflection moment, and they’re coming very, very quickly, right?

Rich: They are.

Paul: But this moment means that the boundary between the legacy operating system and platform, the old kind of software where I go to my terminal, type in things, write a little code and so on, and these giant LLMs that you go talk to, that produce information and spit out code and so on, we have decided that even at the biggest level, we are going to blur the hell out of that boundary. And it used to be pretty clear where one started and the other ended. Windows 11 might have AI features that you can talk to, or it’ll help you find movies, or…

Rich: Agents or assistants or whatever, yeah.

Paul: That kind of stuff. But instead what we’re saying now is that, the statement that is being made by Anthropic is that they think the future is just you talking to the computer and then it does the computer stuff.

Rich: Well, I think what this is about is, look, I don’t think Anthropic has a plan. I don’t agree with you, but I may be wrong. You know as much as I do, and I know as much as you do about what Anthropic is thinking.

Paul: No, no, that’s my interpretation. I don’t think they have a grand vision of, like, replacing Microsoft.

Rich: Yeah, yeah. No, but I do think they’ve, they’re picking up on a particular posture that I think OpenAI established, which is, like, “I really have no idea. But guess what? We are going to close our eyes and jump.” And I think that’s what they’re doing here. I think, the truth is, what AI has taught us in the short time it’s been in our hands is that it’s always interesting. It may mess up, it may surprise us, but even when it messes up, it opens new pathways.

Paul: Yes.

Rich: And I think we’re seeing that, right? And I think what they’re saying is, you know what, “Take that back fence off and let the horses run out into the field.”

Paul: Yeah.

Rich: “Let’s see where they go.” And I think that’s what’s happening here. And I think the power of that is massive. Because the truth is, let me go back to the programmer who uses tools. The really good programmer has got, like, six tabs open next to VS Code, right? Like they’re not just coding. They’re always out there.

Paul: Yes.

Rich: Always. Either they’re trying to find a better way to write a function, or they’re looking for data from elsewhere to test an idea. They’re always out there.

Paul: That’s why there are a couple of concepts for this: yak shaving, nerd sniping. Programmers are inherently highly distractible. So sometimes you get somebody who is actually kind of looking for new frameworks and approaches to do their work, or they’re just having their brain saturated with new ways to do things, and it’s…

Rich: Always.

Paul: It’s always a struggle, right? This is the eternal struggle.

Rich: I gotta tell you. I once added a module, or a plugin to VS Code.

Paul: Yeah.

Rich: That was around, like, code beautifying.

Paul: Yeah.

Rich: There were, like, 600 color schemes in it. The amount of effort and energy someone, or a small team of people, put into just this little corner told me so much about the programmer. [laughing]

Paul: One of the things that people don’t know about programmers is that they’re often working with text, right?

Rich: Yes.

Paul: And it looks very basic and kind of clunky and old school.

Rich: Yeah.

Paul: Inside of their text editors, even their old ones, are fully fledged programming languages that directly manipulate the code as they write it.

Rich: They love it.

Paul: And this was before AI.

Rich: Yeah. Yeah.

Paul: This is like, this is the baseline. So I think, at a certain level, the reason we’re seeing so much acceptance, so many people leaning into the new world of code being written by AI, is because there’s actually a lot of context here.

Rich: Yeah.

Paul: Like, it started with autocomplete, like, in the 80s and so on and so like—

Rich: I mean, is that what AI is? Is it like just absolutely outer-space capability autocomplete? That’s essentially what AI is.

Paul: I gotta tell you, because you probably read it like I did, I put it in the newsletter. The Ethan Mollick kind of, like, summary.

Rich: Yeah, yeah, he mentions this.

Paul: So Ethan Mollick is a professor who focuses on this stuff, and you can see it if you go look at our newsletter. He wrote…

Rich: It’s a great primer.

Paul: You always need a new primer because the space is changing so fast, and it’s probably the best current primer for how all this stuff works. And really, he’s just like, “This is just super-duper autocomplete.”

Rich: Yeah.

Paul: Is it really just that? Kind of yeah, maybe not. But it’s a good place to start. Programmers live in autocomplete.
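
To make the “super-duper autocomplete” framing concrete, here’s a toy sketch: a tiny hand-built bigram table where the program greedily picks the most likely next word. Real models predict over tokens with vastly richer statistics, but the basic loop, predict the next thing, append it, repeat, is conceptually similar. Everything in the table is invented for illustration.

```python
# Toy "autocomplete": greedily extend a phrase using hand-made bigram counts.
from collections import Counter

bigram_counts = {
    "the": Counter({"cat": 3, "dog": 1}),
    "cat": Counter({"sat": 2, "ran": 1}),
    "sat": Counter({"down": 2}),
}

def complete(word, steps=3):
    """Repeatedly append the most likely next word, LLM-style but tiny."""
    out = [word]
    for _ in range(steps):
        options = bigram_counts.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])  # greedy: take the likeliest
    return " ".join(out)

print(complete("the"))  # -> "the cat sat down"
```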

Rich: I might be wrong, but I don’t think I am. Alan Cooper, who worked with Microsoft many, many years ago.

Paul: Yeah.

Rich: He came up with the innovation of studying an object model and then aiding the programmer as they’re coding, introspecting into the model and actually suggesting the methods and variables of an object and whatnot.

Paul: He was the first to bring that into the world with Visual Basic. A lot of that thinking goes back to Xerox PARC and Smalltalk and sort of like—
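
For illustration, here’s a bare-bones runtime version of that introspection idea in Python: look at a live object and suggest the attribute names matching what’s been typed so far. Editors like Visual Basic did this against a static object model rather than a running program, so treat this as a sketch of the concept, not the actual mechanism.

```python
# Sketch of introspection-driven completion: suggest members of an object
# whose names start with whatever the programmer has typed.
def suggest(obj, prefix):
    """Return attribute names on obj that begin with prefix."""
    return sorted(name for name in dir(obj) if name.startswith(prefix))

print(suggest("hello", "s"))  # ['split', 'splitlines', 'startswith', 'strip', 'swapcase']
print(suggest([], "ap"))      # ['append']
```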

Rich: Yeah, so I think that is the theme here. Programmers will get you the deliverable, but my God, the deliverable is a fleeting affair. Their actual love is the tools.

Paul: Yes.

Rich: They love the tools. They live in the tools. They love adding stuff to the tools. They’ll write tools for their tools.

Paul: That’s right.

Rich: They’ll spend time on their own tools.

Paul: Well, the great joke about all this is that there’s an immense promise that we will finally see enormous velocity, and all the code that we’ve been waiting on for decades will get written.

Rich: [laughing] Yeah.

Paul: And there is a secondary assumption that somehow engineers will change their entire way of working and suddenly stop writing tools and just start shipping code. But actually we could end up in this perfect recursive environment in which we write prompt editors that create more prompts, and so on. This can always happen.

Rich: Always, always. Right? And the truth is, first off, you’re seeing a lot of energy on programming tools, AI-powered tools for programmers, because it’s actually a massive space. A massive amount of money gets spent on these tools.

Paul: Well, it’s probably about 25 million people, some of the biggest companies in the world, and it is a multi-trillion dollar industry. And when we think about software, we think about Microsoft or Call of Duty or whatever, but most software is the HR system inside of the bank.

Rich: Tools.

Paul: Yes.

Rich: Tools. And I think, from an engineering perspective, they will always look towards empowerment through those tools. That’s kind of a natural thing.

Paul: All right, so Rich—

Rich: It’s very cool.

Paul: It’s just, here we are. I think it is an inflection point. I agree with you. I actually don’t think there’s some grand strategy here. As a longtime observer, I’m like, “Whoa! It’s in the real world now.”

Rich: Yeah.

Paul: Okay, so let me read you a little White House press release. We’ll switch, we’ll—

Rich: We’re switching topics?

Paul: We’re going to switch topics.

Rich: Okay. All right.

Paul: We’re going to, we’re going to—

Rich: [overlapping] Oh boy. This could go anywhere right now.

Paul: [overlapping] We’re fully engaged in the world. “Today,” and that is actually today as we’re recording, October 24th, “President Biden is issuing the first-ever National Security Memorandum on Artificial Intelligence. The premise is that advances at the frontier of AI will have significant implications for national security and foreign policy in the near future. The NSM builds on key steps,” blah blah, blah, blah, blah.

So we’re going to do two things. We’re going to “implement concrete and impactful steps to (1) ensure that the United States leads the world’s development of safe, secure, and trustworthy AI; (2) harness cutting-edge AI technologies to advance the U.S. Government’s national security mission.”

Now this is interesting to me because previous government AI stuff has been sort of like, “Hey, we gotta safeguard our information environment and we need to make sure people have access” and so on. There’s another thing going on which is because everyone’s brain is literally filled with screaming squirrels right now, the idea that the White House has been steadily engaged with AI kind of gets glossed over.

Rich: Yeah.

Paul: Because all we know is what we see in the paper, which is just screaming monkeys hitting each other with sticks.

Rich: Okay, just to unpack that analogy that you’re using right now, we are in the final days of an election, a big presidential election. Everyone’s sort of lost their minds. But the White House has to continue to function. AI is a massive component, I guess you could say, a massive issue that has to get addressed. I mean it’s just this thing that’s out there.

Paul: We’re in a very tricky moment, right? Because what you actually want from your federal government right now is for it to be completely engaged and aware of this radically changing information environment, and security. I mean, China, which is kind of our big competitor in the world, I’m not saying enemy.

Rich: Sure.

Paul: Is as far along in a lot of ways as we are. Baidu releases open-source LLMs, and they’re as good at certain things as what we have.

Rich: We have restrictions on the hardware they can have access to.

Paul: That’s a big part of this.

Rich: That’s one of the things here.

Paul: So I mean, this is what the government can do. This thing, this NSM, directs action to improve the security and diversity of chip supply chains.

Rich: Yeah.

Paul: It makes collection on our competitors’ operations against our AI sector a top-tier intelligence priority, et cetera. And it creates an AI Safety Institute. So theoretically what you have here is kind of what we say we want.

Rich: Yeah.

Paul: Which is for the federal government to think infrastructurally, supply-chain-oriented, security-oriented. But I think the thing that really sticks out to me about what’s happening today, and again, literally today, is they’re kind of flipping the switch and saying this belongs in the federal government, this is now a national competitive issue at a level that implies a lot of security risk, a lot of threat vectors and attacks. And we need to spend, you know, hundreds of millions or billions of dollars to shore up, protect, and keep an eye on this industry.

Rich: I think that’s the message. I read between the lines on messages like this. One is that this sort of sets the stage for, you know, it’s acknowledging an arms race.

Paul: Yes.

Rich: And it’s also acknowledging the U.S. is going to keep the latest version for itself. Right? Like, we sell old planes and old helicopters to third-world countries and whatnot because they’re strategically inferior, and we keep the fastest fighter planes for ourselves.

Paul: Yes.

Rich: I think that’s, that’s part one of this. Part two is actually kind of a coded message to other foreign actors around acknowledging that AI is a weapon that can cause damage.

Paul: Mmm hmm.

Rich: And that we are not only going to go on the offensive, but we’re also looking out for it. We’re still processing all of that. Right? Disinformation is a weapon.

Paul: Then you get to other bullet points. So like, we are going to demand that we use AI systems in service of national security, but unequivocally we can only do it in ways that align with democratic values. So that’s one where it’s like, hey, we are going to follow our own rules here. You know, we’re not going to produce propaganda with this thing, nominally. Or will we? Or we will, all under the rules that we already have. And then there’s a couple other bits here, but the one that really kind of threw me is, we’re going to direct agencies to propose streamlined procurement practices and ways to ease collaboration with non-traditional vendors.

Rich: Oh, really?

Paul: Now see, that is very interesting. And I mean, let’s talk about it, because the way it works now is a couple of giant consulting firms and parts of the military are the ones that can do tech for the government. If you and I had a great tool that we just thought was amazing for the U.S.

Rich: Yeah.

Paul: We don’t have a chance of getting the U.S. to use it, or even look at it.

Rich: In fact, we could have a whole episode on this. What they have is, like, deeply entrenched firms down in D.C. that do federal government work, and we’d have to go through them.

Paul: The best example of where this leads you was the healthcare.gov debacle.

Rich: Yes.

Paul: Where after healthcare.gov launched and failed completely during the Obama administration, they then stood up a whole lot of new organizations to use genuinely modern, sort of proactive software development.

Rich: They have to go around the old, archaic systems and bureaucracies that were in place. This is a huge, huge, huge issue in terms of the amount of government waste and spending that goes into IT. It is so deeply entrenched, so wasteful, actually. I’m glad to hear that bullet you just read. I’ll believe it when I see it.

Paul: Well, this is the thing. Well, first of all, like, we’re going to have to see where things go.

Rich: Exactly.

Paul: I don’t think that this proactive, engaged, and sort of pro-democratic procurement policy is something that would necessarily continue if Harris isn’t elected. So who knows? Fingers crossed. But it is interesting to see us try to create a relatively coherent policy, to see this as a global thing, like truly global.

Rich: Yeah.

Paul: We got China. We have people creating content. It has to align with our values and so on. And a desire to bring it closer and closer to the federal government, which is very, very interesting because this is going to look and feel like an absolute feeding frenzy to big consulting and existing government contractors.

Rich: Yeah.

Paul: They historically are very good at keeping their defensive perimeter up and not letting a company like Aboard, let’s say, anywhere near it.

Rich: Exactly. In fact, you’d want to go through them.

Paul: You’d have to. So if there’s any direct way to interact with the United States government around AI and actually engage, I think that that would be fascinating and amazing.

Rich: Yeah.

Paul: So here we are. I’m just sort of nailing down two newsworthy moments. We have a federal government, while we can keep it, that truly wants to understand and engage with this stuff and sees it as an absolute top priority at the highest level to create a working framework and structure that fits under our existing norms and allows us to move very quickly, and also to use it at the federal government level. The federal government is a huge consumer of software.

Rich: Absolutely.

Paul: And at the same time, we are taking these tools and we are letting them directly control our computers and change the environment in which we work. Launch servers.

Rich: I think a good way to put it is we’re giving the tools more and more freedom.

Paul: That is exactly right.

Rich: The movie will be interesting to watch.

Paul: And meanwhile, the government is saying we have to start to regulate the freedom that we’re allowing here.

Rich: Things are moving fast.

Paul: They’re moving exceptionally fast. And, you know, in a couple weeks, they are going to move one direction or the other. It’s gonna be a wild time.

Rich: Wild time. We are Aboard, at aboard.com. It is a really, really impressive platform that lets you stand up custom software through AI, solving problems that you never thought you could afford to solve, or solve as quickly as we can because of AI. Check it out at aboard.com. We’ve already got some exciting conversations happening. We’d love to have some more, so reach out.

Paul: The reality is we’re riding this wave with you, but I think we’re a little ahead of the wave, Rich. And we are—

Rich: We are.

Paul: We are here, as always, to help organizations figure out what to do with software and frankly, what to do with software at about probably 10-20% of what it used to cost.

Rich: Yes.

Paul: All right, so, hello@aboard.com, if you want us, check us out at aboard.com. Check out our newsletter, which I mostly write, and it’s pretty damn good. Rich writes it sometimes, too, and it’s also pretty damn good.

Rich: Thank you, Paul.

Paul: Literally anything else you need, just get in touch. We love you, and we’ll see you soon.

Rich: Have a great week. Bye.

[outro music]