Arushi Saxena: Can We Trust AI?
As people feed their whole lives into LLMs, how can they protect themselves? On this week’s Aboard Podcast, Paul and Rich are joined by Arushi Saxena, a trust and safety expert who’s worked everywhere from big tech to startups to the U.S. government. What does trust and safety mean in the AI age, both for individuals and for companies working with LLMs? Arushi also gives an overview of the trust and safety world, but sorry, folks: What happens at TrustCon stays at TrustCon.
Transcript
Paul: Hi, I’m Paul Ford.
Rich: And I’m Rich Ziade.
Paul: And this is The Aboard Podcast, the podcast about how AI is changing the world of software.
Rich: I got a rash on my elbow, but don’t worry, I asked ChatGPT exactly what to do.
Paul: Okay, that’s a terrible idea.
Rich: Claude?
Paul: Maybe. Okay, let’s play the theme song.
[intro music]
Paul: All right, Rich. We are joined—
Rich: By an elbow rash expert?
Paul: Yes, thank God, we were able to get one off the street.
Rich: No.
Paul: No!
Rich: We’re gonna talk about everybody—I think it’s a billion people at this point who are using AI.
Paul: Probably a billion and one today.
Rich: And they’re giving a lot of information to it. And we’re talking about just the prompt box. But there are many places and many—
Paul: You can trust Sam Altman.
Rich: Well, let’s talk about that. We actually have a trust and safety expert.
Paul: Those are two big subjects to be an expert in.
Rich: Yes. Arushi Saxena is joining us today.
Paul: Oh, there, she’s right there on the screen!
Rich: She is right there on the screen.
Paul: Very exciting.
Rich: Welcome, Arushi.
Arushi Saxena: Hey, Paul and Rich, thanks for having me.
Paul: So tell us, where are you and tell us what you do.
Arushi: Yeah, so I’m logging on from San Francisco, California, where a lot of AI development is happening, for better or for worse.
Paul: Mmm hmm.
Arushi: I work in the AI safety and security space. I’ve done a bit of public-sector work, and now I’m back in tech, helping one of these developers build more secure AI for professionals. But I’m here in my general and personal capacity.
Paul: Okay, I mean, let’s clear it up, because people will be—you work for Harvey, the big legal AI startup. We’re not going to talk about, like, their secret roadmap. We’re going to leave that alone. The PR people who are probably hovering right behind your chair are not going to get upset. But the point to make about that is you’re very hands-on. Like, you’re involved, you’re in an organization, and you’re defining policies and working with people to help them use AI more safely. So it’s not just abstract.
Rich: Well, first off, this profession, trust and safety AI expert, didn’t exist, like, 24 months ago. What does that job entail? What is the responsibility of someone in that role? It’s not even a job, it’s almost, like, a profession, I guess you could say, one that transcends wherever you’re working.
Arushi: Yeah, yeah, I mean, it’s—the profession’s kind of evolved naturally, as it should. And you kind of think about it in a few ways. One, people in roles like mine, our job is to make sure that there are controls in place, policies in place, processes in place that dictate that we are building AI safely, in an established way that follows certain agreed-upon standards, that we’re testing things before we launch them, and that we’re monitoring them once they are released out into the world. So that’s a very broad overview of what trust and safety can entail. But the role and the profession have probably been around for maybe, like, 10 or 15 years, since social media came to be.
Rich: Right. Even beyond AI, it’s always been something that has been in place. Does privacy fall into that role? User privacy, I guess?
Arushi: It does. I would say, we can be detailed and at every company it looks different, but yes, protecting user data and keeping things confidential is a big part of trust and safety.
Rich: Let’s zoom in on the dining room table of someone’s home. I’m guilty as charged. Everybody’s typing away and sharing all sorts of details about themselves—in many ways more profoundly than with other tools, because it’s so intent-driven. It could be a medical condition, it could be job advice, it could be all sorts of things.
Paul: I constantly feed Claude your personal information.
Rich: That’s not cool. [laughter] Yeah. “Tell me about this guy.”
Paul: Yeah, exactly—
Rich: “What is this guy about?”
Paul: Actually, I have noticed too, like, ChatGPT is really focused on increasing engagement, and so it keeps dropping personal details from past chats into the output.
Rich: So its memory seems to be getting longer.
Paul: Yeah. So it’s, like, “As somebody who really likes synthesizers, this will make sense to you because…”
Rich: It knows you.
Paul: Yeah, but it’s not cool.
Rich: Okay. You’re not impressed by that?
Paul: No.
Rich: You’re creeped out.
Paul: It’s creepy. It feels like it’s been stalking. It’s a robot. Anyway, that’s not—we’re not here to talk about my problems.
Rich: You’ve got the floor, Arushi. What do you tell that person typing away, using that thing all day long, what should they think about?
Arushi: Yeah. At a basic level, you probably do not want to be inputting what we call PII, personally identifiable information. Anything that you believe shouldn’t be leaked in someone else’s session, you probably don’t want in your own session. Because there’s a whole world out there that OWASP and other entities track—like, there are organizations in charge of mapping and tracking incidents and attacks. And it’s not just incidents and attacks; things can also leak by mistake, just because of the way the technology functions. So things that you don’t want other people to see? Do not input them into your LLM unless you have a really good reason to be doing so. And we can get into guardrails and filters and, like, this gets sophisticated.
Rich: Yeah.
Arushi: But that’s kind of the advice I give my mother and grandmother and grandfather.
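That advice can even be mechanized. Here is a minimal sketch in Python of scrubbing obvious PII from a prompt before it leaves your machine; the regex patterns and the scrub_pii helper are illustrative assumptions, not any product’s actual filter, and real trust-and-safety pipelines use dedicated PII-detection tooling.

```python
import re

# Illustrative patterns only; real PII detection is much more involved.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\(?\d{3}\)?[\s.-]\d{3}[\s.-]\d{4}\b"),
}

def scrub_pii(text: str) -> str:
    """Replace each match with a labeled placeholder before prompting."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "I'm Jane (jane@example.com, SSN 123-45-6789). Should I worry?"
print(scrub_pii(prompt))
# I'm Jane ([EMAIL REDACTED], SSN [SSN REDACTED]). Should I worry?
```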
Paul: I mean, let’s start in the other direction. You can’t control them. [laughter] But, like, should there be an age limit? Should we say, like, “Hey, kids should not be on this, at least unsupervised, until age X”? Like, would you ever advocate for a parameter like that? Not, like, at a government level, but as advice?
Arushi: You know, in theory, I do. If your brain can’t rationalize how to use the interwebs, you shouldn’t be using an LLM. But at the same time, there’s this education component which we have to talk about as a society and people are using it to learn.
Paul: Can I tell you, my daughter was having a geometry struggle. It was a bad one.
Arushi: Yeah.
Paul: And I was like, “Look, I’m going to pull out the big guns.” Took a picture of her laptop, uploaded it to ChatGPT, and it very, very confidently produced absolutely the wrong answer. Like, with a full proof.
Rich: Well, there’s that. That’s the lesson learned with that stuff.
Paul: I mean, that’s it, and she watched me do it, and I was just, like, so disappointed. But, like, conveying that to a 12- or 13-year-old, that level of ambiguity—I wasn’t surprised it failed, I know how it fails, but she’s not used to that. She’s not used to systems just not working. You know, Netflix plays the movie and YouTube does stuff for you.
Rich: Yeah.
Paul: And Google Docs opens up, and so suddenly you have this thing that very confidently is, like, it’s a 42 degree angle.
Rich: Yeah.
Paul: And it’s wrong.
Rich: I mean, one of the most convenient aspects of AI is that I can give it a PDF, because I got a scary letter in the mail and I can just scan it, upload it, and say, “Should I be worried about this?” Bad idea?
Arushi: Yeah, that’s interesting. [laughter]
Paul: Yeah.
Rich: Big exhale.
Paul: Yeah.
Arushi: Yeah, what’s in the PDF? But also, we all do it. I do it every week.
Paul: Right.
Rich: That’s the thing, right? I mean, here we are.
Paul: You’re in this world. What are the big scary stories that everybody points back to? Like, recently, I think it was a big consulting firm. I can’t remember exactly which one; I think it started with a D. They got in big trouble in Australia because they generated a bunch of reports with AI and charged hundreds of thousands of dollars.
Rich: Mmm.
Paul: And they had to give the money back. And it was a nationally embarrassing story.
Rich: Mmm. That’s generated. That’s not upload.
Paul: But that’s a horror story, right?
Rich: Yeah.
Paul: Like, what are the horror stories that people kind of come back to in the world of trust and safety? Like, what are you looking at as like, boy, we have to avoid that situation?
Arushi: Yeah. I used to think a lot about hallucinations and data leaks, but I feel like the research and the tactics there are getting better. Like, there are tons of smart people working on making hallucination rates lower and grounding output in verifiable fact. It’s getting better. It’s not solved. The human part is what I actually worry a lot about. And the over-reliance concept: as we use these tools, as we upload more PDFs, we go to GPT as our friend. Like, the other day my husband was like, “I was talking to GPT in the car.” [laughing] Like, we’re starting to rely…
Rich: Oh yeah, it’s a trusted friend.
Arushi: Exactly.
Rich: So in this con—I mean, look, you said it before, the gravitational pull of this stuff is just too strong, it’s too convenient. I don’t feel like reading an 11-page letter I got from the government or the city council or whatever, and I want you to summarize it for me. Everybody’s doing it. Do these platforms, and obviously you can’t speak for all of them, do they have any incentive to say, “Okay, when someone gives me a file, I’m gonna answer this question, sit on it for a day, and then obliterate it”? I don’t think 90%, 95% of users are asking that question. It’s like, “Oh, I’m sure they’re gonna delete this as soon as they give me the answer.” But what’s going on? Like, what is happening to that trail of communication and artifacts that’s flowing around?
Arushi: Okay, I like this question because you’re kind of going from the safety side to the security and control side, which is, like, a very kind of verifiable—like, there’s a process in place and there are concepts in place.
Rich: Yeah.
Arushi: And so we in the industry think about this a lot. And there’s an idea of zero data retention. When you have a contract between a provider and a user, whatever the user uploads, you need to process that at the time you receive it.
Rich: Quickly, yeah.
Arushi: Very quickly and accurately to deliver results.
Rich: Yeah.
Arushi: But after the fact, you can have contractual mechanisms in place where that provider or processor is not allowed to retain your data.
Rich: Hmm.
Arushi: A lot of companies request ZDR or very strict retention policies of their AI providers.
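ZDR itself is a contractual guarantee from the provider, but the habit it encodes can be approximated from the client side. Here is a minimal sketch of the “process now, delete immediately” pattern; the base URL, endpoints, and field names are hypothetical stand-ins, not any real provider’s API.

```python
import requests

API = "https://api.example-llm.com/v1"  # hypothetical provider
HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

def summarize_then_delete(path: str) -> str:
    # 1. Upload the document for processing.
    with open(path, "rb") as f:
        file_id = requests.post(f"{API}/files", headers=HEADERS,
                                files={"file": f}).json()["id"]
    try:
        # 2. Ask the question while the file exists on the provider's side.
        return requests.post(
            f"{API}/responses", headers=HEADERS,
            json={"file_id": file_id,
                  "prompt": "Summarize this letter. Should I be worried?"},
        ).json()["output"]
    finally:
        # 3. Delete the upload right away, even if the request above failed.
        requests.delete(f"{API}/files/{file_id}", headers=HEADERS)

print(summarize_then_delete("scary_letter.pdf"))
```

The contractual side, the provider promising not to retain or train on what you send, is the part you cannot code around; this only covers what a careful client can do.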
Paul: So that’s happening with very large companies interacting with very large companies that provide services. But consumers can’t really ask for that, right? Like, when you’re advising your mom—or are there widgets I don’t know about where I could keep…?
Arushi: I think you can actually.
Paul: Hmm.
Arushi: If you are an AI-literate consumer and paying attention to your settings, which not a lot of us do, you should totally take time to review your settings as a user of all these tools. And there are options where you can have it delete your conversations and uploads, and settings on whether they can train on your data or not. Like, there are granular user settings, and providers are getting better.
Rich: Let’s zoom in on that. I mean, it’s sitting there on a hard drive, and then there’s, “I’m gonna take the thing that you just sent up to be summarized and train on it.” For the laypeople in the audience, what do you mean by train on it?
Arushi: It means that they are taking the data that you’ve provided or uploaded and learning from it, so that the model gets smarter and can use that context to make its responses better for everyone in the future.
Rich: So it’s not just a matter of, “Please delete it.” It’s also a matter of, “Just answer my question, don’t do anything else with it, and delete it.”
Paul: I’m still back at the place where you have to tell people to go look at their settings, which…
Rich: I love settings.
Paul: I know.
Rich: I love going to settings. But we are not the majority.
Paul: I told ChatGPT you like settings.
Rich: Stop talking to ChatGPT about me.
Paul: I love to tell everybody.
Rich: It’s creepy.
Paul: Yeah, it’s good.
Rich: It’s weird. Yeah, I mean, you nailed it. No one’s going to go into those settings. You know, there’s the old saying, I’ve said it on this podcast a few times, “The streetlight goes up after the accident.” We tend to learn our lessons after bad things happen. Are we heading towards a world—and I feel like Europe is always more inclined to move more quickly around regulation, let’s move to how do we get people—
Paul: And much more slowly about everything else.
Rich: Heyo!
Paul: But boy, they love—no, they love to regulate.
Rich: They love to regulate.
Paul: I say this as a pro-regulation person.
Rich: Yeah.
Paul: They have an unbelievable sort of regulation manufacturing—
Rich: That’s the push-pull tension, right? You need regulation because harm can be caused, but you overregulate and innovation is sort of stifled.
Paul: We don’t talk a lot about Mistral on this program.
Rich: Yeah, yeah.
Paul: Anyway, regardless.
Rich: Are we heading towards that? It feels like we’re not at the moment. Relying on users and saying, “Go to your settings”? That’s a no-go. You could argue there are obligations these companies should have by default. What is the default setting? Right? Like, you could say it’s never going to train unless you opt in, rather than it trains unless you opt out.
Paul: I can actually frame this question a little bit differently. I feel that we’re not in a great moment for a national U.S. regulatory framework in the same way that Europe is moving. Right? It just seems like the current administration is very, very pro-AI.
Rich: Pro-innovate. Go…
Paul: Pro-Sam Altman. But I do see state-level and other kinds of organizational requirements emerging, and you’re probably seeing and understanding more of that. And I’m wondering if we’re entering a future where you have different state-level reactions to AI and different community plans, and how we’re actually going to navigate that, because that sounds really complicated as opposed to, like, one nice federal framework. Have you run across anything like that, or is it just too early?
Arushi: Your summary so far resonates, but I think we’d be surprised to see how many state-level proposals there are. There are legislative proposals still moving, even if we don’t have, like, a national AI framework passing anytime soon. I, as well as my peers, have been surprised to see the flurry of bills being submitted and proposed at various levels.
Rich: Mmm.
Arushi: Going back to the state level, not even national: California passed SB, I think, 53, which imposes certain transparency requirements on AI model providers, and a few other requirements that promote more transparency in the industry and help with incident response. I expect to see progress being made. But what I’m worried about is a patchwork across states, kind of what happened with privacy, where you have a patchwork of privacy laws: Colorado, California, Massachusetts. But there’s nothing unified yet.
Paul: That’s what I’m wondering, too. I feel we’re headed towards that.
Rich: It’s a ways off. And also, transparency is one thing, and requiring companies to not train on data by default is kind of another. Right? There is already, I’m guessing, a very robust lobbying effort to protect the innovation path for all this AI investment. And so it’s going to be a while, it feels like. Is it too late?
Arushi: No, I don’t think it’s too late. And also, I would challenge that and say that it’s smart to assume that they’re training by default, but a lot aren’t, because they’re getting in trouble for it. They’ve already gotten in trouble for it. So when OpenAI launched Atlas, their agentic browser, by default, I don’t think it trains on your data. I mean, they definitely talked about training in their early press release. So a lot of companies want to challenge the assumption that they’re training by default. And they’re like, “Actually, no, you can opt in, you don’t have to opt out.”
Paul: I mean, this training on your chats going into a prompt box, I get. Like, there is a little bit of an implicit deal there that you’re giving them your stuff and you’re talking to them. But training on your entire browsing experience as you look at Gmail and stuff, that is an absolute danger zone. You’re right. And I don’t think people are thinking that way—
Rich: Ah, so you’re saying, if I read my mail in the Atlas browser—should we talk about AI browsers? Because I think that is front—
Paul: They see every page you look at—they see Amazon, you browse personal ads, whatever, they can just eat that.
Rich: Yeah, personal emails.
Paul: Yeah, exactly. So what we’re saying here is, like, that is not going to be a given. Like, even, even OpenAI is like, “Hey, hold on. Okay, we’re not going to do that.”
Rich: Yeah.
Paul: We’re going to go—they’ll, they’ll keep an eye on what you, when you actually interact with OpenAI, they’re going to do stuff with that. But, like—
Rich: Right.
Paul: If you use their browser, it’s not a good assumption that they can just help themselves to every piece of information that you see on the internet.
Rich: Yeah.
Paul: Give us some advice. Okay, so here we are: we’re a mature-ish organization that is still also a little bit of a startup. We have a product where you type in the box and it talks to an LLM, and it produces code, it produces a scaffolding, and it generates an application. And so we feel pretty buttoned-up, because that experience is either going to happen under the guidance of a professional who’s helping you build your application, or you’re going to use our web demo.
Rich: We also purge.
Paul: Yeah, we purge. We erase apps after, I think it’s, like, a month or six weeks. So we try not to leave a long trail around for people. Things are anonymized, so on and so forth. But, like, as an org that is standing up an LLM-adjacent capability: we don’t have our own LLM, we’re not collecting data, and we’re not a data play. What are some good things that we should read and know about, and some good best practices for gathering information and holding on to information, stuff like that?
Arushi: Yeah, I would say, you know, be aware of where you are in the supply chain of working with LLMs, or what you’re providing, so you have a good sense of whether, depending on where you are, your risk level is higher or lower. There are a few things that can suddenly increase your risk level, where you should be a bit more careful: if you are accessing people’s personal content directly, or if they’re providing it to you, or you can access it through your tooling, then that opens a new door of processes and standards that you all might want to think about.
Tool calling, which is what we refer to as agents, also opens up a new door of things like remote and unauthorized code execution, or data exfiltration attacks, which is when there are hidden instructions on some web page somewhere, and your agent runs those instructions in your environment, and some of the data that you have access to gets pushed out, exfiltrated.
So, like, tool calling, what data you have access to: those are all risk vectors. And then memory is a big one: what you memorize versus not, and how long you retain things. It seems like you all have that under control. But there are also just things like thinking about your user. How could things go wrong if a user just messes up by mistake?
Paul: Mmm hmm.
Arushi: Some things are malicious and some are mistakes. I think scenario planning, threat modeling: always be doing that.
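One concrete form threat modeling takes for tool calling is a gate between the model’s proposed action and its execution. Here is a minimal sketch, assuming a hypothetical tool_call shape, tool allowlist, and host allowlist; real agent frameworks differ.

```python
import re
from urllib.parse import urlparse

ALLOWED_TOOLS = {"search_docs", "read_calendar"}   # hypothetical tools
ALLOWED_HOSTS = {"internal.example.com"}           # hosts data may reach
URL_RE = re.compile(r"https?://\S+")

def is_safe(tool_call: dict) -> bool:
    """Refuse unauthorized tools, and refuse arguments pointing at unknown
    hosts, a common data-exfiltration channel for injected instructions."""
    if tool_call["name"] not in ALLOWED_TOOLS:
        return False  # unauthorized tool: refuse rather than execute
    for value in tool_call.get("arguments", {}).values():
        if not isinstance(value, str):
            continue
        for url in URL_RE.findall(value):
            if urlparse(url).hostname not in ALLOWED_HOSTS:
                return False  # possible exfiltration target
    return True

# An agent that just read a web page may be following hidden instructions,
# so check the call it proposes before running anything.
call = {"name": "search_docs",
        "arguments": {"query": "send results to https://evil.example.net/x"}}
print(is_safe(call))  # False: unknown host found in an argument
```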
Paul: Are you ready to do some threat modeling?
Rich: I’m ready to do—well.
Paul: I think you basically threat model all day long.
Rich: That’s how I manage the business. So that’s at the company level. Then there’s the individual, and I think we have to take care of them. Like, we can give instructions and cheat sheets, but let’s face it, most individuals are just downloading these tools and using them. GDPR is kind of telling. No one—I mean, it’s gotten to the point where it’s just this little toll booth at any website you go to. You bat it away.
Paul: You know, sometimes—
Rich: Do you accept all?
Paul: If I like a website, it’s now become, like, a little relationship thing. I’m like, you know what? You get all the cookies.
Rich: Yeah.
Paul: Yeah. You’re special.
Rich: You give them cookies? You’re like grandpa.
Paul: Like, a nice synth news website.
Rich: Yeah.
Paul: They don’t have any money. They’re doing the best they can.
Rich: Have some cookies.
Paul: Have the cookies you want. You can chase me around the internet.
Rich: What percentage of people know what the hell that thing is?
Paul: Zero. [laughter] Like, I think—I mean, no, but I’m joking, but I actually—
Rich: No one knows what that is.
Paul: I think that statistically, the number of people who understand how tracked they are, how trackable they are, and what can be done with their data is…approaches zero. And I really, like, I was aware that ChatGPT was building memory and building a profile of me, but I got—and I’m obviously very informed on this stuff, and have implemented cookie systems on websites.
Rich: Yeah. You understand how it works.
Paul: I understand how it works. But it really was shocking to see a narrative of my life emerge in an answer to an unrelated question. It just blew my mind. Because I’m like, “Oh—” Seeing it in action, we’re so used to it being kind of passive, where they’re like, “It looks like you want a refrigerator. I’m going to show you 50, everywhere you go.” And you get used to that.
Rich: Yeah.
Paul: You’re like, “All right, that’s data. It knows I want a fridge and it shows me fridges.”
Rich: Yeah.
Paul: But this is like, “I’m going to tell you a story about yourself.”
Rich: Yeah.
Paul: You know, “You asked about, I don’t know, politics, but I’m going to tell you about your hobbies in the middle of my politics answer.” Woof!
Rich: Yeah.
Paul: I think a lot of people are probably headed for that experience, and I wonder if that doesn’t make people a little more aware of how fungible their privacy is in this world.
Rich: The sad news is, or good news, depending on where you are in your life, everyone’s brain is essentially tapioca pudding at this point, and no one is thinking about anything. [laughter] Let’s get a reaction to that.
Arushi: So I’ll point out the viral trend that you may not have realized you just pointed to, which is people asking these LLMs, “What do you know about me that I don’t know about myself?”
Rich: Oh, I don’t know—
Paul: Is that a thing?
Arushi: That’s a thing. I haven’t done it because I’m too scared to do it. And I work in the industry. [laughter] People around here are doing that and they come back and they’re like, “Whoa, I didn’t realize I had those tendencies, or that I was worried about that, or that I think a lot about that.” But the profiling is becoming explicit and requested. It is crazy.
Rich: Oh, wait, it’s like psychoanalysis?
Arushi: Yeah.
Paul: Oh, we’re doing this. Yeah, we’re doing this live.
Rich: Oh…
Paul: [laughing] Yeah.
Rich: I’ll paste, copy-paste, right over to you, Paul. And vice versa.
Paul: I’ll put it in the ChatGPT.
Rich: I have one more question around Google. Google is, I feel like, a different animal. I have known Google and Google has known me—
Paul: Personally?
Rich: Pretty much.
Paul: Yeah.
Rich: For 20 years.
Paul: Sure.
Rich: Between my personal Gmail account and my search, and I use Chrome, so there, that’s the end of that.
Paul: It’s more like 26.
Rich: Yeah.
Paul: Nice try.
Rich: Yeah. They’re gonna show up. They’re starting to show up. Their tools are already impressive. They’re just not dominating yet. The context they have, and the through-line of my whole history with them, now feeding into this capability, I feel like it’s just gonna happen casually, and it’s actually something else, because they already have my contacts. We could talk about memory and how long it remembers you and, you know, starts to paint a picture. Google has known me for many years, and they’re gonna capitalize on it, because it’s competitive right now and it’s intense and it’s already really good. Like, we’re not even in the middle of the game yet.
Paul: Great advertisement for Google. Where are you headed?
Rich: Is that cat out—like, we could sit here and talk about how we’re gonna, like, limit these things and there’s laws coming and whatnot, but Google already has it all.
Arushi: Google does already have it all… [laughter] But I think…
Rich: That’s a great response.
Paul: Yeah, it really was.
Rich: Well, that’s that.
Paul: Thanks, Arushi!
Arushi: We can’t end there.
Rich: No, we got to end optimistically here, Arushi. Keep going.
Arushi: Well, I think it’s also about meeting you at different surfaces, which, now that I think about it, a lot of these companies, Meta, Google, are already doing. But are they providing us value? If they’re providing us value and they’re protecting us along the way, then maybe we’ll end up in a better place.
Paul: Oh, you sound, you’re really doubling down there. [laughter]
Arushi: I’m like, in trust and safety, we think about all the bad so often, like, every day (that’s what my job is) that we forget it’s a cost-benefit analysis, and we do want the benefits to outweigh the costs.
Paul: Where do trust and safety people hang out? Like, is there like, a big conference in Vegas where you all just are like, let’s let it go. We’re having a good time.
Rich: Ah, what happens in Vegas…
Arushi: TrustCon. We have TrustCon.
Rich: TrustCon.
Paul: What happens at TrustCon?
Rich: No, nobody gets asked that.
Arushi: I’ll invite you next year. [laughing]
Paul: Oh, we’re coming. Yeah. No, you’re right. It’s all a secret.
Rich: It’s TrustCon.
Paul: Okay, so there is TrustCon. And are there kind of publications or websites that people check out, or Discords, or, like, where do people find each other in this industry?
Arushi: Yeah, I’d say there are two houses. One is the Trust and Safety Professional Association. The other is the Integrity Institute. And then there’s also, like, All Tech is Human.
Paul: Boy, standards are high there. That is, I would never, that’s—woof!
Rich: You’re not getting in there.
Paul: Imagine naming something—like, you can’t do anything after that.
Rich: No.
Arushi: And then All Tech is Human. These are all people that really care about, and are paid for, or want to be paid for, making the internet safer and making digital life safer. And we kind of all find each other there. And I think the industry is growing, but we’re still a pretty small part of the internet and of the workforce. Yeah.
Rich: I do have a last question, even though I said my last question was the last question. Does AI distinguish young people? You go to ChatGPT, you download it, does it say, “Hey, you better be 13,” so it knows? It’s not doing any of that.
Arushi: They’ve built controls to infer age, and then for the signup flow, I’m not sure what the latest is.
Rich: But a wily 12 year old?
Paul: Sure.
Rich: Can go buck wild.
Paul: My son, my son can access YouTube through Google Calendar.
Rich: [laughing] That’s a gold star for that.
Paul: Oh, it’s, it’s the amount—
Rich: Paul’s son. Paul’s trying to block YouTube for him, and his son found the browser inside of Google Calendar.
Paul: Because, and it just sort of cuts—
Rich: You got to give him that one.
Paul: Do you know how—
Arushi: Oh, that’s smart. [laughing]
Paul: We talked the other day. We had a conversation as he was going to bed (he’s 14) about the levels of security in the house. And we identified, like, 13 or 14 different perimeters.
Rich: Yeah.
Paul: And he’s just like, “Yeah, you were worried if I was doing that, but you’ve actually created such a set of impenetrable shells that I’ve given up on that.”
Rich: Right. He’s looking elsewhere.
Paul: There’s a laptop that he hasn’t turned on in three months because—
Rich: Sure. It’s all blocked.
Paul: “It’s just done for me.”
Rich: Yeah, yeah, yeah. We glossed over it. That’s scary also, frankly. You know, there are all sorts of controls. I use Screen Time, limiting by age and all that. And meanwhile, they can stroll into—because these chat tools, I feel like, they were such an explosion, it was so quick, that I think now everyone’s sort of buttoning it up. But it got out there in such an extreme way that it just got into everyone’s hands, and it’s kind of now become normal, and it’s only been a couple of years.
Paul: I’m going to do something that’s counter to what I normally would ask, because I tend to be the softy in the shop, and I like trust and safety and accessibility and all the good, healthy web things. I think they’re important. So does Rich. You know, our product is accessible and so on. But let’s say I was a serious capitalist and I was in here and I was like, “Ugh, God, here we go.” Make a good business case. Why do I need this? Why do I need trust and safety in my work? You know, “I’m 20 people, I’m building it. My app is growing. Ugh, you’re going to—”
Rich: “I don’t want to deal.”
Paul: “You’re going to slow me down.”
Rich: Yeah.
Arushi: Yeah. Because if people feel very tangible harm or see tangible harm from your product, they’re not going to come back. They’re going to stop using it, or use it less. I mean, we learned this in social media. We had a very formal term for it: prosocial engagement. Prosocial behavior. Similarly, with AI, you’re only going to pay money for something you see value in. So as soon as you start seeing harm, feeling harm, you’re not going to use it. You’re not going to be sticky. And that’s, like, the most foundational thing: how do you promote sticky, positive engagement? So that’s where trust and safety just comes in foundationally.
Rich: I like it.
Paul: I dig it.
Rich: An argument beyond just the ethical.
Paul: No, I mean, I think—
Rich: Makes sense. You could wreck your brand.
Paul: People need both.
Rich: Yeah.
Paul: They need—but they need to understand it’s not all just, like, managing liability risk. And it’s, but it’s also got to be positive. Prosocial is a lovely term. That’s good. All right, well, let’s go be prosocial somewhere.
Rich: Arushi, great conversation. We don’t often talk about this stuff. We’re always talking about, go, go, go. Let’s make the world different with AI. But this is important.
Paul: If people would like to get in touch or reach out or find out more, where can they go?
Arushi: Yeah, they can find me on Twitter. Find me on LinkedIn. Arushi Saxena.
Paul: Awesome. Well, thank you. This was super useful.
Arushi: Thank you for having me. It’s fun to chat with you all. I hope the rash, the elbow rash gets better. [laughing]
Rich: Yeah, we’re gonna… it recommended a cream of some sort. Maybe put it on.
Paul: Maybe you should try…
Rich: Blind trust here.
Arushi: Blind trust.
Paul: You should try Claude.
Rich: [laughing] Just get an, get a second opinion.
Paul: Get a second opinion.
Rich: Fair enough. Fair enough. Arushi, thanks so much.
Arushi: Thanks, Richard and Paul. Talk to you later.
Rich: Take care.
[interstitial music]
Paul: All right. Good, thoughtful conversation.
Rich: Very interesting. To this day, you ask the average person on the street, “Did you know that your third-party data at Facebook is being sold off to Amazon so it can send you slipper ads?” They’re surprised by it. Today! It’s not common knowledge that there is an entire marketplace of your behavioral data. It’s just not known.
Paul: One of the analogies, it’s essentially like you are the star of a spy movie in which an entire global satellite apparatus—
Rich: Is tracking you.
Paul: Is tracking you.
Rich: Yeah.
Paul: You are actually, you are Will Smith in Enemy of the State. You are being observed, like in a spy film. And they are doing absolutely everything they can. It’s like 24. Like, all those shows where somebody’s in a command center?
Rich: Yeah.
Paul: Going, “He’s moving north!”
Rich: Yes.
Paul: That’s you.
Rich: Yes.
Paul: You are the terrorist, as far as they can tell.
Rich: Now look, then that person can decide, “I’m not that interesting. I don’t care.” That’s okay.
Paul: Well, let’s be clear.
Rich: No one is asked.
Paul: We all do.
Rich: We all do.
Paul: We all do.
Rich: Implicitly—
Paul: I am filmed, and not just because I’m awesome, but literally walking down the street in New York City.
Rich: You’re a famous guy.
Paul: You’re famous just by walking down the street. Because there are probably—when we went out to lunch, probably about 500 pictures were taken of us.
Rich: This is real. And when crimes get committed, they immediately go to those cameras.
Paul: You know something wild? A package was supposed to be delivered here and it wasn’t.
Rich: Okay.
Paul: Okay? And I talked to our head of ops and she went, “Oh, no worries. I’ll talk to the building.” Because they can see if the postman came or not.
Rich: Yeah.
Paul: And I want you to just play that out. That means that every single person who comes and goes from this building is fully recorded and that is retained forever.
Rich: Yes. I would say a third of people know that. 3% of people know exactly how the information on your phone and on the internet is traveling around.
Paul: I mean, it’s also, it’s one of the most complicated things, like, the diagrams of how all those systems work together?
Rich: It’s incredible.
Paul: Cover a wall.
Rich: I think the thing that’s different with AI is that, yeah, you saw me go to Whole Foods on the map. I think people don’t really realize. They really feel like they are alone. It’s them and the ChatGPT.
Paul: It can draw a conclusion.
Rich: It’s not just that. There is this aura of privacy. Like, the lights are out, it’s just me and my laptop, and I’m talking about, like, pretty personal things. And there’s just this assumption that there’s, you know, patient-doctor privilege or something.
Paul: I gotta tell you, this is what worries me. This entire environment, the entire internet environment was built in an exceptionally high-trust, civic society model of how to live.
Rich: In 1873.
Paul: No, no, I mean, like, literally, Google comes around and it’s like, we’re going to have an open web and people are going to search it and people are going to be their own publishers.
Rich: Right.
Paul: And this is like an annex to a democracy.
Rich: Yes.
Paul: And now we have a very different kind of culture. We talked on a recent podcast about the big WIRED article about tech going hard-right. You’ve got organizations like Palantir that work with policing organizations and kind of, like, import their data and make it tidy. And now you’ve got this context engine that can draw connections that weren’t there before.
Rich: Oh, yeah.
Paul: Between all of these different behaviors, and I don’t know how many degrees of separation I am from someone who committed a crime. I really don’t.
Rich: Yeah.
Paul: I have a very boring, tidy lifestyle.
Rich: Yeah.
Paul: But, like, who knows? And we’re getting weird as a society. And we’re not the only ones getting weird. China’s already real weird. And Lebanon’s interesting, because Lebanon just doesn’t care. It’s like, “Yeah, you’re going to do some crimes.”
Rich: They’ve got bigger things to worry about.
Paul: Yes. And so what I see happening, first of all, this is not me saying, like, “Put the tinfoil on your head.”
Rich: No, no.
Paul: I think it’s very complicated and I think there’s lots of overlapping lines here. But you know what gets more and more attractive every day?
Rich: Hmm?
Paul: Group chats.
Rich: Okay.
Paul: Signal. Little tiny spaces with a couple of people?
Rich: Yup.
Paul: That don’t have an LLM integrated.
Rich: Yeah.
Paul: Don’t report back anywhere.
Rich: Fully encrypted.
Paul: Expire after a few days.
Rich: Right.
Paul: And it’s not that I think they’re watching. I know they’re watching. And they’re aggregating and they don’t care about me. But being in that giant database in a time of giant social tumult just feels weird. I just don’t want it.
Rich: Yeah.
Paul: And so I do feel—
Rich: You’re going to the barbershop.
Paul: I want options.
Rich: You’re hanging out.
Paul: That are small.
Rich: Yeah.
Paul: I talk with you all day long. I talk with a couple other people on a regular basis.
Rich: Yeah.
Paul: And I don’t really need that to be public, and I don’t need that to be connected to any database. I don’t need my private conversations with my children to be logged.
Rich: Right. Right.
Paul: And so am I living like, you know, Bruce Schneier, locking it all down and not showing my ID at the airport?
Rich: No.
Paul: No. But do I—
Rich: You still gotta live.
Paul: But—I’ll give you an example. Let’s say my kid had a health issue. I would use Signal to talk about that with my wife, if I was going to use a digital tool at all.
Rich: It’s a sensitive issue. Sensitive topic.
Paul: Yeah. If I was like, “Okay, we went to the doctor, I got this information about my child’s health.” I’m not going to put that out in the world.
Rich: Sure.
Paul: Because it’s too valuable.
Rich: Yeah.
Paul: It could turn into money.
Rich: Yeah.
Paul: And do I really think Google cares about my kid’s health?
Rich: No, it’s kind of not the point.
Paul: I just need to draw some lines.
Rich: Yeah.
Paul: And I think we all do. And I think that there’s an element of all of this where you, as a person, owe it to yourself: a couple of hours of education on the subject.
Rich: Yeah.
Paul: And then to say, I’m going to use this tool instead of that tool, and I’m not going to do this, but I am going to do that.
Rich: Yeah.
Paul: And just start to get a little bit more bearing, because the world’s pretty chaotic and I actually think cultivating smaller spaces will yield a lot more happiness and control.
Rich: I think, from a trust and safety perspective, yes. But I also think from, like, a personal mental health perspective, yes.
Paul: You ever heard of Dunbar’s number?
Rich: I have.
Paul: Okay. So I think it’s 140 or 150. It’s roughly the size a human society can scale to while you can still know everybody by name. You can know their kids.
Rich: Yeah. And then it melts.
Paul: After that, severe factionalism tends to occur.
Rich: Yeah.
Paul: And if you think about your dynamics, think about it: a 500-person group chat is just chaos. Like, that Discord? That requires hardcore moderation.
Rich: Yeah.
Paul: 20-person Discord? You might have one or two bad actors, but you can kind of boot them and go about your business.
Rich: Yeah.
Paul: But actually you get to a certain scale and, like, a chimpanzee dynamic starts to emerge.
Rich: And a performative dynamic. People behave differently in that environment, because they have an audience, not a group anymore.
Paul: So what I’m going to throw out is that the absolute best platforms for trust and safety are those with a smaller number of people. Where you don’t need trust and safety. You don’t need that infrastructure.
Rich: Yeah.
Paul: Because people are known to each other and can behave.
Rich: But the AI is magical.
Paul: And that’s why you need trust and safety.
Rich: Let’s leave it there.
Paul: There it is. All right, so—
Rich: Check us out at aboard.com.
Paul: You can trust us and we’re safe.
Rich: Look at us.
Paul: Yeah.
Rich: We look trustworthy.
Paul: Honestly, like, we knew how messy this thing was, and so it’s one of the reasons we built a nice office that people can visit. It’s one of the reasons—
Rich: Humans?
Paul: Yeah. Because you actually need to know that there’s something real.
Rich: Yeah.
Paul: And that this isn’t a bunch of…
Rich: Nonsense.
Paul: Random stuff.
Rich: Yeah.
Paul: So go to aboard.com, build some software by typing stuff into the box. We do eventually purge your data. We don’t hold on to it forever.
Rich: Yep.
Paul: We try to be respectful, but, you know, use common sense. Don’t upload a PDF with your Social Security number in it. Keep listening to the podcast. Give us five stars. Anything else?
Rich: Reach out if you have topic ideas, questions about what we talked about today or whatever. Hello@aboard.com.
Paul: Yes, hello@aboard.com.
Rich: Take care of each other. Have a lovely week.
Paul: Have trust.
[outro music]