the aboard newsletter

Good Bot, Bad Bot

With AI, a rule of thumb is emerging: More to less? Then you’re blessed. Less to more? Shut the door.

by -
Image of colorful items being put into a grinder, with similarly colorful wires coming out.

Look, it’s doing product management.

I want to share two interesting links today, because I think they represent two different ways of thinking about AI—and they connect to lots of things we talk about in this newsletter.

The first is a research paper that caught my attention: “AutoPR: Let’s Automate Your Academic Promotion!” It describes a system called PRAgent, built by a group of Chinese researchers. The idea is pretty simple: Give the system a research paper, and it makes social media and PR materials. It does a lot of work to make the posts pop, layering in emojis and a more considered structure. According to the paper, “PRAgent demonstrates substantial improvements, including a 604% increase in total watch time, a 438% rise in likes, and at least a 2.9x boost in overall engagement.”

Image of an AI-generated research summary.
That’s the good stuff right there.

I…like this! Why? Three reasons:

It makes economic sense. Researchers who aren’t affiliated with giant companies or large research labs at universities often have few resources to promote their research. And for the most part, biology postdocs can’t write good posts, even in their native language, let alone in several. AI won’t be as good at posting as a thoughtful human, but it will likely be better at fun, emoji-laden social media posts than, say, an actuarial science adjunct who speaks English as their fourth language.

It’s a product. There’s a demo, though I couldn’t make it work because I didn’t know which API keys it needed. But the idea is simple: You upload a PDF, it makes your stuff. You can review it, then post it. It puts guardrails on the LLM, forces it to behave, and produces legible output that a human can review. (There’s a rough sketch of that flow a little further down.)

It reduces and summarizes. As a rule of thumb, LLMs are more accurate when they summarize and less accurate when they generate from scratch. Producing social media posts and summaries from longer, detailed research papers is going to work pretty well; meanwhile, the things produced are going to link to the papers themselves. These are light promotional artifacts that guide people to a public resource.

The only thing I wish is that they had a default tag or notice that was always produced—“Summarized by AutoPR” or just “#AutoPR” would be enough. People should know when their content is made by bots.
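And here, for the curious, is a back-of-the-napkin sketch of that flow in Python. This is not AutoPR’s actual code, and the paper doesn’t publish a prompt like this; I’m assuming pypdf for pulling text out of the PDF and the standard OpenAI client for the model call, with the tag I’m wishing for baked in.

```python
# A rough sketch of an AutoPR-style flow, not the real thing.
# Assumptions: pypdf for text extraction, the OpenAI client for the model call,
# a model and prompt I made up. The point is the shape: summarize down, tag it,
# and let a human review before anything goes out.
from pypdf import PdfReader
from openai import OpenAI


def paper_to_post(pdf_path: str) -> str:
    # Pull the raw text out of the paper; the "thought" part already exists.
    reader = PdfReader(pdf_path)
    text = "\n".join(page.extract_text() or "" for page in reader.pages)

    # Ask the model to condense, not invent; the paper stays the source of truth.
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": (
                    "Turn this research paper into a short, emoji-friendly social post. "
                    "Link to the paper and always end with #AutoPR."
                ),
            },
            {"role": "user", "content": text[:50000]},  # crude length guardrail
        ],
    )
    return response.choices[0].message.content


# A human reads the draft before anything gets posted.
print(paper_to_post("my_paper.pdf"))
```

The whole trick is the direction of the arrow: a long, careful paper goes in, and a short, reviewable post comes out.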

So that’s a nice, positive use of AI that’s trying to do right by overworked scientists. 

Now I’m going to share a different kind of fave: It’s called BrodyAutomates.

On this Instagram account, an extremely influencer-ish person named Brody shows you how to automate everything with AI. Often the content is just what you’d expect: he’s big into the tool n8n, which lets you draw AI and internet automation workflows. In one post, he generates an invoice manager with Base44; in another, he makes a lot of YouTube slop and makes a lot of money.

But mixed in with the sincere posts, there are also these incredibly grim, extremely well-produced satirical posts: how to use AI to set up a gambling help line that pushes people to gamble more, how to automate dating, how to use AI to fire an employee, how to run an AI scam center that automatically scams people. He sums it all up with his “Hell Yeah” generator, which wakes you up at 6:45, plays “Man in the Box” by Alice in Chains, replies “can’t talk, grinding” to all your texts, and automatically lowballs 10,000 people selling Harleys on Facebook.

What’s wild about BrodyAutomates is that the line between satire and reality is purposefully blurred. It seems like there’s an AI glaze added to all the videos. What is real, what is an ad, what is a joke, and what is content—those boundaries don’t matter any more. I really do recommend you watch everything he’s doing and saying, because it’s a vision of the present, not the future, and the boundaries are getting real fuzzy, real fast.

I’ve been thinking a lot about how Rusty Foster called AI “language without thought,” and how Laurie Voss pointed out that AI is good at summarizing but less good at other things. Automating social media PR for nerdy research papers? Lots of reasons that could be useful and worthwhile. The “thought” part is in the research paper; you just need some different language. Automating dating (ironically)? Or semi-ironically generating tons of videos and dumping them on YouTube, unwatched, to monetize them? That’s a harder case to make: there’s not a lot of thought, just a lot of stuff that other people have to sift through.

I think a good rule of thumb is emerging: More to less? Then you’re blessed. Less to more? Shut the door. As always, there are exceptions. But we need some guidelines to get through this.