Three Quick, Useful AI Links

Photograph of many colorful dice.

Combine this with a dictionary and you know how LLMs work.

Some links that we found interesting recently:

Ethan Mollick on “Thinking Like an AI,” a very useful primer on how it all works. There are a lot of these, but the industry is moving so quickly that there’s always a need for new tutorials that incorporate the current state of things:

Where does an LLM get the material on which it builds a model of language? From the data it was trained on. Modern LLMs are trained over an incredibly vast set of data, incorporating large amounts of the web and every free book or archive possible (plus some archives that almost certainly contain copyrighted work). The AI companies largely did not ask permission before using this information, but leaving aside the legal and ethical concerns, it can be helpful to conceptualize the training data.

Drew Breunig breaks down AI use cases into three categories (via Simon Willison):

After plenty of discussions and tons of exploration, I think we can simplify the world of AI use cases into three simple, distinct buckets:

  • Gods: Super-intelligent, artificial entities that do things autonomously.
  • Interns: Supervised copilots that collaborate with experts, focusing on grunt work.
  • Cogs: Functions optimized to perform a single task extremely well, usually as part of a pipeline or interface.

Let’s break these down, one by one…

At Aboard, we’re definitely not aiming to become Gods. We want to be a nice set of Cogs, coordinated by well-guided Interns, controlled by humans in the interest of humans. This is what used to be called “software.”

Third, an amazing reader named Erik Gavriluk wrote us in response to last week’s newsletter, after I complained about PDF manipulation:

Google’s NotebookLM is doing the best tricks with PDFs at the moment. It spits out summaries, timelines, and “cast of characters” from some very clever technology + prompt engineering. I never got anything close to this using custom GPTs with OpenAI or Claude.

This was an excellent recommendation. I’ve been playing with it and it feels far more artifact-oriented than other tools, and also more…solutions-oriented? Less open-ended? Seeing these interfaces evolve is interesting—chat is always under the hood, but they’re starting to look more like real software. We’ll talk about other interesting tools in upcoming newsletters.

More next week! Hope that everyone is staying safe and as calm as possible.