In My Skills Era
AI can be transformative, but you’ve got to put the work in.

There are a lot of different ways to imagine an AI future. Sam Altman famously thought it should and could look like the movie Her, where you talk to an AI assistant with the voice of Scarlett Johansson. Replit, Lovable, and other vibe-coding tools think it looks like technical users typing instructions into a prompt box. Aboard thinks it looks like a chat on one side, with data and prototypes on the right.
I think we all know that, in general, we’re in the “horseless carriage era” when it comes to AI: We’re very focused on engineerless programming, artistless drawing, writerless publishing, consultantless strategizing. As often as the industry discusses the massive growth potential of this technology, those conversations tend to focus on what we’re going to be able to get rid of, not what we’re going to be able to add.
I recently wrote an essay for WIRED where I explained how I want AI to become…normal, even if that means the rate of growth slows down and the bubble pops. Namely:
What will it be like when AI is normal and boring? Well, the magic show will be over. You won’t have those moments of awe and wonder, like when ChatGPT told you dogs in Slovenia can talk, or Dall-E showed you your first sexy Pikachu. Yes, giant companies will continue to have the biggest and best AI, and they’ll still use it primarily to enhance the shopping experience and sell banner ads frictionlessly. And yes, we’ll continue to see tons of glazed AI-generated videos showing large-breasted human-cat hybrids abandoning their crying kittens. I’m sure we’ll see tremendous advancements in breast generation.
But then there’s everything else. We nerds have to learn how to teach people about LLMs, about how to put guardrails on AI projects and not just count on OpenAI to do it for us. We’ll have to ship products, and make smarter chatbots, and help people use these tools in good ways, even as other people are using ChatGPT to automate their dating lives. We’ll have to figure out where vibe coding is good and where it’s dangerous. And we’ll have to do it all while the world melts, both cognitively and glacier-ly, knowing that AI is contributing to that melt. On one side you’ve got these new human-impersonating machines that spew words, images, videos, and sounds by the millions per minute, and on the other you’ve got 8 billion agitated primates with access to weapons. The entire human commons is about to become a Superfund site, and the people who made the mess will move on to quantum computing. Once the frenzy fades, there’s just going to be a lot to do, and less sovereign wealth with which to do it.
So what will that even look like? I think if I could place a bet, it would not be on voice interfaces, or the total replacement of programmers, but rather on something like Claude Skills. (While Anthropic is pushing the concept, it will work with other LLMs, not just Claude.)
The basic idea is that you bundle up the things a computer needs to do a task—code, maybe some documentation—into a folder, and then you write up a bunch of paragraphs describing how to make something using those bits. You give it a name. That's a "skill." There are a bunch of examples on GitHub.
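To make that concrete, here's a minimal skill sketched from the published examples. The name, description, and instructions are invented for illustration; the shape—a folder holding a SKILL.md file with a little YAML frontmatter up top, then plain-English instructions—follows Anthropic's examples:

```markdown
---
name: bird-art
description: Generate animated flocking-bird art with p5.js. Use when the
  user asks for generative or algorithmic bird animations.
---

# Bird Art

1. Write a short .md file describing the aesthetic (motion, palette, mood).
2. Express it as a p5.js sketch: an .html viewer plus a .js file holding
   the flocking algorithm.
3. Keep tunable parameters (flock size, speed, color) near the top of the
   .js file so they're easy to tweak later.
```

As Anthropic describes it, the frontmatter's `description` is what the model scans to decide whether a skill applies; the full body is only loaded once the skill is actually invoked.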
Once you create a skill, you can invoke it. You might say something like, "Hey, can you use the algorithmic art skill and make a lot of birds flying around." If you've enabled Skills in your Claude settings, it won't improvise or try to find a new way to solve the problem; it will read your skill's documentation, follow it, and make a weird bird animation (that link points to an example invocation). In other words, skills are reusable guardrails that make sure an LLM behaves predictably and does certain things when prompted.
The official skill for creating “digital art” starts like this:
Algorithmic philosophies are computational aesthetic movements that are then expressed through code. Output .md files (philosophy), .html files (interactive viewer), and .js files (generative algorithms).
This happens in two steps:
- Algorithmic Philosophy Creation (.md file)
- Express by creating p5.js generative art (.html + .js files)
Wild stuff! Is this a sane way to program? Not really. But it’s comprehensible and communicable, and it gives you a way to organize your ideas and work.
All of this points to a future where you define a skill, run it, and then improve it: make it more general, or faster, or better optimized, or tweak the template a little. (A lot of the work we do internally at Aboard looks like skills, too.) My hunch is that this will be "programming" in the new world for a long time. You still need to know lots of boring things about data, files, and computation. But instead of capturing every edge case yourself, you can provide some sample code, describe how it's supposed to work, and let the LLM coordinate everything else. Do that on repeat and you've basically got "agents."
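That loop can be sketched in a few lines of Python. This is a toy, not any vendor's API: `call_llm` is a stand-in for whatever chat-completion client you use, and the skill is assumed to live in a folder containing a SKILL.md file, as in Anthropic's published examples.

```python
from pathlib import Path


def load_skill(skill_dir: str) -> str:
    """Read a skill's instructions from its SKILL.md file."""
    return Path(skill_dir, "SKILL.md").read_text()


def run_skill(skill_dir: str, task: str, call_llm) -> str:
    """Prepend the skill's instructions to the task, so the model
    follows the documented procedure instead of improvising."""
    instructions = load_skill(skill_dir)
    prompt = f"{instructions}\n\nTask: {task}"
    return call_llm(prompt)


if __name__ == "__main__":
    # Demo with a stub "model" that just echoes its prompt back.
    import tempfile

    with tempfile.TemporaryDirectory() as d:
        Path(d, "SKILL.md").write_text("Always draw birds with p5.js.")
        echo = lambda prompt: prompt
        out = run_skill(d, "Make a lot of birds flying around.", echo)
        print(out.splitlines()[0])  # the skill's guardrail comes first
```

The point of the shape is that the skill text, not the model's whim, sets the procedure; "improving the skill" is just editing a text file and rerunning.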
Skills remind me of "literate programming," the great computer scientist Donald Knuth's attempt to weave together code and prose in order to fully explain a program. Basically, you write text, add in chunks of code, and from the same files you can produce either a program or a book. Literate programming isn't documentation, because the code could literally be out of order; the idea was that the explanation had priority, and the code was extracted from it. It never really caught on in that form, although many of its ideas survive in more conventional guises, like documentation generators and interactive notebooks.
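For flavor, here's a scrap of invented literate source in the style of noweb, one of the simpler literate-programming tools: `notangle` extracts the program from it, `noweave` typesets the book. Notice the chunks appear in explanatory order, not program order—the loop is discussed before the boilerplate that contains it:

```
The program's real work is a single loop over the input.

<<main loop>>=
for (int i = 0; i < n; i++)
    total += values[i];
@

Only now do we wrap that loop in the scaffolding C requires:

<<*>>=
#include <stdio.h>
int main(void) {
    int values[] = {1, 2, 3}, n = 3, total = 0;
    <<main loop>>
    printf("%d\n", total);
    return 0;
}
@
```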
Claude Skills are a lot like literate programming, weaving together text, explanations, instructions, guidance, and code. As in literate programming, the explanation is in some ways more important than the code, but with one huge difference: the narrative text is also, through an LLM, executable. Ultimately, I think they're pretty recognizable artifacts. They make sense to read. They can be versioned, edited, and improved by groups of people and, yes, by LLMs. I expect we'll see zillions of them in the future.