Don’t Mix Up Artifacts With Processes
You can predict how someone will feel about AI based on how much of their life is spent inside of a bureaucracy.

Good for one, not the other.
A lot of questions about AI reduce, when closely inspected, to silliness. One I see often is: “Could AI replace journalists?” But then who would write the restaurant reviews? There is no journalism without restaurant reviews.
What’s going on under that question is that people keep mixing up the artifact with the process. Play this out a little: When you read a restaurant review in a publication that has any allegiance to journalism (i.e., not pay-to-play Instagram influencers), the promise is that the reviewer:
- Has eaten food in the past;
- Can describe the taste and smell of food;
- Can describe their experience of being present in the restaurant at a point in time;
- Is behaving within ethical standards such as not being bribed (and if the meal is comped, they’ll tell you).
An LLM can do absolutely none of these things, because it has no senses, does not experience place or time, and cannot be ethical. It can make the artifact; it can’t follow the process.
At some point in every career, you come to realize that the artifact is a side effect of a process. Comedians have a hilarious hour of material for their HBO special because they’ve workshopped the jokes at comedy clubs for months. Newspapers happen because everyone knows their place in the daily schedule. Musicians work on albums for years; software also takes years to get right. Everyone thinks they could write a hit song, but very few people have the time, resources, and patience to write thousands of terrible songs until they figure out what makes a good one.
AI is wild because it can make convincing artifacts, what Rusty Foster called “language without thought.” We’ve never seen this before, and it’s making us all wacky. A bot made me a video! It wrote my college essay! It made an app! But when you look a little closer, the results turn out to be weird—vague, slightly eldritch artifacts. The code might run, but it still needs a lot of human intervention to become a product. To me, the jury is out on whether AI can make good stuff. But what it’s really good at is accelerating processes.
If your job is to create an artifact for public consumption, Claude and ChatGPT are often the wrong tools for the final product. The process does matter—it’s integral to the artifact. These tools can summarize news, send you interesting links, or give you helpful background. But ultimately, someone has to go to the restaurant, eat the breadsticks, and tell you how it went. That’s the process.
But if your job is to make artifacts that push the bureaucracy forward—case-file notes, quick briefs, purchase orders, incident reports, all the things that get emailed, reviewed, and filed on the “Z:\” drive—then this stuff is a godsend, and it lets you go home early. The artifacts you create in these jobs are not the end state—they’re part of the process. Look at doctors using electronic health record systems: They can barely handle their caseload as it is, and their industry is heavily regulated. They can assume, with some safety, that AI-driven change will come with guardrails, and that large medical associations will have their backs. Doctors are often excited about AI because they don’t feel like their role in the system is replaceable.
This connects to another thing I’ve noticed: A person’s politics are not strongly predictive of how they perceive AI. Independent, progressive people—freelance writers, illustrators—really hate this stuff for a variety of reasons. But I keep meeting other types of people—doctors who work in harm reduction and free clinics, do-gooder lawyers, not-for-profit people—who tend to have the same values the first group espouses, and they’re absolutely in love with AI.
I’ve never seen anything like this, at least not in tech—where people believe the same things about society, but one group wants to blow up data centers, and the other is starry-eyed with excitement about LLMs. I increasingly believe that you can predict how a person will react to all this stuff by figuring out how much of their life is spent inside of a bureaucracy. Work on your own? The bots are coming to ruin your life. Manage employee and constituent safety at a large group of harm-reduction-focused, state-funded addiction recovery clinics? “I use it for everything.”
Focusing this a little more: I think if you are responsible for a public-facing artifact—a restaurant review, a mobile app, an EDM track—the fact that AI can skip straight to the artifact, without the process, is absolutely nerve-racking: the process is hard to learn, it’s how you differentiate yourself, and it’s how you ensure quality, safety, and reliability.
But if you work in a big bureaucracy, your artifacts often go unread. A lot of time is spent filling in forms. It’s not a creative act; it just captures the state of the process and feeds the next step. An awful lot of people are doing tons of work just to feed the CEO’s executive dashboard. They hate that part, AI makes it easy, and they think you’re bananas if you can’t see that.
The current moment is basically a huge masterclass on the difference between processes and artifacts. When someone says they hate AI, they tend to hate the artifacts it creates, along with the ecological and cultural costs. When someone says they love it, they tend to love the way it speeds up their processes and lets them focus elsewhere.
Is there a way to resolve this? I’m asking. I do not have the answer.