the aboard newsletter

Language (and Code) Without Thought

Using AI to code is like looking into a mirror-world: The LLM generates meaninglessness that you turn into something useful.

by -

Having a little think. Image via Wikimedia Commons.

I was very jealous this week when I opened up the latest edition of the Today in Tabs newsletter and realized its author, Rusty Foster, had articulated something about AI in an incredibly clear and useful way. Namely:

The essential problem is this: generative language software is very good at producing long and contextually informed strings of language, and humanity has never before experienced coherent language without any cognition driving it. In regular life, we have never been required to distinguish between “language” and “thought” because only thought was capable of producing language, in any but the most trivial sense.…

The whole newsletter is worth reading, but the idea of “language without thought” is a really simple—and really helpful—distillation of what AI provides and what it doesn’t. We’ve never had this kind of technology at scale before. There have been weird poetic exercises in randomness or various kinds of cut-up techniques or the Flarf poetry movement, but these were creative experiments, not industrial-strength paid utilities with hundreds of millions of users.

One of the more interesting aspects of watching AI “land” in our culture is that, when it comes to things like essays, poems, images, or videos, there’s a very vociferous and expansive opposition to the technology. A lot of people hate AI for a lot of reasons: They see it as a threat to human equality, to meaningful careers, to democracy, to the information commons, to the environment. Terms like “glaze” and “slop” have emerged to allow us to collectively describe the artifacts of generative AI, as opposed to the more organic outputs of a human mind.

Small digression: I’ve been in a position to review a lot of artifacts (code, prose, essays, designs, business plans) over my life, and I can usually tell within a second or two whether someone has put in the work. Not whether it’s good—you need to actually read and review things for that—but whether they really did the work.

Human artifacts have a very uneven shape. The paragraphs are different lengths. The design is not perfectly symmetrical. The code has some functions that are longer than others. Human reality is…lumpy. You learn to see it. So for me, the “smoothness” of AI output, its tendency to complete the assignment from its internal outlines, always hits me as lazy. Not useless, or bad. Just incomplete.

One place where people are a little more muted in their criticism—not always, but in general—has been using AI to write code. It’s been a source of amusement to me that the leftiest, most progressive programmers I know (not all of them, but a lot) have fully internalized the arguments against AI, might hate the LLM companies, may even agree that AI coding products don’t offer the advantages that everyone promises, but admit that they will give up their little coding bots over their dead bodies. I get it.

Could this be hypocrisy? Sure. But thinking about “language without thought” brought me to a fun idea: Coding is sort of the opposite process of AI text generation. When you program, you have a bunch of ideas, and you have to translate them into programming languages, which then get translated not into meaning, but into mechanistic, endlessly repeating processes. By the time a piece of code translates through compilation into machine code, it runs billions of times a second—and it’s utterly unintelligible to us, with no connection to human language or thought. Down at the CPU level, it’s not even math, per se: It’s physics, a light switch turning on and off really fast.
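To make that translation concrete, here is a minimal sketch of my own (a few lines of C, not anything from the newsletter): a tiny function whose name and intent vanish entirely by the time a compiler is done with it.

    /* A human idea, written as language a person can read. */
    int add(int a, int b) {
        return a + b;
    }

    /*
     * A typical optimizing compiler on x86-64 (for example, gcc -O2)
     * reduces the function above to something along these lines:
     *
     *     add:
     *         lea eax, [rdi + rsi]
     *         ret
     *
     * No names, no intent, no meaning. Just registers and an instruction
     * the CPU can repeat billions of times a second.
     */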

If AI is computer-generated language without thought, coding is human-created language that will, eventually, be compiled down to something utterly stripped of thought and intent. The craft of programming is about fully accepting that the computer is very fast and shiny and cool, but absolutely as dumb as a rock. The older a software engineer, the more they expect everything to break—and the more their coding tends to be incredibly defensive, filled with huge numbers of catchments to capture the infinite flow of bugs before they bring the whole system down.
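As a rough illustration of that defensive habit (a sketch of mine, in C, not anyone’s production code): a routine that assumes every step can fail and checks each one before it can take anything else down.

    #include <stdio.h>

    /* Defensive style: assume every call can fail, and catch each failure
       where it happens instead of letting it spread through the system. */
    long count_bytes(const char *path) {
        if (path == NULL) {
            return -1;                    /* the caller handed us nothing */
        }
        FILE *f = fopen(path, "rb");
        if (f == NULL) {
            return -1;                    /* missing file, bad permissions, etc. */
        }
        long size = -1;
        if (fseek(f, 0, SEEK_END) == 0) { /* even seeking can fail */
            size = ftell(f);              /* returns -1 on failure, which we pass along */
        }
        fclose(f);
        return size;
    }

Most of those lines exist for the failures, not the success path, which is what years of watching systems fall over does to a person.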

I think this is why, as a software-oriented person, I get such a kick out of the new technologies. Using AI to code is like looking into a mirror-world where instead of making something that will end up running on an unthinking robot, the LLM generates meaninglessness and I have to turn it into something useful (which then, as code, gets turned back into robot repetitions). This is all so ridiculous and meta that it’s inspiring and fun, instead of sad. The thoughtlessness is a feature, not a bug.

At Aboard, our tool spins up very robust, very real prototypes of applications that are theoretically totally valid software, but they’re not done yet. They’re ways to explore ideas. They work, but they’re not yet lumpy. They need legacy data, weird business rules, all the homely stuff that makes software real and useful to groups of people, be they giant banks or small mutual aid societies. 

Over time, we can add more utility and automate more, but I don’t see any near future where you give a system a prompt and it comes back with completed software. We can use this stuff judiciously to automate processes that can be, basically, thoughtless—predictable, bureaucratic stuff that’s been done a million times before, like configuring a server or converting a video. But when it comes to helping people do things with software, we’re going to have to keep doing all the thinking.