One of the earliest documented uses of the check valve is in the myth of Pandora’s box. This is also one of the earliest documented cases of Lorin’s Law due to the check valve being installed backwards. The cat’s out of the bag (or box) as they say, and that brings us to today’s topic: LLMs, a class of things which seems unlikely to flow back into Pandora’s box.
A year ago I was walking someone through my process of writing code from a reference (e.g. StackOverflow, or just cargo-culting an app’s existing conventions). I described how, even when I had something to copy, I’d type things out word-by-word, character-by-character anyway, because it helped me internalize and understand what I was writing. I find this particularly true for unfamiliar solutions found online, and worthwhile even with familiar code. It’s the same reason teaching a subject often deepens your understanding of it: you must rely not only on rote reproduction of knowledge, but also on synthesis of new ideas.
Writing things out this way is “slower”, but only in the sense of producing lines of code; there’s a trade-off. Many of these systems have subject matter experts that you call when things go seriously pear-shaped, and not coincidentally, these experts tend to be the people who built the system. “Slower” becomes “faster” when you page the expert at 4am for an inexplicable problem that’s dragged on for several hours, and then they toggle a single parameter to mitigate and go back to sleep. If you don’t care about what you’re producing then sure, I guess LLMs can produce code very quickly, but the bill always comes due.
(I can hear the peanut gallery saying that a single subject matter expert is also a single point of failure, and that other operators should also gain knowledge about the system. That’s kind of the whole point, because knowledge and expertise are gained through direct interaction with a system, something which is partially if not completely abdicated when marshalling LLMs.)
There’s a mistaken notion that the role of a software developer is to produce code, and while that’s certainly a big part of it, the “responsibilities” section of your average SWE job description begs to differ. A modern view of complex systems is that these systems do not so much have accidents as they produce accidents. That’s not to say these accidents aren’t undesirable or surprising, only that the systems are executing according to their design, no matter how undesirable or surprising that might be.
Sure makes you want to have some input and understanding of what that design is.
Anyway.
This isn’t a blag post about generative AI cynicism (though we can talk about that too); it’s an assertion that the act of producing code has at least equal value to the produced artifact. I know there’s a counterargument that you can just throw more LLMs at the problem and have them explain what they have wrought, but I think we’re a long way from an LLM telling the oral history of organizational decisions and design choices that ultimately underpin why the software is the way it is.
I know they’re not going back in Pandora’s box, and I’ve even seen some thoughtful approaches to LLMs in code authoring. I’m not telling you not to do it, but to be, well, thoughtful. Good luck to us.
Post-post script! Does all of this apply to writing emails, newsletters, blag posts, and other human-language writing? You bet it does.