The Practitioner's Vocabulary Problem

vocabulary · practice · naming · invariants

There is a wide gap between doing knowledge work at an expert level and having the vocabulary to describe how it is done. Chess players compress decades of shared understanding into names like the "Grünfeld Defense" or the "Sicilian." In software engineering, programmers have elaborated design patterns around recurring solutions; "refactoring" names a whole discipline. Jazz musicians still lack that vocabulary — there's a reason so much pedagogy happens through playing together rather than talking. The knowledge is real, it's transferable, but it's stuck in a pre-linguistic state.

What's unusual about AI-assisted work is how quickly the gap opens. Most disciplines had decades to develop their vocabulary organically. We have months. The tools change underneath you while you're still trying to name what you learned on the last version. And the dominant discourse — hype on one end, dismissal on the other — actively resists the kind of careful naming that would help. Reviews and tutorials feel ephemeral, contributing to a persistent sense of falling behind.

Yet definite invariants do emerge from sustained work with LLMs. These practices get filed under "context engineering" or "prompt engineering," but those labels lack the specificity to be teachable.

What practitioners actually know

People who build with LLMs every day have developed intuitions that are genuinely hard to articulate. Here are three.

The planning-execution boundary. There's a point in any agentic workflow where you want the model to stop planning and start doing — and another point where you want it to stop doing and go back to planning. Experienced practitioners have a feel for where that boundary sits. But we don't have a crisp way to talk about why it sits there, which means we can't teach it systematically. We just say "stay in planning mode longer" and hope the listener develops the same intuition.

Constraint as communication protocol. The most productive AI-assisted work I've done hasn't come from asking for more — more tokens, more context, more capabilities. It's come from asking for less, more precisely. Constraining output format, constraining scope, constraining the model's role. This is an old idea in design — Stravinsky's "the more constraints one imposes, the more one frees oneself," Exupéry's perfection attained "not when there is nothing more to add, but when there is nothing left to take away" — but it takes on a specific character when your collaborator is a language model. The constraint isn't just a creative spur; it's a communication protocol. We don't have vocabulary that captures that dual nature.
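To make that dual nature concrete, here is a minimal sketch — every name in it is invented for illustration — of a constraint acting as a protocol rather than a creative spur: the model is asked for exactly one shape of output, and a validator enforces the contract, so "less, more precisely" becomes something you can check.

```python
# Illustrative sketch (hypothetical names): an output constraint treated
# as a communication protocol. The constraint does double duty — it
# narrows what the model may say, and it makes the reply machine-checkable.
import json

ALLOWED_KEYS = {"summary", "risks", "next_step"}

def validate_reply(raw: str) -> dict:
    """Accept a model reply only if it is JSON with exactly the allowed keys."""
    reply = json.loads(raw)
    if set(reply) != ALLOWED_KEYS:
        raise ValueError(f"out-of-contract keys: {set(reply) ^ ALLOWED_KEYS}")
    return reply

# A reply that honors the constraint passes...
ok = validate_reply('{"summary": "s", "risks": "r", "next_step": "n"}')

# ...one that volunteers extra material is rejected, by design.
try:
    validate_reply('{"summary": "s", "risks": "r", "next_step": "n", "extras": "x"}')
except ValueError as err:
    print("rejected:", err)
```

The design point is that the rejection path is not an error in the usual sense; it is the protocol working, the same way a type checker refusing a program is the type system working.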

The trust calibration problem. Every practitioner develops, often unconsciously, a mental model of when to trust the model's output and when to verify. This isn't binary — it's a continuous, context-dependent calibration that shifts based on domain, task type, model version, and how the conversation has gone so far. We call this "knowing the model's strengths and weaknesses," which is about as useful as saying a good driver "knows the road."
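One way to see why "knowing the model's strengths and weaknesses" undersells the skill is to write the calibration down as a function. This is a toy — every input, weight, and formula below is invented for illustration — but it makes the point that the decision is continuous and multi-variable, not a binary "trust the model?"

```python
# A toy model of trust calibration. All names and weights are invented;
# the point is only the shape of the decision: a continuous function of
# domain, stakes, and how the conversation has gone so far.
def verification_effort(domain_familiarity: float,
                        task_risk: float,
                        recent_error_rate: float) -> float:
    """Return a 0-1 score: what fraction of the output to verify by hand.

    All inputs are 0-1. Familiarity earns trust; high stakes and a streak
    of recent errors in this conversation spend it.
    """
    trust = domain_familiarity * (1.0 - recent_error_rate)
    return min(1.0, task_risk * (1.0 - trust) + recent_error_rate)

# Low-stakes task in a familiar domain: skim it.
print(verification_effort(0.9, 0.2, 0.0))   # near zero
# High-stakes task right after the model got something wrong: check everything.
print(verification_effort(0.9, 0.9, 0.5))   # near one
```

Nobody computes this consciously, and the real function is surely nothing like this one. But naming the inputs is already more teachable than "know the model."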

These are solvable problems, but only if we can name them precisely and not leave them as anecdotes.

The workshop journal

This site is an attempt to do the naming work. Not from above — I'm not proposing a taxonomy and asking people to adopt it. From inside the practice, while building things.

The format is a workshop journal. Some entries will be essays like this one. Some will be technical write-ups of systems I'm building: a constraint-based design system for teams that lack UI/UX intuition, a document orchestration layer, a conversational form architecture. Some will be short notes on a single idea or pattern. The connecting thread is that everything here comes from building, and everything is trying to find the right words for what the building teaches.

A few commitments:

I'll invent vocabulary when existing vocabulary fails. Not for the pleasure of neologism, but because sometimes the right word doesn't exist yet. If I coin a term and it doesn't earn its keep — if it doesn't help you think more clearly about something you already knew but couldn't say — it deserves to die. Language is a tool. Tools that don't work get discarded.

I'll show the work that doesn't work. The interesting thing about AI-assisted development isn't the success cases — those are easy to narrate. It's the failure modes, the patterns that almost hold, the approaches that work at one scale and collapse at another. If this journal is only victories, it's not honest.

I'll stay concrete. The temptation with a topic like "how humans and AI work together" is to float up into abstraction — philosophy of mind, alignment theory, the future of work. Those are real topics, but they're covered extensively elsewhere, often by people with more standing to speculate. What's under-covered is the middle layer: the practitioner's report from the workbench. This journal is itself an instance of the practice it documents — written with AI, shaped by human judgment, accountable to neither in isolation.

Why "slithy toves"

Lewis Carroll coined the phrase in "Jabberwocky." Humpty Dumpty later explains, in Through the Looking-Glass, that "slithy" means "lithe and slimy" — two real words, compressed into a blend that captures something neither word alone does. Carroll called these "portmanteau words." The technique has been remarkably productive: we still coin them all the time (blog, brunch, podcast).

That's the move this site is trying to make, applied to ideas instead of words. Take two real concepts that don't usually sit together, compress them, see if the result captures something neither concept alone does. Sometimes it will. Sometimes you'll get nonsense. The Jabberwock is, after all, a monster — but it's a monster you can slay, if you know the right words.