Write-Time Intelligence
The industry defaults to runtime intelligence — let the model figure it out. But intelligence deployed at write-time has a steeper gradient, observable targets, and doesn't share the alignment problem.
Explorations into how humans and AI work together
How a Sanskrit verse about two illusions meeting became a constraint compiler with 18 canonical states, four command phases, and Nielsen heuristics as an objective function.
Teams don't fail because they lack capability. They fail because they re-solve solved problems, make changes without knowing the blast radius, and lose context when people leave.
A model M can solve task Y via prompt X only if their combined complexity is sufficient. This constraint is simple to state, hard to measure, and has a surprising practical shortcut.
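The constraint can be sketched in Kolmogorov-complexity style; this is a hypothetical formalization for illustration, not notation from the article itself:

```latex
% Hypothetical formalization (assumed, not the article's own):
% M solves Y via prompt X only if prompt and model together
% carry enough description length to pin down Y.
K(Y) \le K(X) + K(M) + O(1)
```

Read as a necessary condition: if the prompt and model jointly encode too little about the task, no prompt engineering closes the gap.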
When a task has a steep gradient toward the solution, small models perform as well as large ones. The leverage isn't in smarter models — it's in reshaping tasks so the answer is easier to fall into.