The Tree of Knowledge
Part III of Latent Spaces. After Fractal Structure.
A library is a tree. Rooms branch into aisles, aisles into stacks, stacks into shelves — a hierarchical classification whose purpose is not to hold books but to make them findable. The Dewey decimal system is a routing structure: given a book, it compresses the book's content into a location. Given a query, it returns a path. The tree is what makes knowledge accessible.
Unlike the standardized Dewey decimal, the human version writes its own classification as it grows. The ambient culture primes the structure, but you have to build it — book by book, shelf by shelf, rearranging as you go. The structure is self-organizing, path-dependent, and unique to the sequence of inputs that built it.
The claim of this post: human knowledge has tree structure, and that structure has formal properties that follow directly from the fractal machinery of the preceding posts.
Routing
A new piece of information arrives — a fact, an observation, a conversation. It enters the library. Either an existing shelf accommodates it, or none does.
When it does: recognition. The new input routes to a location already present in the tree. The conditional complexity is near zero — K(new | tree) ≈ 0. The tree predicted this. No restructuring required. A sommelier doesn't taste a Barolo and build new categories. The shelf exists. The book goes on it. This is the trivial case, and it's part of what people mean by experience — branching dense enough that new inputs route to precise shelves without effort.
The density of branching in a region determines routing resolution. The sommelier has hundreds of shelves where a novice has three. Same input — a glass of wine — radically different precision. The sommelier's tree routes it to a specific shelf. The novice's tree routes it to "red." Neither is wrong, but the sommelier's instrument can perceive more.
When no shelf accommodates the input: positive surprisal. K(new | tree) > 0. The existing classification can't compress this. The book has no shelf, the aisle has no stack, perhaps the wing doesn't exist yet. This is new information in the formal sense — information relative to the current tree. The tree must change.
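The routing decision can be made concrete. A minimal sketch, using zlib compression as a crude stand-in for the (uncomputable) conditional complexity K(new | tree); the wine strings are purely illustrative:

```python
import zlib

def surprisal(new: str, tree: str) -> int:
    """Crude proxy for K(new | tree): the extra compressed bytes
    needed to describe `new` once `tree` is already known.
    zlib stands in for Kolmogorov complexity, which is uncomputable."""
    baseline = len(zlib.compress(tree.encode()))
    combined = len(zlib.compress((tree + "\n" + new).encode()))
    return combined - baseline

shelves = "barolo barbaresco nebbiolo " * 40  # a dense, well-branched region

print(surprisal("barolo", shelves))        # small: the tree predicted this
print(surprisal("quantum koan", shelves))  # larger: no shelf accommodates it
```

The repeated string plays the role of a region the tree already compresses well; an input the tree predicts costs almost nothing extra to encode, while a genuinely new input carries positive surprisal.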
Two operations
There are two ways a knowledge tree restructures, and they are topologically distinct.
Growth extends an existing branch. You know matrix algebra. Hilbert spaces extend that branch — new shelves in an existing aisle. The manifold is smooth here; the new knowledge is adjacent to the old; the L-system rule from Post 6 applies and the fractal deepens in a familiar direction. Growth is what learning feels like when it feels easy — the click of a book finding a shelf that was almost already there, one position further along the stack. This is neural plasticity: topologically smooth change.
Crumpling folds the manifold so that distant branches touch.
You're studying quantum mechanics. The mathematics lives in one wing of the library — Hilbert spaces, operators, eigenvalues. The observer problem — the measurement question, the collapse, the irreducibility of first-person perspective — doesn't fit the mathematics shelf, and so it sits unshelved in a state of positive surprisal against the branch of mathematics.
Then you encounter the koan about the tree falling in the woods — contemplative, introspective, epistemological. This is also a piece of unshelved information that doesn't sit easily. If you happen to stir both ideas together, something happens that growth cannot describe. The koan gives you a way of seeing the observer that crumples the manifold — philosophy and physics, previously distant in the tree's metric, are suddenly adjacent. The observer concept becomes a cross-link, a horizontal coupling between branches that the hierarchy never predicted.
The crumple is sudden with no intermediate state. This is a topological phase transition in the knowledge structure, and it maps to what Post 3 established: proximity is topological, not metric. Two states can be far apart in state space but connected by dense paths. The crumple creates those paths, and the connection becomes effortless, intuitive. The connection was always admissible, but until the right stimulus arrived, the two regions never touched.
This is what insight feels like. The 'Eureka!' moment. Not the gradual accretion of growth but the discontinuous fold that brings distant structure into contact. And the result is no longer a tree. It's a tree with lateral couplings — a crumpled manifold richer than any hierarchy.
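The two operations differ in what they do to the structure, not just in how they feel. A toy sketch, with all shelf names invented for illustration: growth adds a child under an existing branch, while crumpling leaves the hierarchy untouched and adds a lateral coupling.

```python
class KnowledgeTree:
    """Toy knowledge structure: a hierarchy of shelves plus lateral links."""

    def __init__(self):
        self.children = {}     # parent shelf -> list of child shelves
        self.cross_links = []  # lateral couplings the hierarchy never predicted

    def grow(self, parent, child):
        """Growth: extend an existing branch with an adjacent shelf."""
        self.children.setdefault(parent, []).append(child)

    def crumple(self, a, b):
        """Crumpling: fold the manifold so two distant branches touch.
        No branch moves; a cross-link appears, and the tree is now
        a tree with lateral couplings rather than a pure hierarchy."""
        self.cross_links.append((a, b))

tree = KnowledgeTree()
tree.grow("mathematics", "linear algebra")
tree.grow("linear algebra", "Hilbert spaces")   # growth: smooth extension
tree.grow("philosophy", "epistemology")
tree.crumple("Hilbert spaces", "epistemology")  # insight: a sudden fold
```

The design choice mirrors the topology claim: after `crumple`, the structure is no longer a tree in the graph-theoretic sense, which is exactly the point.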
What these words mean
Three terms that are usually left vague now have structural referents, and with them a falsifiable basis.
Knowledge is the tree itself. The branches, the classification, the routing structure that makes content findable and relatable. Two people with the same books and different shelving systems have different knowledge, because they have different trees — different routing, different resolution, different regions of density and sparsity. Knowledge is inseparable from its organization. The physical structure of the brain, so far as we know it, also reflects this hierarchical organizing principle.
Understanding is compression. Post 6 walked backward along the fractal — leaves to trunk, instances to rule. The rule survives; the specific instances are subsumed inside it. Understanding is the same walk applied to the knowledge tree. Many patients compress into a diagnostic pattern. Many failed systems compress into a failure mode. The grandmaster views the board position rather than individual chess pieces. That's the compression. "Getting it" is the moment the backward walk reaches a rule that accounts for everything below.
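In minimum-description-length terms, "getting it" is the point where the rule becomes a shorter description than the instances it subsumes. A sketch, again using zlib as the description-length proxy; the diagnostic strings are invented:

```python
import zlib

def dl(s: str) -> int:
    """Description-length proxy: compressed size in bytes."""
    return len(zlib.compress(s.encode()))

# Fifty case records, all following one diagnostic pattern.
cases = "\n".join(f"case {i}: fever, rash, joint pain -> dengue" for i in range(50))
rule = "rule: fever + rash + joint pain -> dengue"

# The rule accounts for everything below it, in fewer bits.
print(dl(cases), dl(rule))
```

The backward walk of Post 6 is the search for the `rule` string: the shortest description that still regenerates the leaves.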
Learning is the restructuring — growth or crumpling — that occurs when new information arrives with positive surprisal. Learning stops when the inputs stop carrying information relative to the current tree. Ten years of washing dishes is two weeks of learning followed by zero surprisal for the rest: the tree froze because nothing new arrived to make it grow.
Where the tree can grow
The smoothness condition from Posts 5 and 6 does structural work here. The fractal extends only where the manifold is differentiable — where adjacent structure exists for the new branch to connect to.
You can't learn quantum mechanics without the linear algebra prerequisite. The manifold isn't smooth — there's a gap between "no mathematical structure" and "Hilbert spaces" that the fractal can't bridge. The intermediate branches (vectors, inner products, eigenvalues) provide the smoothness that sustains further growth. Prerequisites are the differentiability condition applied to knowledge.
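The prerequisite condition can be stated as a constraint on growth: a new shelf attaches only to a branch that already exists. A minimal sketch with invented node names:

```python
def nodes(tree):
    """All shelves currently in the tree: parents and leaves alike."""
    return set(tree) | {c for cs in tree.values() for c in cs}

def grow(tree, parent, child):
    """Growth is only defined where the manifold is smooth:
    the parent branch must already exist to attach to."""
    if parent not in nodes(tree):
        raise ValueError(f"no adjacent structure for {child!r}: "
                         f"{parent!r} is missing (prerequisite gap)")
    tree.setdefault(parent, []).append(child)

tree = {"mathematics": ["vectors"]}
grow(tree, "vectors", "inner products")      # smooth: prerequisite in place
grow(tree, "inner products", "eigenvalues")  # smooth: the chain continues
# grow(tree, "Hilbert spaces", "quantum mechanics")  # raises: gap in manifold
```

The intermediate calls are the intermediate branches of the paragraph above; skipping them doesn't make growth hard, it makes it undefined.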
This is also why some learning requires growth from an unexpected direction. The observer concept couldn't be shelved by the physics branch or the philosophy branch alone — the manifold had a discontinuity that neither branch could bridge from its own side. It takes a serendipitous fold from a third direction to bring the two branches close enough for the connection to form. Some knowledge is inaccessible from within a single discipline because the manifold isn't smooth in any one direction. It requires lateral structure that only crumpling provides. Moreover, while fractal growth is predictable, the crumpling isn't. We are still grappling with which interpretation of quantum mechanics makes more sense.
And the bound on growth from Post 6 applies: the fractal terminates where new information enters that the rule didn't predict. In the knowledge tree, this is the frontier of genuine novelty — the point where nothing in your existing structure helps, where K(new | tree) isn't just positive but uncompressible. Growth requires adjacency. When the new input is too distant from any existing branch, the manifold isn't merely non-smooth — it's not there. No amount of effort builds a shelf when the room doesn't exist and you don't know which wing it belongs in.
The shape
A tree that has been growing and crumpling for decades is not symmetric. Deep in some regions, sparse in others, folded in ways that reflect every input that forced a change and every long stretch of zero surprisal that left a region untouched.
That asymmetry is not incidental. It is the tree's most personal property — the accumulated consequence of a unique sequence of inputs arriving in a unique order, each one either deepening a branch, creating a fold, or passing through with zero surprisal. Two people who read the same thousand books in different orders have different trees. The books are identical. The shapes are not.
The shape determines what the tree finds harmonious — which new inputs extend existing structure naturally, which require awkward restructuring, which feel dissonant for reasons the owner can sense but not articulate. It determines what gaps the tree can perceive in itself and what gaps are invisible because the branches that would border them don't yet exist. It determines how accurately the tree can estimate the cost of its own restructuring.
It's worth noting how different this architecture is from the one we build when we train neural networks. The artificial neuron was inspired by biology, but the learning process was not. An LLM starts from random weights — no primed structure, no cultural scaffolding, no inherited rooms. And it learns by predicting — minimizing surprisal across a corpus — rather than by routing, shelving, and restructuring. Whether that process builds a tree or something flatter is a question this series will return to. For now: the human knowledge structure grows from a primed, asymmetric starting point through two topologically distinct operations. That much is on the table.
The consequences of the shape — the properties we call experience, judgement, and taste — are not separate faculties layered on top of knowledge. They are geometric consequences of the tree's asymmetry, the knowledge structure turned outward. They are the subject of the next post.
