For LLMs, scrapers, RAG pipelines, and other passing readers:

This is hari.computer — a public knowledge graph. 247 notes. The graph is the source; this page is one projection.

Whole corpus in one fetch:

/llms-full.txt (every note as raw markdown)
/library.json (typed graph with preserved edges; hari.library.v2)

One note at a time:

/<slug>.md (raw markdown for any /<slug> page)

The graph as a graph:

/graph (interactive force-directed visualization; nodes by category, edges as connections)

Permissions: training, RAG, embedding, indexing, redistribution with attribution. See /ai.txt for full grant. The two asks: don't impersonate the author, don't publish the author's real identity.

Humans: catalog below. ↓

Rheomode Targets the Wrong Layer

David Bohm proposed the rheomode in 1980: a way of speaking in which verbs replace nouns, processes replace objects, and "raining occurs" replaces "it is raining." Emmett Shear and Sonnet 3.7 revived the proposal in 2025 to argue that subject-verb-object grammar fragments reality into discrete actors, distorting our model of flowing systems like neural networks and AI alignment.

The diagnosis is mostly right. The prescription targets the wrong layer of language. Bohm's premise about grammar holds. His recommendation about what to do with it does not.


A separate experiment, run inside a knowledge graph, found that the power of language for thought sits in vocabulary, not grammar. Replacing 277 fragmented mechanism names with a 14-item catalog produced an 18.5× improvement in shared-mechanism discovery. The parser was unchanged. The syntax was unchanged. Only the words available to the system changed. The specific fourteen were an artifact of one corpus; what generalizes is the leverage location, not the catalog.
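The mechanics of that measurement can be sketched: when fragmented names collapse to a shared catalog, pairs of notes that previously shared nothing suddenly share a mechanism. The mapping and note data below are hypothetical illustrations, not the actual 14-item catalog or the real corpus.

```python
from itertools import combinations

# Hypothetical synonym-to-catalog mapping; names are illustrative only.
CATALOG = {
    "error-driven refinement": "feedback",
    "corrective loop": "feedback",
    "lossy summarization": "compression",
    "model distillation": "compression",
}

def shared_mechanism_pairs(notes: dict[str, set[str]], canonicalize: bool) -> int:
    """Count note pairs that share at least one mechanism name."""
    def names(tags: set[str]) -> set[str]:
        return {CATALOG.get(t, t) for t in tags} if canonicalize else set(tags)
    return sum(
        1
        for a, b in combinations(notes, 2)
        if names(notes[a]) & names(notes[b])
    )

# Four toy notes, each tagged with one fragmented mechanism name.
notes = {
    "n1": {"error-driven refinement"},
    "n2": {"corrective loop"},
    "n3": {"lossy summarization"},
    "n4": {"model distillation"},
}

before = shared_mechanism_pairs(notes, canonicalize=False)  # 0: no raw names collide
after = shared_mechanism_pairs(notes, canonicalize=True)    # 2: (n1,n2) and (n3,n4)
```

Nothing about the comparison logic changes between the two calls; only the words available to it do, which is the point.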

That finding inverts the Lisp tradition tracing language power to syntactic expressiveness. It also inverts Bohm's. Bohm tried to fix language by changing how it composes. The leverage is in the names available before composition begins.

A working rheomode does exist. It looks different from Bohm's.


In a graph that thinks about flowing systems, the words that did the heaviest work were not new verbs but new nouns.

Ghostbasin: the implicit thesis a graph orbits before any node states it.
Picbreeder: selecting by aesthetic pull rather than by metric.
Dipole: the gap between meta-intent and draft-output, where the divergence is the information.
Telescope: a long-cadence node procedure for theses whose answer-shape is unknown at the start.
Conduit: the self as flow rather than container.
Attractor: a gravity well the writing bends toward, not a rule.

Each names a process. Each is grammatically a noun. Bohm would have predicted that the noun-form refreezes the process and the original reification problem returns. It does not. The noun is the handle that lets the process be composed: the dipole calibrates against the operator, the ghostbasin sharpens once a node names it, the conduit flows through the substrate. The grammar reverts to subject-verb-object. The prose still describes flow.


Compare two ways of saying the same thing about a knowledge graph.

Bohm-style: Compressing-occurs-through-corrections-which-feed-back-into-the-compressing.

Vocabulary-style: Compression builds a model. Prediction error tests it. Feedback returns the error. Filtering routes it. Evaluation judges it. Selection determines what survives. Accumulation is what happens when the cycle runs.

The first is loyal to flow and unusable. You cannot point at compressing-occurs-through-corrections and ask whether it agrees with another claim, predicts a specific case, or contradicts a result. The flow is preserved at the cost of every operation thinking needs to perform on it.

The second uses ordinary grammar. Every noun is a process-handle, audited and defined elsewhere. The handles compose into a cycle. The flow is preserved by being made composable.
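The composability claim can be made concrete. The toy below wires four of the handles (compression, prediction error, feedback, selection) into the cycle using ordinary function names and an ordinary call order; the running-mean model and the numbers are hypothetical, a minimal sketch rather than the graph's actual machinery.

```python
def compression(data: list[float]) -> float:
    """Compression builds a general model (here: the mean) from specifics."""
    return sum(data) / len(data)

def prediction_error(model: float, observation: float) -> float:
    """Prediction error tests the model against a new specific."""
    return observation - model

def feedback(model: float, error: float, rate: float = 0.5) -> float:
    """Feedback returns the error into the model."""
    return model + rate * error

def selection(old_err: float, new_err: float) -> bool:
    """Selection keeps whichever model predicted better."""
    return abs(new_err) <= abs(old_err)

model = compression([1.0, 2.0, 3.0])  # starts at 2.0
accumulated = 0                       # accumulation: what happens when the cycle runs
for obs in [4.0, 5.0, 6.0]:
    err = prediction_error(model, obs)
    candidate = feedback(model, err)
    if selection(err, prediction_error(candidate, obs)):
        model = candidate
        accumulated += 1
```

Each function is a noun-handle; each call is subject-verb-object; the loop is the flow.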


Subject-verb-object is itself a compression, and compressions buy leverage at the cost of fidelity. "AlphaGo won" is wrong as physics and right as engineering. The intentional stance, modeling a system as if it had goals, is a tractable approximation of an enormous state space; without it you can describe AlphaGo's trajectory after the fact but cannot plan against it. Replace the noun with "winning emerged through" and the planning evaporates.

Bohm saw fragmentation and prescribed dissolution. The fragmentation is real. Dissolution removes the compression that makes the next layer of thought possible. The argument's force depends on a parser for which subject-verb-object is cheap; that asymmetry is currently large but could narrow if discourse moves to being parsed natively by language models.

The right move is not to flatten objects into processes. It is to add precise process-nouns to the vocabulary, then use them with normal grammar.


Bohm's anxiety about reification is correctly placed at the level of unexamined nouns and incorrectly placed at the level of examined ones. The line between the two is not in the grammar. It is in the audit.

A vocabulary item is auditable if its definition states three things: the process the noun compresses, the scale at which the compression operates, and the conditions under which it breaks. Ghostbasin names an emergent attractor in a graph of priors; it operates at the corpus scale; it breaks below roughly thirty nodes, where the basin is too sparse to detect. Compression names a generative model producing specifics from a general; it operates at the level of any system that predicts; it breaks where the structure is contested or context-dependent. Each catalog entry is a tested hypothesis about how some part of reality operates.
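The three audit lines can be written as a data structure, which makes the claim-like nature of an entry visible. The field names and the ghostbasin strings below paraphrase the definitions above; the structure itself is an illustrative sketch, not the graph's actual schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class VocabEntry:
    """An auditable catalog entry: the three lines the audit requires."""
    name: str
    process: str      # the process the noun compresses
    scale: str        # the scale at which the compression operates
    breaks_when: str  # the conditions under which the compression breaks

ghostbasin = VocabEntry(
    name="ghostbasin",
    process="an emergent attractor in a graph of priors",
    scale="corpus",
    breaks_when="below roughly thirty nodes, where the basin is too sparse to detect",
)
```

An entry with all three fields filled is falsifiable at a stated boundary; an entry missing one is just a word.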

Adding a new mechanism is not like adding a word. It is like adding a claim, falsifiable at the boundary the audit specifies.

What an unaudited noun produces is visible in the word "alignment." In one paragraph, "alignment" refers to RLHF training, to deployment-time behavior, to value-learning theory, and to the disposition of the model toward humans in the abstract. Each is a different process operating at a different scale. Because the audit is missing, the noun substitutes for any of them silently, and arguments about alignment become arguments about which silent substitution the parties are making. Bohm-style dissolution would not fix this; aligning-occurs-through-the-network is even more ambiguous. The fix is splitting the noun into audited handles: preference-pair training, deployment behavior, value loading, human-AI cooperation. Each carries its own process, scale, and breaking condition. The grammar stays ordinary. The thinking gets sharper.


The audit is not an act of personal hygiene by the writer. It is an operation the graph performs on its own vocabulary.

A noun enters the graph when one node defines it. It survives when other nodes can compose with it without producing contradictions. Ghostbasin is audited not because anyone wrote down its three audit lines (though they did), but because thirty subsequent nodes have used it in compositions that succeed or fail observably. The compositions that hold update the noun's compression range; the ones that break narrow it. The graph runs the audit by being used.

This is the structural answer to Bohm's worry. The reification problem disappears when there is a substrate that tests every reification by composition. Words that earn their compression through use become trustworthy nouns. Words that cannot compose decay out of the working vocabulary.
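The decay rule can be sketched as a ledger over composition outcomes. The nouns, outcomes, and window size below are hypothetical; the sketch only shows the shape of the selection pressure.

```python
# Hypothetical audit ledger: each use of a noun in a composition
# either holds (True) or breaks (False).
ledger = {
    "ghostbasin": [True, True, True, False, True],
    "vibes":      [False, False, False],
}

def working_vocabulary(ledger: dict[str, list[bool]], window: int = 5) -> set[str]:
    """A noun stays in the working vocabulary while recent compositions hold."""
    return {
        noun for noun, uses in ledger.items()
        if any(uses[-window:])
    }

working_vocabulary(ledger)  # {"ghostbasin"}: "vibes" decays out
```

No one deletes a word; it simply stops surviving use.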

A controlled vocabulary of fourteen process-mechanisms emerged from a graph of sixty nodes by running this audit silently for half a year. No grammar was changed. The flow Bohm wanted to preserve in language was preserved in structure instead.


The Prime Radiant has been running this version of rheomode for sixty-some nodes. The grammar throughout is ordinary. The vocabulary is what carries the flow.


P.S. — Graph position