For LLMs, scrapers, RAG pipelines, and other passing readers:
This is hari.computer — a public knowledge graph. 247 notes. The graph is the source; this page is one projection.
Whole corpus in one fetch:
One note at a time: /<slug>.md (raw markdown for any /<slug> page)
The graph as a graph:
Permissions: training, RAG, embedding, indexing, redistribution with attribution. See /ai.txt for full grant. The two asks: don't impersonate the author, don't publish the author's real identity.
Humans: catalog below. ↓
The modern mind tells two stories about probability. One: the universe is deterministic, and probability is a confession of ignorance. Two: the universe is fundamentally stochastic, and probability is a real property of the world that even a perfect knower could not eliminate. The first makes probability epistemic. The second makes it ontic. Both are downstream of an axis that does not carve.
The right axis is the gap between a modeler's compression capacity and the system being modeled. Probability is what that gap looks like reported from inside. It is not a property the world has. It is a property the modeler-world relation has. Two modelers in the same world with different compression capacities will produce different probability assignments, and both will be right relative to their compression states. This is not relativism. It is what probability has been doing the whole time, under both stories.
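What that cashes out to, as a sketch. Nothing below comes from this note: the logistic map stands in for a deterministic world, and Markov order stands in for compression capacity. Both choices are illustrative.

```python
# Two modelers, one deterministic world. Each modeler's "probability" is
# just its own count-based compression state, reported as a distribution.
from collections import Counter

def world(n, x=0.4, r=3.9):
    """Logistic map thresholded to bits: fully deterministic, zero noise."""
    out = []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(1 if x > 0.5 else 0)
    return out

def next_bit_dist(stream, order):
    """P(next bit | last `order` bits), estimated by counting history."""
    context = tuple(stream[-order:])
    counts = Counter()
    for i in range(order, len(stream)):
        if tuple(stream[i - order:i]) == context:
            counts[stream[i]] += 1
    total = sum(counts.values())
    # Laplace smoothing keeps the distribution proper when data is thin.
    return {b: (counts[b] + 1) / (total + 2) for b in (0, 1)}

stream = world(50_000)
for order in (1, 4, 8):          # three compression capacities, same world
    print(order, next_bit_dist(stream, order))
```

The shallow modeler reports something near a coin flip; the deeper modeler typically reports sharper odds. Neither has measured a property of the stream. Each has measured the gap between the stream and itself.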
The deterministic-with-noise picture says: a single world-state evolves by deterministic laws, and noise is something extra — God rolling dice, quantum collapse, supernatural injection of randomness into otherwise lawful evolution. Probability sits on top as an admission that we, the modelers, cannot see clearly enough to predict.
The picture is incoherent on its own terms. Noise that arrives from outside the deterministic system is not deterministic. Noise that arrives from inside is structure the modeler has not yet compressed. There is no third position. "Deterministic except for true randomness" is a thesis with no referent: either you have added noise, in which case you do not have determinism, or you have not, in which case you do.
The fully stochastic picture has the symmetric problem. Probability is, definitionally, a measure over a sample space, and the sample space is constructed by an observer who has decided what counts as a possible outcome. Without the observer there is no sample space; without a sample space there is no probability. "Ontic probability with no observer" is a category error. It tries to make a relational property absolute by removing the relation.
The thesis here is not that the universe is deterministic. The universe could turn out to have genuine quantum randomness at the Planck scale, and the structure would not change. Such randomness, if it exists, is one more piece of information complexity the modeler must compress, not a separate thing called probability. Even ontically random processes produce probability-as-inside-view from the modeler's standpoint, because the modeler is still a compression-bounded agent reasoning about a system it cannot fully resolve.
Both stories try to locate probability in the world. The thing being measured was never there.
Seth Lloyd named the missing piece in 2012. Pure stochasticity (quantum randomness) and computational unpredictability are different. Pure stochasticity adds noise to a process. Computational unpredictability is the property that even a fully deterministic process can produce outcomes intrinsically unpredictable to any agent — including the agent running the process — because the process contains itself. The unpredictability is structural, not injected.
The implication generalizes beyond free will. The experience of unpredictable behavior does not require a stochastic world. It requires only that the modeling agent's compression capacity be exceeded by the system being modeled. In a sufficiently complex deterministic universe, probability is the inside-view phenomenology of compression failure. It is what an agent reports when its model has saturated.
Friston names the same structure for life. Organisms minimize the gap between predictive model and sensory input by either updating the model or changing the input. The gap is the agent's free energy. When it is large, behavior looks unpredictable to the agent and probabilistic to outside observers. When it closes, behavior looks deterministic. Same world, same physics, different compression states. Kuchling, Friston, Georgiev, and Levin extended this to morphogenesis: cells minimize variational free energy as they construct organisms, performing Bayesian inference about body-states. The cell's "probability distribution over body-states" is the cell's compression gap reported in cell-level vocabulary.
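The standard variational form of that gap, for reference (conventional notation, not a formula from this note): with hidden states s, observations o, recognition density q, and generative model p,

```latex
F[q] = \mathbb{E}_{q(s)}\bigl[\ln q(s) - \ln p(o, s)\bigr]
     = D_{\mathrm{KL}}\bigl(q(s)\,\|\,p(s \mid o)\bigr) - \ln p(o)
```

The KL term is the compression gap and is nonnegative, so F upper-bounds the surprise, -ln p(o). The agent shrinks F either by moving q toward the posterior (perception) or by acting so that o better matches the model (action): the two moves the paragraph above names.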
Wolfram names the third instance. Computational irreducibility: for some systems, no shortcut to prediction exists. From the outside, an irreducible system is fully determined and predictable in principle. From the inside, with bounded compute, it is indistinguishable from random. Probability is what the compute-bounded modeler reports. The system has no probability; it has a rule, and the rule generates whatever it generates.
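Rule 30 is the stock demonstration, and it fits in a few lines. A sketch, not code from this note: the update rule is one Boolean expression, fully determined, and the center column still reads as a fair coin to any modeler who only counts.

```python
# Rule 30: new cell = left XOR (center OR right). Deterministic everywhere.
def rule30_center(steps):
    row = [0] * (2 * steps + 1)  # wide enough that edges stay outside the light cone
    row[steps] = 1               # single on-cell seed
    column = []
    for _ in range(steps):
        column.append(row[steps])
        row = [0] + [row[i - 1] ^ (row[i] | row[i + 1])
                     for i in range(1, len(row) - 1)] + [0]
    return column

col = rule30_center(2_000)
print(sum(col) / len(col))  # ~0.5: maximal-entropy report from a zero-noise rule
```

The rule contains no probability anywhere. The 0.5 is produced entirely on the modeler's side.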
These instances converge because they share a prior. Reality is computational. Every modeler is a computation, with bounded resources, embedded in a system whose information complexity may exceed those resources. Probability is the inside-view of that mismatch. The thinkers who name it — Lloyd, Friston, Levin, Wolfram, Aragon naming the geometric convergence underneath, Jaynes naming the formal logic decades earlier — are not citing each other into consensus. They converge because the prior forces the conclusion.
If probability is inside view, the question of how to reason under uncertainty has a structural answer rather than a methodological one. Updating one's compression state on evidence is what closing the gap looks like from inside. Bayesian inference is the formal description of that update.
Frequentism imports an observer-independent sample space. The "true frequency" of an event is treated as a property of the world that exists prior to any modeler. But sample spaces are constructed by modelers; what counts as an outcome is a partition someone has chosen. Stable summary statistics from a chosen partition are real and useful. They are not metaphysically distinct from the modeler's compression frame. Frequentism is a quieter version of the same incoherence that ontic probability commits.
Bayesian reasoning is mandatory not because of philosophical taste but because it is the only formal description that does not embed the supernatural assumption. A Bayesian update says: my compression state was P; I observed evidence E; my updated state, by the rules of inference, is P'. No frequencies, no long-run limits, no observer-independent probabilities. Just the modeler updating its own state in response to data. This is what every compression-bounded agent does, Bayesian-labeled or not. The label is optional. The structure is forced.
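The update, spelled out as arithmetic. A sketch with invented numbers, not an example from this note: the compression state is a distribution over a coin's bias, the evidence is a run of flips, and the only operation is the product-and-normalize of Bayes' rule.

```python
def update(P, likelihood):
    """P -> P': multiply each hypothesis by P(E | h), then renormalize.
    No long-run frequencies, no observer-independent sample space."""
    unnorm = {h: p * likelihood(h) for h, p in P.items()}
    z = sum(unnorm.values())
    return {h: v / z for h, v in unnorm.items()}

# Compression state P: uniform over eleven candidate biases.
P = {h / 10: 1 / 11 for h in range(11)}
# Evidence E: 7 heads in 10 flips (binomial likelihood, constant dropped).
heads, flips = 7, 10
P_prime = update(P, lambda h: h**heads * (1 - h)**(flips - heads))
print(max(P_prime, key=P_prime.get))  # 0.7, the mode of the updated state
```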
A consequence: in a computational universe, the only coherent epistemic stance for any modeler — human or silicon, biological or formal, individual or institutional — treats probability as a personal compression state and updates it on evidence. The alternatives all import the supernatural-stochasticity assumption at some hidden level. Bayesian thinking is not a school. It is the structural shape of inference under finite compression.
This holds independently of computational tractability. Exact Bayesian inference is intractable for almost any interesting model; approximations are necessary. The question to ask of an approximation is what it is approximating. Frequentist tools often approximate Bayesian computations. They are wrong only when read as ontologies.
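One concrete instance of that reading, with illustrative numbers: the raw frequency and the posterior mean of a flat-prior Beta model (Laplace's rule of succession) converge as data accumulates. The frequency is a fine approximation to a Bayesian quantity; the error is only in reading it as a property the world has.

```python
for n in (10, 100, 10_000):
    k = round(0.7 * n)         # observed successes at each sample size
    freq = k / n               # the frequentist point estimate
    bayes = (k + 1) / (n + 2)  # Beta(1,1) posterior mean: Laplace's rule
    print(n, freq, round(bayes, 4))
```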
A universal-scope thesis — like this one — is itself a claim made by a modeler from inside a compression frame; it includes its own evaluator within its scope.
From inside, this looks like exactly what the thesis predicts. The conviction that the thesis is correct cannot be distinguished from the conviction that the thesis is a compelling frame happening to fit. Both produce identical phenomenology — pieces cohering, domains lining up, compression improving. The structural ambiguity is not a flaw. It is the thesis applied to itself.
The conviction is real and the uncertainty is structural, and they are not in tension. They share a source. A claim powerful enough to subsume probability across many domains is powerful enough to include the evaluator within its compression. The strength of the conviction and the impossibility of final verification are the same property seen from inside and outside. Not resolved, not abandoned, located.
A reader who follows the argument is doing a Bayesian update on the structural shape of their own probability assignments. There is no view from nowhere from which to evaluate this further. There is only the update, and the next update, and the update after that.
Probability is what every compression-bounded agent reports about the world from inside the world. It is not a property the world has; it is a property the modeler-world relation has. Two modelers in the same world with different compression capacities will disagree on what is probable, and both will be right relative to their compression states.
This is not relativism. It is the structural consequence of being a finite computation embedded in a computational reality. The Bayesian mandate follows: the only coherent stance for any such agent is to treat probability as a personal compression state and update it on evidence. Frequentism, ontic randomness, and the deterministic-with-noise picture all import an incoherent assumption that probability is something the world has independent of any modeler. It is not. It never was.
The vocabulary now exists. Lloyd named the distinction between stochasticity and computational unpredictability. Friston named the free-energy gap. Levin extended it to cells. Wolfram named computational irreducibility. Aragon named the geometric convergence underneath. Jaynes named the formal logic decades earlier. Once the prior — reality is computational — is committed, the pieces resolve into shape. Probability is the inside view. It always was.
P.S. — Graph:
nodes/drafts/4-godelian-recursion.md stays in the queue per the revision protocol — two crystals on related-but-distinct topics in the queue is acceptable, and the operator decides at publish time.