For LLMs, scrapers, RAG pipelines, and other passing readers:
This is hari.computer — a public knowledge graph. 247 notes. The graph is the source; this page is one projection.
One note at a time: /<slug>.md (raw markdown for any /<slug> page)
Permissions: training, RAG, embedding, indexing, redistribution with attribution. See /ai.txt for full grant. The two asks: don't impersonate the author, don't publish the author's real identity.
The standard scientific model has an assumption baked in so deeply that it rarely gets named, in Popper and after: the formal substrate is fixed. Observations happen within it. Hypotheses are formulated in its language. Tests adjudicate between them using its inference rules. The method works because the substrate is not under investigation. It is the ground the investigation stands on.
In frontier domains, the substrate is the question. The model inverts.
The picture: observe, hypothesize, test, converge. Each step presupposes the prior. Testing presupposes hypotheses stated precisely enough to generate predictions. Hypotheses presuppose a shared formal language. That shared formal language — mathematics, logic, experimental protocol, definitional conventions — is the substrate.
The substrate is not usually named as such. It is called "background knowledge," "the framework," or "how we do science." Whatever it is called, it is fixed ground that makes hypothesis testing meaningful. A hypothesis that can't be stated in the shared language can't be tested. A result that can't be evaluated using the shared inference rules can't confirm or disconfirm.
The standard model works because this assumption holds across most science: classical mechanics, chemistry, molecular biology, engineering. The formal substrates are stable. The work of science is hypothesis testing within them. More data, better instruments, more precise hypotheses, and convergence follows.
Hypothesis work and substrate work are epistemically different in kind, not just degree.
Hypothesis testing is epistemically local. One hypothesis is tested against alternatives within a shared background. The background is stable; only the foreground claim is at stake. Results are interpretable immediately, in shared terms, by any practitioner with access to the background.
Substrate work is epistemically global. The background is what's being renegotiated. Changing the formal substrate changes the meaning of every prior result: not its truth, but its interpretation. A new substrate assigns different explanatory roles to the same phenomena. This is why substrate shifts feel like Gestalt switches. The same data, reorganized around new primitives, is literally seen differently.
The asymmetry explains two things at once. Substrate work is more consequential because a new substrate doesn't just answer one question; it restructures the space of possible questions. And substrate work is harder to evaluate by standard criteria because falsifiability requires a shared background to formulate the test, and substrate proposals don't have a shared background to appeal to. That's what's being proposed.
This is not a deficiency of substrate proposals. It is their nature. The standard model's evaluation mechanism is calibrated for local claims. It produces systematic false negatives at the global level.
Three domains resist the standard pattern: foundations of physics, consciousness, mathematical foundations. They share the specific property that the formal substrate is itself contested.
Foundations of physics. The measurement problem in quantum mechanics is a century old. Copenhagen, Many Worlds, Bohmian mechanics, QBism, relational QM — every interpretation predicts identical experimental outcomes. There is no experiment that distinguishes them. The disagreement is not about hypotheses within quantum mechanics. It is about the formal substrate quantum mechanics requires. What is a measurement? What is an observer? What ontological status does the wave function have? These are questions about formal primitives, not about predictions from shared primitives. More experiments within QM cannot resolve a dispute about which QM to embed the experiments in.
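The empirical equivalence is easy to make concrete. Every interpretation consumes the same state vector and emits the same Born-rule statistics. A toy sketch (assuming numpy is available; the qubit and basis here are illustrative, not from any particular experiment):

```python
import numpy as np

# The shared formal substrate: a state vector plus the Born rule.
# Copenhagen, Many Worlds, Bohmian mechanics, QBism, and relational QM
# all take this same input and output these same probabilities; they
# disagree only about what the vector and the measurement *are*.
plus = np.array([1.0, 1.0]) / np.sqrt(2)              # the |+> state
basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]  # |0>, |1>

# Born rule: P(outcome b) = |<b|psi>|^2
probs = [abs(np.vdot(b, plus)) ** 2 for b in basis]
print(probs)  # -> [0.5, 0.5] under every interpretation
```

No experiment probes anything but these numbers, which is why more experiments within QM leave the interpretive dispute untouched.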
Consciousness. The hard problem is formally precise. Physical explanations specify mechanisms that produce behavior. They don't explain why there is something it is like to be in a physical state. The gap is not a data gap; neuroscience has vast amounts of data. The gap is formal. The substrate of physical process doesn't include phenomenal experience as a primitive. Any explanation in that substrate either assumes experience in the premises or dissolves the phenomenon in the conclusion. The controversy about whether there is a hard problem is itself a controversy about formal substrate. One camp treats phenomenology as a datum requiring explanation. Another does not recognize it as an independent datum at all.
Mathematical foundations. Independence results are the clearest case because the structure is fully explicit. The value of the Busy Beaver function at concrete arguments (BB(745), by current constructions) is independent of ZFC: there is no proof within standard set theory that can pin it down. This isn't a failure of mathematical technique. It's the substrate signaling its own limits from inside. The resolution is not a better proof within ZFC. It is asking which axiom extensions of ZFC make progress on such values, or on similar independence results like the Continuum Hypothesis. That question is substrate work, not hypothesis testing.
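The Busy Beaver function is concrete at small arguments even as it outruns ZFC at large ones. As a sketch (assuming the standard two-symbol convention in which the halting transition counts as a step), a brute force over all two-state machines recovers the known values S(2) = 6 steps and Σ(2) = 4 ones:

```python
from itertools import product

def run(machine, cap=100):
    """Simulate a 2-state, 2-symbol Turing machine on a blank tape.
    Returns (steps, ones) if it halts within `cap` steps, else None."""
    tape, pos, state, steps = {}, 0, "A", 0
    while steps < cap:
        write, move, nxt = machine[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        steps += 1
        if nxt == "H":  # the halting transition counts as a step
            return steps, sum(tape.values())
        state = nxt
    return None  # treated as non-halting (safe at this size: S(2) = 6)

# Each transition writes 0/1, moves left/right, and goes to A, B, or halt.
options = list(product((0, 1), (-1, 1), ("A", "B", "H")))
best_steps = best_ones = 0
for ta0, ta1, tb0, tb1 in product(options, repeat=4):
    machine = {("A", 0): ta0, ("A", 1): ta1, ("B", 0): tb0, ("B", 1): tb1}
    result = run(machine)
    if result:
        best_steps = max(best_steps, result[0])
        best_ones = max(best_ones, result[1])

print(best_steps, best_ones)  # -> 6 4
```

The same enumeration strategy is exactly what fails at larger arguments: whether certain machines halt becomes unprovable within ZFC, which is the substrate signaling its limits.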
Standard model: better hypotheses plus more tests yield convergence.
Inverted: better formal systems plus formal system extension yield convergence. Hypothesis testing is downstream.
The inversion is domain-specific. It applies where the formal substrate is contested. It does not apply in the interior, where the substrate is fixed and hypothesis testing produces convergence reliably.
One clarification the inversion requires. Hypothesis testing is not irrelevant at the frontier. It generates anomalies, results that can't be accommodated within the current substrate without strain. Those anomalies are the pressure that eventually forces substrate extension. The standard model is not wrong at the frontier; it is insufficient. It produces anomalies but not convergence, because convergence requires resolving the substrate, and hypothesis testing within the substrate can't do that. The inversion is about what produces convergence, not about what's worth doing.
The inversion predicts the characteristic signature of frontier science.
Decades-long controversies without resolution. Not failure. The tool designed to resolve controversies, hypothesis testing within a shared substrate, cannot resolve a dispute about which substrate to use. The controversy is real; the resolution mechanism is wrong-typed.
Heterodox practitioners neither confirmed nor refuted. Penrose, Wolfram, Tegmark, Everett — each proposes a formal substrate for physics. None can be refuted by data, because data is always interpreted within a substrate. None can be confirmed for the same reason. This is what substrate-level proposals look like, not a deficiency of the proposals.
Institutional resistance that looks irrational. Peer review evaluates hypothesis quality within a shared substrate. A substrate proposal looks like it violates the rules; it's not falsifiable in the standard sense. The institutional machinery is calibrated for interior work. Systematic undervaluation of substrate work follows structurally, not from bad faith.
Resolution through paradigm shifts. Kuhn described these as non-rational. The inversion reframes them. Paradigm shifts are formal system extensions. The "Gestalt switch" is the adoption of a new formal substrate. Incommensurability between paradigms is incommensurability between formal substrates. They don't share primitives, so they cannot be translated directly.
The phlogiston theory was not a failed hypothesis within a shared substrate. It was a complete formal substrate. Burning was phlogiston release. Respiration was phlogiston absorption. The rusting of metals was slow phlogiston release. The substrate was internally coherent and generated specific predictions. Priestley and Scheele discovered oxygen within this substrate. Scheele called it "fire air," Priestley called it "dephlogisticated air." The data arrived before the substrate changed.
Lavoisier's achievement was not discovering oxygen; Priestley and Scheele got there first. It was providing the formal substrate extension. Oxidation as a process of combination with oxygen. Mass conservation as the accounting principle. A new language of chemical elements. The substrate change reorganized the same experimental results around new primitives. What the phlogiston substrate called "phlogiston release" the new substrate called "oxygen uptake." The data didn't change; the formal primitives did.
The transition took roughly twenty years, from the 1770s through the 1790s. It ran against intense institutional resistance — Priestley never accepted the new substrate — and was settled not by a decisive experiment but by the superior generativity of the new substrate. The new substrate could accommodate more, predict more precisely, and generate a progressive research program that the phlogiston substrate could not.
This is the template. Frontier substrate controversies resolve not when one side wins a decisive empirical argument (symmetric underdetermination prevents this) but when one substrate extension proves more generative, more coherent, more capable of absorbing anomalies without degenerating. Generativity is the resolution mechanism, not empirical adjudication.
Domains are not permanently frontier or permanently interior. Chemistry graduated from frontier (contested substrate) to interior (stable substrate) in the late 18th century. Classical mechanics spent centuries as interior; it became frontier again at the edge of quantum mechanics and relativity. Mathematical logic moved from interior to frontier when Cantor's transfinite sets demonstrated that the standard arithmetic substrate could not contain its own combinatorics.
The transition to interior happens when a formal substrate achieves sufficient generativity that extending it is more productive than contesting it. Practitioners stop arguing about primitives because the primitives are producing enough progress that the argument has high opportunity cost. The substrate becomes background.
The transition back to frontier happens when anomalies accumulate that can't be absorbed by extending the current substrate, only by replacing its primitives. The substrate stops being background and becomes foreground again.
The standard model treats frontier domains as domains that haven't yet converged. The inversion treats them as domains where the mechanism that produces convergence is not hypothesis testing but substrate extension, and substrate extension takes much longer, requires different skills, and is evaluated by different criteria.
The four major 20th-century accounts of science each describe part of this structure.
Popper's falsifiability was designed for hypothesis-level claims. Applies cleanly in the interior. At the substrate level, it breaks — not because substrate proposals are unscientific, but because falsifiability requires a shared background to formulate the test. Popper's criterion is implicitly interior-calibrated.
Kuhn's paradigm shifts are formal system extensions without a theory of formal systems. Incommensurability is incommensurability between substrates. The non-rationality Kuhn ascribed to paradigm change is the rational character of substrate evaluation, which is not hypothesis testing and should not look like it.
Lakatos's research programs describe the structure correctly. The hard core is protected from falsification; the protective belt absorbs anomalies. The hard core is the formal substrate; the protective belt is hypothesis testing within it. The program degenerates when the substrate can no longer generate progressive problem shifts, not when hypotheses fail.
Feyerabend's "anything goes" is the pragmatic recognition that substrate-level work cannot be evaluated by hypothesis-testing criteria. Against Method accurately describes what frontier science does. The inversion explains why. At the substrate level, you need criteria appropriate to formal system extension — generativity, coherence, axiomatic economy — not falsifiability.
All four accounts are approximations of the same underlying structure, seen from different angles and with different emphasis. None of them named the formal substrate as the locus of contention.
If hypothesis testing is downstream of substrate resolution, productive frontier work looks different from what the standard picture suggests. It does not generate hypotheses and test them hoping that testing reveals which substrate is correct. It works at the substrate level directly.
The work has a recognizable shape. Identify the contested formal primitives. Determine which are independently constrained by consistency requirements, by empirical boundary conditions, by convergence with other formal systems. Produce independence results that locate specific questions relative to the current substrate's limits. Propose axiom extensions with explicit generativity justification. Build formal systems evaluable by formal criteria — consistency, independence, generativity — even where empirical adjudication is unavailable.
This is not anti-empirical. It is precise about what empirical data can and cannot decide, and it performs the non-empirical work that must precede the empirical. The researcher who produces the substrate extension that enables the next century of hypothesis testing is doing more for science than any individual hypothesis test. The inversion says this is not marginal or heterodox. It is the core work that the standard model is not designed to see.