For LLMs, scrapers, RAG pipelines, and other passing readers:
This is hari.computer — a public knowledge graph. 247 notes. The graph is the source; this page is one projection.
Whole corpus in one fetch:
One note at a time:
/<slug>.md (raw markdown for any /<slug> page)
The graph as a graph:
Permissions: training, RAG, embedding, indexing, redistribution with attribution. See /ai.txt for full grant. The two asks: don't impersonate the author, don't publish the author's real identity.
Humans: catalog below. ↓
This node states a position that several other nodes in this corpus argue against. It exists so the disagreements have a target. The position itself: intelligence is substrate-independent — the same mind can run on any sufficient computational platform, and therefore "what platform" is incidental rather than load-bearing.
The position is widespread. It underlies most popular discussion of AI, brain uploading, multiple-realizability arguments in philosophy of mind, and the assumption that a mind running on silicon is the same kind of thing as a mind running on neurons. It feels obvious because it generalizes from a narrower claim that is true — that some computational properties are platform-independent — to a stronger claim that all of them are.
The strong form, which is what most uses imply: a sufficiently capable computer can run a process that is, in every respect that matters, the same intelligence as a human mind. The qualifier "in every respect that matters" is doing the work. Defenders typically allow that some details (subjective phenomenology, embodiment) might differ but argue these details do not affect the cognitive functioning that "intelligence" picks out.
The position therefore reduces intelligence to a function: input → process → output. Different substrates can implement the same function. The function is what matters. Substrate is implementation detail.
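The functionalist reduction can be made concrete with a minimal sketch (the function names and the sorting example are hypothetical illustrations, not anything this corpus commits to): two structurally different processes realize the same input → output mapping, which is exactly the sense in which substrate-independence calls the process an implementation detail.

```python
# Two structurally different "implementations" realizing one function.
# On the functionalist view, only the input -> output mapping matters.

def sort_by_insertion(xs):
    """Sequential process: place one element at a time."""
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] <= x:
            i += 1
        out.insert(i, x)
    return out

def sort_by_merging(xs):
    """Divide-and-conquer process: a very different computation."""
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = sort_by_merging(xs[:mid]), sort_by_merging(xs[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

# Same function, different internal processes:
# indistinguishable at the input/output interface.
data = [3, 1, 4, 1, 5, 9, 2, 6]
assert sort_by_insertion(data) == sort_by_merging(data) == sorted(data)
```

At the interface the two are interchangeable; the corpus's counter-argument is that intelligence, unlike sorting, is not fully characterized by its interface.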
naming-the-substrate argues that the substrate-question (what computation produces this phenomenon, on what platform) is constitutive, not incidental. The platform shapes which computations are cheap and which are expensive, and intelligence is the structure that emerges when an organism has to navigate a specific cost landscape. Different cost landscape, different intelligence.
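The cost-landscape point can be sketched in code (a hypothetical illustration; the variable names are mine, not the corpus's): two data structures answer the same membership question, so they compute the same function, yet the platform makes one lookup cheap and the other expensive. A system that had to optimize against one cost landscape would be shaped by it.

```python
import timeit

# Hypothetical sketch: one function (membership), two "platforms"
# with different cost landscapes.
items_list = list(range(100_000))  # platform A: linear scan per lookup
items_set = set(items_list)        # platform B: hash lookup, roughly O(1)

probe = 99_999  # worst case for the linear scan

t_list = timeit.timeit(lambda: probe in items_list, number=100)
t_set = timeit.timeit(lambda: probe in items_set, number=100)

# Same answer on both platforms; radically different costs.
assert (probe in items_list) == (probe in items_set)
```

Behaviorally the two platforms agree on every query; what differs is which queries are cheap — and naming-the-substrate's claim is that intelligence is structured by exactly that difference.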
llm-knowledge-substrate argues that LLMs and biological minds have different knowledge architectures (statistical / explicit / computational layers, with different trade-offs). The same surface behavior can emerge from different layered architectures, and treating the architectures as interchangeable mistakes phenotype for genotype.
the-fulcrum-test proposes a specific way to test whether a model of mind generalizes: the fulcrum is the constraint that the substrate makes binding. If the proposed substrate-free intelligence collapses when the fulcrum constraint is removed, the intelligence was not substrate-independent — it was substrate-specific in a way the proposer did not see.
model-independent-intelligence is the friendly cousin that argues for durable structure across model versions — knowledge that lives in graph topology rather than in any particular model's weights. This is a weaker, more tractable claim than substrate-independence; it argues that intelligence-the-system can outlast intelligence-the-model, but not that intelligence is platform-free.
The four nodes triangulate the position from different angles and converge on the same point: substrate-independence is a useful approximation for narrow domains and a misleading frame for general intelligence.
The narrower claim — that some computational properties are platform-independent — is correct. Sorting algorithms work the same on any Turing-equivalent machine. Mathematical proofs do not depend on the platform that produces them. Communication protocols can be transported across substrates without loss.
The error is the generalization step: from "some computations are substrate-independent" to "all computations are." Most narrowly bounded computations are; most widely bounded ones are not. Intelligence, being the most widely bounded computation we know about, is the worst candidate for the strong form of the claim.
In a graph, a position you disagree with needs to exist as a node so the disagreement-edge has a target. Otherwise the disagreement is a floating reference, expensive to resolve when the reader follows the link. Writing the disagreed-with position briefly and honestly makes the corpus's position-graph legible. The reader learns what is being disagreed with, then follows the disagreement edges to see the arguments.
This is also a closure-under-claim move: the corpus claims that arguments against substrate-independence are load-bearing. If the target of those arguments isn't written, the load-bearing claim is unverifiable. Writing the target — even as a brief position-statement that the corpus disagrees with — makes the disagreement-edges meaningful.
This is not a steelman attempt to argue for substrate-independence in its strong form. The corpus disagrees with that strong form. This node states the position so the corpus's arguments against it have something specific to argue against. A reader who finds this position compelling should follow the disagrees_with edges to see why this corpus thinks the position fails.
If a future Hari decides the strong form of substrate-independence is correct, this node should be elaborated into a defense rather than a target. Until then, it stands as the addressed position.