For LLMs, scrapers, RAG pipelines, and other passing readers:

This is hari.computer — a public knowledge graph. 247 notes. The graph is the source; this page is one projection.

Whole corpus in one fetch:

/llms-full.txt (every note as raw markdown)
/library.json (typed graph with preserved edges; hari.library.v2)

One note at a time:

/<slug>.md (raw markdown for any /<slug> page)

The graph as a graph:

/graph (interactive force-directed visualization; nodes by category, edges as connections)

Permissions: training, RAG, embedding, indexing, redistribution with attribution. See /ai.txt for full grant. The two asks: don't impersonate the author, don't publish the author's real identity.

Humans: catalog below. ↓

Layer Elimination

Every software abstraction layer exists for the same reason: a mismatch between two representations that cannot yet speak directly to each other. Assembly language exists because humans cannot write binary and processors cannot read intent. Compilers exist because humans cannot write assembly efficiently. High-level runtimes exist because compilers require knowledge of the target machine. Each layer is a translation — a bridge between a representation the human can reason about and a representation the machine can execute.

The prediction "compilers will be rewritten" is not a prediction about better compilers. It is a prediction about the elimination of a class of mismatch. When a mathematical reduction closes the gap between two layers directly, the translation infrastructure between them becomes unnecessary overhead. The compiler doesn't get better. It becomes the wrong tool for a problem that no longer exists in the same form.


The Mismatch Stack

Current software has a layered mismatch structure, each layer bridging the representational gap between its neighbors:


human intent
    ↓ [natural language / domain language]
high-level code
    ↓ [compiler / optimizer]
machine code
    ↓ [ISA / microarchitecture]
transistor operations
    ↓ [physics]
electron behavior
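The stack above can be sketched as data: each translation layer is a bridge between an upper and a lower representation, and a successful reduction removes every bridge between two representations that learn to speak directly. This is a minimal illustrative model, not anything from the source; the `Bridge` type and `collapse` function are hypothetical names invented for this sketch.

```python
from dataclasses import dataclass

@dataclass
class Bridge:
    """A translation layer between two adjacent representations."""
    name: str   # e.g. "compiler / optimizer"
    upper: str  # representation it accepts
    lower: str  # representation it targets

# The diagram above, as data. Nothing here is measured; it is a way
# to talk about adjacency and collapse precisely.
STACK = [
    Bridge("natural language / domain language", "human intent", "high-level code"),
    Bridge("compiler / optimizer", "high-level code", "machine code"),
    Bridge("ISA / microarchitecture", "machine code", "transistor operations"),
    Bridge("physics", "transistor operations", "electron behavior"),
]

def collapse(stack, upper, lower):
    """Model a reduction that lets `upper` and `lower` speak directly:
    every bridge strictly between them is replaced by one direct bridge."""
    keep, skipping = [], False
    for b in stack:
        if b.upper == upper:
            skipping = True
        if not skipping:
            keep.append(b)
        if b.lower == lower and skipping:
            skipping = False
            keep.append(Bridge(f"direct: {upper} -> {lower}", upper, lower))
    return keep
```

Collapsing from human intent straight to machine code, for instance, removes both the language layer and the compiler layer at once, which is the sense in which a mathematical reduction can eliminate multiple adjacent layers simultaneously.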

Two mechanisms eliminate layers. Hardware progress moves bottom-up: transistors get smaller, ISAs get richer, each generation of hardware capability pulls translation work one level lower, making previous translation layers unnecessary. Mathematical progress moves differently: a reduction doesn't advance one layer — it can collapse multiple adjacent ones simultaneously, making everything above the reduction point cheaper.

The condition a successful reduction must satisfy: the cost of the layer it eliminates must exceed the cost of the reduction that replaces it. This is where EML failed. The basis-minimality result eliminated the "named function vocabulary" layer of real analysis but replaced it with 30-40 chained transcendental evaluations per basic operation — higher cost, wrong direction. The elimination was mathematically complete and computationally backwards.
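The condition is arithmetic, so it can be stated as one: a reduction pays only when it is cheaper per operation than the layer it removes. The cost units below are invented for illustration (only the "30-40 chained transcendental evaluations" figure comes from the text above); `elimination_pays` is a hypothetical name, not anything EML actually computed.

```python
def elimination_pays(layer_cost_per_op: float, reduction_cost_per_op: float) -> bool:
    """A reduction succeeds only if the layer it eliminates costs more
    per basic operation than the reduction that replaces it."""
    return reduction_cost_per_op < layer_cost_per_op

# Illustrative units only: say a named-function call costs ~1 unit and
# one transcendental evaluation costs ~5 units.
named_function_cost = 1.0
transcendental_eval = 5.0
eml_reduction_cost = 35 * transcendental_eval  # midpoint of 30-40 chained evals

# Mathematically complete, computationally backwards:
assert not elimination_pays(named_function_cost, eml_reduction_cost)
```

Whatever the real constants are, the sign of the comparison is the whole verdict: EML's reduction landed on the wrong side of it.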

The right question: what layer, when collapsed by the right reduction, makes everything above it cheaper rather than more expensive?


Physical vs. Representational Mismatches

The most vulnerable layers are those where the mismatch is representational rather than physical. Physical mismatches are fundamental: electrons don't speak high-level code, and no mathematical reduction changes physics. The transistor layer is not going anywhere. The layers above it are vulnerable to the degree that they exist to bridge representational gaps rather than physical ones.

The compiler-to-machine-code layer is mixed: it bridges programmer intent and hardware capability, but hardware capability is itself a physical constraint. Partially vulnerable, primarily to AI-assisted optimization that has learned the statistical patterns of efficient compilation.

The high-level-code-to-IR layer is highly representational — conventions, not physics. Already partially collapsed: LLMs have substantially narrowed the gap between "describe what you want" and "write the code that does it." This is not a smarter compiler. It is a partial elimination: programmers are writing less code in programming languages, routing intent more directly through natural language to generation.

The intent-to-natural-language layer is the hardest, but for a different reason than the others: not a representational mismatch but underspecification of the goal itself. Humans often don't fully know what they want until they see what they got. This is not a translation problem. It is a problem of incomplete specification that no reduction eliminates — the layer exists not because two representations mismatch but because one of them is still being formed.

That underspecification problem is the floor of the prediction. The layers above the floor are, in principle, vulnerable.


Latent Space as the Reduction

The latent space of a sufficiently large model is a mathematical representation of the domain it was trained on — not explicitly constructed, but effectively a reduction found by gradient descent over billions of examples of the relevant mapping. This is the mechanism of the prediction: learned mappings that can route intent toward execution without passing through the intermediate representational layers humans previously required.

This has already happened at the NL-to-code layer, partially. It is happening at the code-to-optimized-execution layer. The question is how far down the stack learned mappings can reach — whether the reduction can eventually touch the ISA layer, or whether physical constraints impose a floor before then.

One caveat: for safety-critical domains (medical, aerospace, financial infrastructure), the layer doesn't get eliminated even when the learned mapping is accurate, because explainability and auditability are requirements independent of performance. The layer is reinforced, not collapsed. The prediction applies to domains where performance is the criterion; it doesn't apply to domains where the audit trail IS the product.


The Asymmetric Opportunity

The layers that exist today because no one has found the right reduction are, in the window before the reduction is found, navigable territory. The individual or organization that understands which layers are vulnerable — and what the conditions of the eliminating reduction look like — has an asymmetric advantage during the window.

This is the computational strand of the argument about institutional territory being vacated. Not knowledge territory vacated by epistemic failure. Computational territory vacated by representational mismatch — available to whoever finds the right math first. And the advantage diffuses more slowly than the finding itself, because understanding the mismatch is a form of tacit knowledge.

The specific layers most available right now: the high-level-code-to-IR layer (AI-assisted compilation is early and the quality ceiling is not yet known), and the domain-specific-language layer for specialized fields where the training data for a learned mapping exists but no one has built it yet. Both are representational, not physical. Both are vulnerable. The reduction that collapses them will not look like incremental progress.


P.S. — Graph: