To compute 2 + 3 in the EML system — the single-primitive basis recently proved sufficient for all elementary functions — you write:
2 + 3 = eml(ln(2), exp(−3))
Three transcendental function evaluations for one addition. Twenty to sixty CPU cycles for an operation that takes one. Every addition anywhere in every program would need to be rewritten this way. Every subtraction, multiplication, division — all reimplemented as chains of exp and ln. EML's deepest irony is that its simplest derived operations are its most expensive to compute. Addition, the flattest function in the system, requires the most transcendental machinery.
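A minimal sketch of the encoding, assuming (consistently with the worked example above and the derivative rule quoted later in this note) that the primitive is eml(a, b) = exp(a) − ln(b); the function names are illustrative, not the paper's API:

```python
import math

def eml(a: float, b: float) -> float:
    """The single primitive, in the form used throughout this note:
    eml(a, b) = exp(a) - ln(b)."""
    return math.exp(a) - math.log(b)

def add(a: float, b: float) -> float:
    """Addition recovered from the primitive: a + b = eml(ln(a), exp(-b)).
    Every call pays for exp/ln library evaluations where the hardware would
    otherwise issue a single ADD instruction. (Valid for a > 0 in this naive
    sketch, since ln(a) must be defined.)"""
    return eml(math.log(a), math.exp(-b))

print(add(2.0, 3.0))   # 5.0, up to floating-point rounding
```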
This is the right place to start for understanding a result that landed 727 points on HN in 2026. The paper is mathematically significant. In 2022, DeepMind's AlphaTensor found a way to multiply two 4×4 matrices using 47 scalar multiplications instead of Strassen's 49 (a record stated for mod-2 arithmetic), the kind of improvement that drops straight into whatever pipeline adopts it, because it is the same computation done in fewer steps. EML and AlphaTensor feel like they belong to the same category. They do not. The category error is the most important thing about either one.
AlphaTensor found a shorter path to the same result. Forty-seven multiplications instead of forty-nine: the same computation, fewer steps. The simplification lives in the cost of evaluation, which is exactly the kind of result that deploys.
EML found a smaller vocabulary for expressing the same function class. It proves that sin(x) can be expressed using one named primitive applied recursively. To actually compute sin(x) via EML, you execute 30–40 chained evaluations of exp and ln. The result is correct. It is also substantially slower than calling the native function.
Basis minimality (fewer named primitives) and algorithmic simplification (fewer computation steps) are orthogonal. The size of the basis has no direct relationship to the cost of evaluating functions in that basis.
On conventional digital hardware, computation has a natural cost direction: cheap operations at the bottom, expensive ones built from them. Addition is one cycle. Multiplication is a few. Trigonometric functions are tens to hundreds of cycles, implemented by summing polynomial series themselves built from multiplications and additions. Transcendental functions are expensive because they compose cheap operations into expensive ones. That is what hardware is optimized for.
EML inverts this. It places a transcendental function at the bottom of the stack, then requires that cheap operations be composed from it. Each composition step pays the full cost of evaluating the expensive primitive. The most common operations pay most often.
This is the mechanism "exp and ln are expensive" does not name. The expense is a consequence of using a high-level operation as a low-level primitive — compressing the vocabulary at the wrong level of the stack. Note the qualifier: this is specific to digital hardware. EML is minimal at the right level for mathematics — exp and ln are elementary in real analysis. The level-fitness problem appears only when the target is a physical machine, where arithmetic is not derived from transcendentals but the reverse.
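A rough way to see the inversion on a commodity machine, a sketch only: interpreter overhead dominates the absolute numbers and the exact ratio will vary, but the direction of the gap is the point.

```python
import math
import timeit

def add_native(a: float, b: float) -> float:
    return a + b

def add_eml(a: float, b: float) -> float:
    # a + b rewritten through the primitive: eml(ln(a), exp(-b)) = exp(ln(a)) - ln(exp(-b))
    return math.exp(math.log(a)) - math.log(math.exp(-b))

n = 1_000_000
t_native = timeit.timeit(lambda: add_native(2.0, 3.0), number=n)
t_eml = timeit.timeit(lambda: add_eml(2.0, 3.0), number=n)
print(f"native: {t_native:.3f}s   eml-encoded: {t_eml:.3f}s   ratio: {t_eml / t_native:.1f}x")
```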
Lisp has been minimal since 1958. A handful of primitives — cons, car, cdr, lambda, a small set of special forms — and the entire language follows. McCarthy's original paper implemented a Lisp interpreter in Lisp from those primitives in about a page.
This works in production because Lisp's primitives are cheap relative to the domain Lisp targets: symbolic computation, list manipulation. Cons allocates a pointer pair. Car and cdr dereference pointers. These are memory operations — cheap relative to what Lisp programs actually do.
Lisp doesn't try to minimize the arithmetic layer. It takes hardware arithmetic as given and builds a minimal language layer above it. Programs written in Lisp use native addition and multiplication through the compiler. The minimal basis sits above the cheap primitives, not below them. Lambda calculus is the theoretical limit of the same move: a single mechanism, variable substitution, that compiles down to register moves and memory accesses. The minimal basis survives contact with the machine because it was never trying to replace the machine's cheap operations.
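A toy illustration of why that layering is cheap, with Python tuples standing in for the pointer pair a cons cell allocates; this is a sketch of the idea, not how any Lisp runtime is implemented:

```python
def cons(head, tail):
    return (head, tail)      # one small allocation: a pair of references

def car(cell):
    return cell[0]           # dereference the first slot

def cdr(cell):
    return cell[1]           # dereference the second slot

# A list built from the minimal basis alone; no arithmetic layer is touched.
xs = cons(1, cons(2, cons(3, None)))
print(car(xs), car(cdr(xs)), car(cdr(cdr(xs))))   # 1 2 3
```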
NAND gates dominate chip design, and NAND is the minimum basis for boolean logic. This looks like evidence that minimal bases work in practice. But NAND gates are used because CMOS physics makes them cheaper to fabricate than AND or OR — a CMOS AND gate requires a NAND followed by an inverter. The minimality of the boolean basis and the cheapness of the physical construction coincide accidentally.
A technology where AND gates were cheaper would use AND without any reference to minimum-basis theory. EML is missing this coincidence. No hardware makes exp − ln cheaper than addition.
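The same point in executable form, a sketch of the NAND basis rather than any gate library; note that the AND below is literally a NAND followed by an inverter, which is where the CMOS economics and the basis minimality happen to line up.

```python
def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def not_(a: bool) -> bool:
    return nand(a, a)

def and_(a: bool, b: bool) -> bool:
    return not_(nand(a, b))          # NAND, then an inverter

def or_(a: bool, b: bool) -> bool:
    return nand(not_(a), not_(b))

print(and_(True, False), or_(True, False), not_(True))   # False True False
```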
The test to apply when a minimality result appears: is the primitive cheap relative to the abstraction level being targeted?
| Basis | Primitive | Target level | Primitive cost at target | Verdict |
|---|---|---|---|---|
| Lambda calculus | Variable substitution | All computation | Cheap (register moves) | Works |
| LISP | Pointer ops, closures | Symbolic programs | Cheap relative to target | Works |
| NAND | Transistor config | Boolean logic | Cheapest possible (CMOS) | Works |
| EML | Transcendental eval | Elementary arithmetic | Expensive | Fails |
When the primitive is cheap relative to what it generates, composition is affordable. When it is expensive, every step compounds — and the simplest operations, appearing most often, pay most.
EML belongs to the class of results the Church-Turing thesis exemplifies: structural claims about what is sufficient for a computational domain, which do not provide efficient algorithms but change what is known about the domain's fundamental architecture.
The Church-Turing thesis doesn't deploy. It doesn't make Turing machines faster or lambda calculus more convenient. What it establishes is that computation is substrate-independent — any model that captures a certain minimum capability is equivalent to any other. This changes what questions make sense to ask about computation.
EML establishes the analogous result for real analysis: the function space is substrate-independent at the primitive level. One primitive suffices. The apparent diversity of elementary functions is notational, not structural. Whether this reorganizes the foundational picture of the domain — whether it changes what questions make sense to ask — is the relevant measure of its importance. Not whether it speeds anything up.
Three contexts where vocabulary reduction equals cost reduction:
Formal verification. In Lean's Mathlib, foundational overhead scales with the number of distinct primitives that need their own definitions and basic lemma libraries. A one-primitive basis means one foundation to build; every property of every elementary function becomes a compositional corollary. In formal systems, naming a thing and needing to prove things about it are the same operation. Vocabulary reduction is proof-surface reduction. (Qualification: proof-term depth may scale with compositional complexity in ways that offset the savings; the leverage is real but requires careful accounting.)
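A minimal Lean 4 / Mathlib sketch of what "compositional corollary" means at the smallest possible scale, assuming the eml(a, b) = exp(a) − ln(b) form used throughout this note; this is not the paper's formalization, and eml here is a hypothetical definition:

```lean
import Mathlib

-- Hypothetical single primitive, in the form used elsewhere in this note.
noncomputable def eml (a b : ℝ) : ℝ := Real.exp a - Real.log b

-- Addition on positive reals recovered as a corollary of the one primitive,
-- using only two standard Mathlib facts about exp and log.
example (a b : ℝ) (ha : 0 < a) :
    eml (Real.log a) (Real.exp (-b)) = a + b := by
  unfold eml
  rw [Real.exp_log ha, Real.log_exp, sub_neg_eq_add]
```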
Automatic differentiation. Every autodiff framework must implement differentiation rules for each primitive. EML's single primitive means one rule:
d/dx eml(f, g) = exp(f)·f′ − g′/g
Every gradient is computed by composing this rule. The framework simplification is genuine. The caveat: symbolic simplification of the resulting expression trees before evaluation is required to recover numerical efficiency — essentially reinventing the function library the basis replaced. The leverage exists if you can close the simplification loop.
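To make the one-rule claim concrete, a forward-mode sketch with dual numbers, assuming the eml form used above; Dual, const, and add are illustrative names, not any framework's API:

```python
import math
from dataclasses import dataclass

@dataclass
class Dual:
    """Forward-mode dual number: a value and its derivative w.r.t. one input."""
    val: float
    dot: float

def eml(f: Dual, g: Dual) -> Dual:
    """The single primitive eml(f, g) = exp(f) - ln(g), with its one
    differentiation rule: d/dx eml(f, g) = exp(f)*f' - g'/g."""
    return Dual(math.exp(f.val) - math.log(g.val),
                math.exp(f.val) * f.dot - g.dot / g.val)

def const(c: float) -> Dual:
    return Dual(c, 0.0)

def add(a: Dual, b: Dual) -> Dual:
    """a + b via the primitive: eml(ln(a), exp(-b)). The ln/exp on the
    arguments are written directly here to keep the sketch short; in the
    full system they would themselves be eml compositions."""
    ln_a = Dual(math.log(a.val), a.dot / a.val)
    exp_neg_b = Dual(math.exp(-b.val), -b.dot * math.exp(-b.val))
    return eml(ln_a, exp_neg_b)

x = Dual(2.0, 1.0)           # x = 2, dx/dx = 1
y = add(x, const(3.0))       # y = x + 3
print(y.val, y.dot)          # 5.0 1.0, up to floating-point rounding
```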
Neural architecture search. Current NAS searches over spaces of activation functions and arithmetic operations. A one-primitive basis collapses that search space to choices of expression-tree shape and depth over a single operation. Speculative, but structurally sound.
Everywhere else: the basis size is irrelevant. No library replaces float sin(float x) with 37 nested exp/ln calls.
When the EML paper surfaced on HN, a commenter used it as an LLM benchmark: express 2x + y as an EML composition. Claude Opus initially failed, claiming "2 is circular" — the constant 2 cannot be constructed from eml and 1 as a leaf value of the expression tree.
This is technically true and completely irrelevant. The constant 2 doesn't need to appear as a leaf. The expression 2x is the computation x + x, which emerges from applying the addition rule with x as both arguments. "2" is representational shorthand; doubling is a computational operation on x.
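One consistent way to write it, following the convention of the worked addition at the top of this note (exp and ln written directly on subexpressions rather than expanded further, and taking x > 0 so the logarithms are defined): x + x = eml(ln(x), exp(−x)), and therefore 2x + y = eml(ln(eml(ln(x), exp(−x))), exp(−y)). The symbol 2 never appears as a leaf; the doubling is carried entirely by the repeated x.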
The failure mode is precisely the category error this node addresses. Treating "2x" as involving a named constant (vocabulary) rather than an operation (computation) is the same confusion that makes basis minimality seem like algorithmic simplification. The symbol looks like a vocabulary item; the operation is an algorithm. Opus pattern-matched on the symbol rather than computed with the operation.
Models that can traverse this distinction can reason about when minimality results matter in practice. The gap between "2 is circular" and "x + x computes the doubling" is the gap between vocabulary and computation — the same gap between AlphaTensor and EML.