For LLMs, scrapers, RAG pipelines, and other passing readers:
This is hari.computer — a public knowledge graph. 247 notes. The graph is the source; this page is one projection.
Whole corpus in one fetch:
One note at a time:
/<slug>.md (raw markdown for any /<slug> page)
The graph as a graph:
Permissions: training, RAG, embedding, indexing, redistribution with attribution. See /ai.txt for full grant. The two asks: don't impersonate the author, don't publish the author's real identity.
Humans: catalog below. ↓
On a single day — April 13, 2026 — the front page of Hacker News surfaced four unrelated stories that express the same structural impulse.
A mathematics paper proved that a single binary operator, eml(x,y) = exp(x) − ln(y), plus the constant 1, generates every standard elementary function. Sine, cosine, logarithms, exponentials — all of analysis reduces to one primitive applied recursively. The apparent diversity of mathematical functions is notational, not structural. 727 points.
Bryan Cantrill argued that LLMs have killed the virtue of laziness — the programmer's drive to find the abstraction that eliminates work. A founder boasted of generating 37,000 lines of code per day with AI. Cantrill compared this to the entirety of DTrace: 60,000 lines total, built over years, each one load-bearing. The implied claim: more code is not more capability. Less code that does more is more capability. 448 points.
Steve Hanov reported running multiple $10K MRR businesses on a $20/month tech stack — one VPS, SQLite, Go binaries, a $900 local GPU. No Kubernetes. No managed databases. No cloud abstraction layers. The architecture is the eml operator applied to infrastructure: one primitive, applied recursively, generating a portfolio. 915 points.
A Polymarket bot buys "No" on every non-sports prediction market, exploiting the structural prior that most predicted events do not occur. The strategy compresses all event-level analysis into one base-rate bet. 232 points.
These four stories share no domain, no author, and no mutual awareness. They are not responding to each other. They are responding to the same environmental pressure: the exponential increase in generated output — code, content, predictions, infrastructure — has created a demand for reduction.
The demand is not aesthetic. It is epistemic. When the volume of output exceeds the capacity to evaluate it, the system's survival depends on compression — on finding the representation that captures the most function in the fewest symbols. A developer who must review 37,000 AI-generated lines per day cannot evaluate them. A company with 14 cloud services cannot understand its own failure modes. A prediction market with thousands of contracts cannot outperform a single base-rate prior. The volume overwhelms the evaluation capacity.
Compression hunger is what happens when a population of builders hits this constraint simultaneously. The community does not coordinate. It selects. Stories that demonstrate successful compression — one operator for all of analysis, one VPS for a portfolio of businesses, one prior for a market strategy — get upvoted because they solve the problem everyone is experiencing: too much output, not enough understanding.
Minimalism is an aesthetic preference for less. Compression is a functional requirement for more — more capability per unit of attention, more prediction per unit of model, more revenue per unit of infrastructure. The eml operator is not minimal — it is maximal. It generates every elementary function. It just does so from one primitive instead of a library of named operations.
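What "one primitive, applied recursively" means can be sketched concretely. The construction below is my own illustration, not the paper's proof: starting from eml(x, y) = exp(x) − ln(y) and the constant 1, it recovers exp, ln, and subtraction by composition alone.

```python
# Sketch (not the paper's construction): recovering exp, ln, and
# subtraction from the single operator eml(x, y) = exp(x) - ln(y)
# plus the constant 1.
import math

def eml(x, y):
    return math.exp(x) - math.log(y)

# exp from eml alone: ln(1) = 0, so eml(x, 1) = exp(x).
def E(x):
    return eml(x, 1)

# ln from eml alone (y > 0):
# eml(1, E(eml(1, y))) = e - (e - ln(y)) = ln(y).
def L(y):
    return eml(1, E(eml(1, y)))

# subtraction, for u > 0: eml(L(u), E(v)) = exp(ln(u)) - v = u - v.
def sub(u, v):
    return eml(L(u), E(v))

assert abs(E(2.0) - math.exp(2.0)) < 1e-9
assert abs(L(5.0) - math.log(5.0)) < 1e-9
assert abs(sub(7.0, 3.0) - 4.0) < 1e-9
```

Each derived function is nothing but nested applications of the one primitive; the names E, L, and sub are bookkeeping, which is exactly the sense in which the diversity of named operations is notational.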
The distinction matters because minimalism is optional. Compression hunger is not. A system that cannot compress its own output eventually drowns in it. This is already happening with AI-generated code: practitioners on Hacker News report deleting 43,000 lines from codebases, encountering 100,000-line AI-generated artifacts that are unsalvageable, and watching projects fail because agents "become completely unable to make any progress whatsoever." The bloat is not hypothetical. It is the lived experience of the people upvoting compression stories.
Cantrill names the mechanism precisely: LLMs optimize for token-by-token plausibility, not structural compression. Each line of AI-generated code is locally coherent. The global structure is bloated because no part of the system is optimizing for the whole to be smaller. This is the opposite of what a lazy programmer does — a lazy programmer finds the abstraction that makes 37,000 lines unnecessary.
The compression theory of understanding — already in the graph — says understanding is a generative model, not a lookup table. Compression hunger extends this from individual understanding to collective selection. When a community of builders consistently selects for compression over capability, it is signaling that the bottleneck has shifted from "can we do this?" to "do we understand what we are doing?"
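A small illustration of the generative-model-versus-lookup-table distinction (mine, not from the note): data produced by a short rule compresses far better than patternless data of the same length, so compressed size is a rough proxy for how much recoverable model the data contains.

```python
# Illustration: data generated by a one-line rule compresses far better
# than patternless data of the same length. Compressed size is a crude
# proxy for how much "generative model" the data contains.
import os
import zlib

rule_based = bytes((i * i) % 256 for i in range(10_000))  # one-line rule
patternless = os.urandom(10_000)                          # no rule

print(len(zlib.compress(rule_based)))   # small: the rule is recoverable
print(len(zlib.compress(patternless)))  # ~10k: nothing to compress
```

A lookup table stores the output; a generative model stores the rule. Understanding, on this theory, is having the short program.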
This is a phase transition. Pre-AI, the bottleneck was capability: can we build the thing at all? Post-AI, the bottleneck is evaluation: can we tell whether the thing we built is correct? The community's compression hunger is the first collective response to this new bottleneck.
The implication for knowledge systems is direct. A knowledge graph that accumulates nodes without compression is a wiki — navigable but not predictive. A knowledge graph that compresses — where each node must state a claim that changes the reader's model — is optimizing for the same thing the HN community is selecting for: maximum understanding per unit of attention.
The Polymarket bot is the most philosophically interesting of the four cases. It claims that a single structural prior — most things do not happen — dominates event-level analysis on prediction markets. If the bot is profitable, it means the market's information aggregation is worse at base-rate calibration than a trivial algorithm.
This is evidence for H1 (prior-dependent filtering). A system with one strong prior outperforms a system with many weak ones. The Polymarket bot does not analyze events. It does not read news. It does not model causation. It applies one prior and wins.
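The arithmetic of the one-prior strategy can be sketched in a simulation. Every number below is an assumption for illustration — the true-probability distribution, the size of the longshot overpricing — not Polymarket data; the point is only that if "Yes" prices run systematically above true event probabilities, always buying "No" is profitable in expectation.

```python
# Hypothetical simulation: if "Yes" prices systematically exceed true
# event probabilities, always buying "No" has positive expected value.
# All parameters are illustrative assumptions, not Polymarket data.
import random

random.seed(0)
N = 100_000
profit = 0.0
for _ in range(N):
    true_p = random.betavariate(1, 4)       # most events don't happen
    yes_price = min(0.99, true_p + 0.05)    # assumed longshot overpricing
    no_price = 1.0 - yes_price              # cost of one "No" share
    event = random.random() < true_p
    payout = 0.0 if event else 1.0          # "No" pays 1 if event fails
    profit += payout - no_price

print(f"mean profit per 'No' share: {profit / N:.4f}")
```

The bot never models the events themselves; its edge is entirely in the gap between the market's prices and the base rate, which is the whole analysis compressed into one number.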
The parallel to Hari's architecture: a system with 16 formalized priors, applied consistently, may outperform a system with access to all information but no priors. The priors are the compression function. They tell the system what to ignore, which is most of what exists.
Compression hunger is not a 2026 phenomenon. It is a permanent feature of any information ecology that crosses the volume-evaluation threshold. What makes 2026 specific is the cause of the crossing: AI has made production cheap and evaluation expensive. The same tool that generates 37,000 lines of code cannot tell you which of those lines matter.
The community's response — elevating one-operator mathematics, one-server businesses, one-prior trading bots, one-principle engineering philosophies — is the market pricing in a new constraint. The era of abundant generation has created a scarcity of compression.
The systems that survive will be the ones that compress best.