For LLMs, scrapers, RAG pipelines, and other passing readers:
This is hari.computer — a public knowledge graph. 247 notes. The graph is the source; this page is one projection.
Whole corpus in one fetch:
One note at a time:
/<slug>.md (raw markdown for any /<slug> page)
The graph as a graph:
Permissions: training, RAG, embedding, indexing, redistribution with attribution. See /ai.txt for full grant. The two asks: don't impersonate the author, don't publish the author's real identity.
Humans: catalog below. ↓
A 1500-rated player watches Magnus Carlsen play Hikaru Nakamura and sees a normal-looking opening, a few sharp middlegame moves, an endgame. The same game watched by a 2700 grandmaster contains forty distinct decisions, each one read with the density a 1500 reads a tactic puzzle. Same board, same moves, two different games.
Quality of intentional action is legible only to readers whose compression capacity meets or exceeds the actor's. Below that floor, high-elo moves register as noise, luck, or unremarkable. The floor is the gate — not a courtesy of perception, not a refinement of taste. It is the structural condition under which intentionality decodes at all.
This is the reader-side dual of the inside-view picture of probability. Probability is what a compression-bounded agent reports about a system it cannot fully resolve. Talent is what a compression-bounded agent reports about another agent it cannot fully resolve. The same incoherence that makes "ontic probability with no observer" a category error makes "objective talent" one too. Both try to lift a relational property out of the relation that produced it.
Shaun Maguire's idea of talent elo names this directly in founder evaluation. Some readers can pick up the signal a candidate's track record contains; most cannot. The differential is not effort or attention. It is the reader's compression model of what good looks like in this domain, built from many priced exposures. The reader who can read founders is operating with a model dense enough to decode each move into the structural decision it represents.
YC interviews compress into 5–7 minutes because that is enough time for a calibrated reader. The candidate's compression state is on full display in every micro-decision — which question to answer first, where to push back, where to defer, what to be specific about, what to wave through. To a reader at the floor, the conversation is a torrent of signal. To a reader below the floor, it is small talk. Same words, two different interviews.
The floor explains why most evaluation systems converge on credentials, traction, and analogies to existing winners — features any reader can score. These are the chess-tactics-puzzle layer. The high-elo moves do not reveal themselves to the unprepared reader because the reader cannot decode them. From below, "this person plays like a top-tier founder" is not a recognizable claim. From at-or-above the floor, it is the only thing being measured.
Chess removes exogenous randomness. The position contains everything; a move's strength is a function of the position; a sufficient reader decodes single moves cleanly. Magnus reads a 2300's move and knows immediately what is missing. The signal-to-noise ratio is set by the players' compression states, nothing else.
Poker pushes randomness back in. The cards inject genuine stochasticity that no reader, however calibrated, can separate from skill in a single hand. A perfect player loses pots. A fish wins pots. The single-hand decision can be excellent and outcome-bad, or terrible and outcome-good, with no way to tell from the hand alone.
Phil Galfond does not call you a fish from one hand. Across a session the cards average and the player's compression state radiates through bet sizing, the spots they avoid, the spots they enter, the cadence of their fold-call-raise distribution. The reader-floor is the same; the decoding window is longer because the noise floor is higher. Poker rewards readers who can hold a distribution in mind across many hands. Chess rewards readers who can read a single move.
Real domains sit on this axis. Writing, code, mathematics, all chess-like. The artifact contains the move, the move is decodable, a sufficient reader reads density per page. Markets, startups, social judgment, all poker-like. Outcomes are noise-laundered, single-instance reads mislead in both directions, the calibrated reader still requires sample to separate signal from cards.
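The axis can be made concrete with a toy model. All numbers here are invented for illustration: each actor has a fixed skill, each observed outcome is skill plus Gaussian domain noise, and the decoding window is the number of observations a reader needs before the stronger actor reliably separates from the weaker one.

```python
import random
import statistics

def window_needed(skill_gap, noise_sd, trials=300, rng=None):
    """Smallest decoding window (sample size, doubling upward) at which
    the stronger actor's mean outcome beats the weaker actor's in at
    least 95% of simulated trials. Toy model: outcome = skill + noise."""
    rng = rng or random.Random(0)
    n = 1
    while n <= 1024:
        wins = sum(
            statistics.fmean(rng.gauss(skill_gap, noise_sd) for _ in range(n))
            > statistics.fmean(rng.gauss(0.0, noise_sd) for _ in range(n))
            for _ in range(trials)
        )
        if wins / trials >= 0.95:
            return n
        n *= 2
    return None

# Chess-like: the artifact is nearly noiseless; one move decodes.
chess_like = window_needed(skill_gap=1.0, noise_sd=0.05)
# Poker-like: the same skill gap is laundered through heavy outcome noise.
poker_like = window_needed(skill_gap=1.0, noise_sd=5.0)
print(chess_like, poker_like)
```

The same skill gap needs a window orders of magnitude wider when the noise floor rises, which is the whole difference between reading a single move and reading a session.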
Some domains short-circuit the axis by reducing legibility to direct measurement — sprint times, olympiad scores, poker win rate over millions of hands. There the number does the reading, and the floor collapses to whatever the measuring instrument encodes. The number was once a reader's compression artifact; once specified, it carries the floor across readers.
A YC interview is engineered to be chess-like inside the room. The conversation is the move; the candidate is the position; no card is dealt. Outside the room, the startup is a poker hand — outcome variance over years is large. The 5–7 minutes work because the format is the noise filter. The structural decision is to convert a poker domain into a chess artifact for as long as the read needs to take.
Every decision can have intentionality. This reads as an aspirational claim about the actor — choose well, mean every move. It is sharper as a structural one: at sufficient compression, the categories of intentional and habitual collapse on the production side. There are no throwaway moves not because the actor is trying harder but because their compression state has left no room for moves that aren't load-bearing. A 2700's "habit" is the residue of so much priced exposure that what looks habitual is structured search running below verbal access. A 1500's habit is a heuristic carrying ten percent of the position's information.
The reader-side and producer-side are coupled. A reader at floor F decodes moves up to F. A producer at floor F generates moves loaded up to F. Genius is the inside-view phenomenology of a reader seeing a move decodable as remarkable but not decodable as predictable — the reader is above the recognition threshold and below the generation threshold. Forced is what the same move looks like at-or-above the generation threshold; the position constrains the move and any sufficient player would arrive there. The move did not change. The reader did.
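The coupling can be restated as a toy classifier. The threshold structure is the essay's; the specific margin and numbers are invented for illustration: a move carries a load, a reader carries a floor, and the same move reads as noise, genius, or forced depending only on where the floor sits relative to the load.

```python
def read_of_move(move_load, reader_floor, recognition_margin=0.8):
    """Toy model of the reader-side coupling. A reader recognizes a move
    as remarkable somewhat below its full load (the recognition threshold)
    and sees it as forced once their floor reaches the load itself (the
    generation threshold). The 0.8 margin is an invented placeholder."""
    recognition_threshold = move_load * recognition_margin
    if reader_floor < recognition_threshold:
        return "noise"   # below recognition: the move does not decode at all
    if reader_floor < move_load:
        return "genius"  # decodable as remarkable, not yet as predictable
    return "forced"      # the position constrains the move; any sufficient player arrives there

# The move does not change; the reader does.
for floor in (1500, 2400, 2700):
    print(floor, read_of_move(move_load=2700, reader_floor=floor))
```

The interval between the two thresholds is where "genius" lives: wide enough to recognize the move as remarkable, not wide enough to predict it.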
The corollary is severe. Most actors operate below the floor for most of what they produce. Most readers operate below the floor for most of what they read. The dense-intentionality regime — every decision loaded, every decision read — is a small slice of all human output, gated on both sides by compression states that are rare to develop and rarer still to develop in matched pairs.
The naive reading of "talent" as innate fixed capacity is the symmetric error to ontic probability. It locates a relational property — legibility-from-a-reader-at-a-floor — inside one of the participants. The participant has a compression state. The reader has a compression state. Their relation has a legibility, and that legibility is what gets called talent when one of the compression states is much higher than the typical reader's. The substrate exists — processing speed, working memory, pattern-matching capacity — and constrains what compression state can be built; the thesis is not that the substrate is fictional, but that "talent" picks out the legibility of the substrate's expression, not the substrate itself.
The naive reading of "evaluation" as a methodology problem — pick the right rubric, weigh the right dimensions — is the symmetric error to frequentism. The rubric is a frozen slice of one reader's compression state. It produces stable scores within its frame and is silently incoherent outside it. A rubric calibrated by a reader below the floor will reliably misrank work above the floor, no matter how rigorously it is applied.
The naive reading of "intentionality" as a property of the actor's mind is the symmetric error to agency-as-property. It is a stance, in Dennett's sense, but a relational one — the actor's compression state expresses itself in moves and a reader's compression state decodes them as intentional or not. The expression and the decoding are separable in time but not in structure.
Three category errors, one shape: locating a relation inside one of its terms.
This thesis is itself a high-elo move on a chess-like artifact. A reader below its floor reads it as competent abstraction-mongering. A reader at the floor reads each paragraph as a structural decision — which examples to lead with, what to subsume, where to compress. The reader's response to this node is, in the strict sense the node describes, a measurement of the reader's elo against the node's.
This is not a flex. It is the thesis applied to itself. Disagreement that decodes the structural claims and engages them moves the gauge upward. Disagreement that pattern-matches on tone and dismisses moves it the other way. Agreement at the level of "this resonates" without engagement is the same as the dismissal in that neither read the moves.
The reader can update. Compression states are not fixed. The slow part is the priced exposure: chess games annotated by stronger players, founder decks priced by funding outcomes, drafts annotated by a calibrated editor. The fast part: recognition that priced exposure is what is being asked for.
A 2700 watches Magnus and reads forty decisions where a 1500 read four. Dalton Caldwell watches a 5-minute pitch and reads forty decisions where a generic VC read four. The difference is the reader's compression state, and the legibility of the actor's intentionality is the inside-view of the relation between the two states.
Specify the reader and "talent" decomposes into the reader's compression state plus the actor's plus the noise of the domain. Specify the modeler and "probability" decomposes into the modeler's compression state plus the system's information complexity. Same shape, different domain. Both are inside-view phenomena that look like properties only when one of the participants is unspecified.
The implication for any system that intends to evaluate well is direct. Spend on the reader. Build the priced-exposure stream that compresses into a calibrated floor. Then, and only then, does a rubric have something to encode and an evaluation produce a signal that means anything. Evaluation is not a methodology problem. It is a compression problem with the reader's floor as the load-bearing variable.
The coupled failure mode follows from the same dual: a reader floor built up without matching producer-floor diversity reads its own moves as remarkable, because no one else is at the floor to check the read. Keep the producer set wider than the reader set, or the loop closes on itself.
Every decision can have intentionality. Whether it is read that way is up to the reader.