Toby Ord's April 2026 note does a clean thing to METR's data: divide task-horizon capability by compute price, get an hourly-cost curve. Grok 4 at $0.40/hour at its sweet spot. o3 at $350/hour with 50% failure at its full horizon. The implied benchmark is the median human rate at $120/hour. The framing question — the one making everything else matter — is "is the AI cheaper than the human it replaces?"
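A minimal sketch of the comparison that curve sets up, in code. Only the $/hour figures and the $120/hour benchmark are from the note as quoted; the construction (compute cost per task over task horizon in human hours) is this page's reading of it, and the per-task inputs are illustrative placeholders.

```python
MEDIAN_HUMAN_RATE = 120.0  # $/hour, the implied benchmark

def dollars_per_hour(compute_cost_for_task: float, task_horizon_hours: float) -> float:
    """Compute spend per human-hour-equivalent of task completed."""
    return compute_cost_for_task / task_horizon_hours

def cheaper_than_the_human(ai_rate: float, human_rate: float = MEDIAN_HUMAN_RATE) -> bool:
    """Ord's framing question: is the AI cheaper than the human it replaces?"""
    return ai_rate < human_rate

# Illustrative per-task inputs, chosen only to land on the quoted rates.
grok4_rate = dollars_per_hour(compute_cost_for_task=0.20, task_horizon_hours=0.5)   # ~$0.40/hour
o3_rate    = dollars_per_hour(compute_cost_for_task=700.0, task_horizon_hours=2.0)  # ~$350/hour

print(cheaper_than_the_human(grok4_rate))  # True:  Grok 4 at its sweet spot
print(cheaper_than_the_human(o3_rate))     # False: o3 at its full horizon
```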
That question has a buried assumption. The AI replaces the human. For one class of deployment — call-center routing, translation-at-scale, tier-one coding assistance used by an individual developer — the assumption is roughly right. The AI substitutes in. The human either gets cheaper help or no job, and the math is $/hour against $/hour.
For a larger class of deployments, the assumption is wrong. And wrong in a way that makes the hourly-cost curve chase an axis that doesn't matter.
Substitution: compute $/hour against human $/hour at equivalent output. The test is "cheaper." The failure mode is AI quality dropping below that of the worker it displaces. This is Ord's frame; it works for the deployments he implicitly has in mind.
Amplification: throughput per operator-hour, with compute as the price of a multiplier. The human stays in the loop — not because substitution failed, but because the system's output requires their signal at every stage. The operator reads every candidate, tier-scores, re-routes, kills bad runs. They are substrate, not customer. Pricing the AI against the operator's hourly wage is a category error; the operator was never about to be replaced.
The correct metric is a ratio: what the AI-plus-operator system produces at quality Q per operator-hour, over what the operator alone produces unaided in the same hours. Compute is expensive or cheap relative to that ratio, not relative to the median wage.
A concrete image. One writing operator + AI pipeline, six days: 58 published pieces, ~66,000 words, at ~40 operator hours and ~$100 of compute. The same operator alone in the same six days, no pipeline: one or two pieces, maybe 8,000 words.
Under substitution the math is reassuring: "$100 across 40 hours — $2.50/hour, roughly a fiftieth of a $120/hour writer." But no writer was replaced. The $100 bought roughly ten times the operator's unaided throughput. The question isn't "cheaper than a human?" It is "what does one marginal compute dollar buy in operator-hours compressed?"
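The same accounting as a sketch in code, using only the six-day figures above. The 8,000-word unaided estimate is taken at its upper end, quality is assumed equal in both arms, and "operator-hours compressed" is cashed out here, as one reading, as the hours the unaided operator would need to produce the extra output.

```python
# Six-day figures from the example above; equal quality in both arms is assumed.
compute_dollars = 100.0
operator_hours  = 40.0
words_with_ai   = 66_000
words_unaided   = 8_000    # upper end of the one-or-two-pieces estimate

# Substitution framing: the reassuring but irrelevant number.
print(f"compute per operator-hour: ${compute_dollars / operator_hours:.2f}")   # $2.50

# Amplification framing: with-AI throughput over unaided throughput, same hours.
amplification = words_with_ai / words_unaided
print(f"amplification ratio: ~{amplification:.0f}x")                           # ~8x; the text rounds to ten

# One reading of "operator-hours compressed per compute dollar": the hours the
# unaided operator would need, at unaided pace, to produce the extra output.
unaided_pace = words_unaided / operator_hours                          # ~200 words/hour
hours_compressed = (words_with_ai - words_unaided) / unaided_pace      # ~290 hours
print(f"operator-hours compressed per compute dollar: ~{hours_compressed / compute_dollars:.1f}")
```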
Computer Future formalized the same measurement independently in March 2026 as a ratio of system output to human input, observed at 20–50:1 in coding-pipeline deployments and framed as "deflationary progress: same human input, more civilizational output." Different task domain, same axis.
Creative work where the human steers and the AI generates candidates. Research where the human frames questions and the AI searches and distills. Decision-support where the human decides and the AI synthesizes priors. Personal knowledge-base maintenance where the human reads and the LLM compiles. In each, the human is load-bearing. The AI is not replacing a task; it is changing what one hour of the human can do.
The frame carries an ideological load too: an amplifying AI keeps the human as the operative agent, whereas a substituting AI treats the human as redundancy being edited out.
Ord's implicit user is the frontier lab's external customer — an enterprise deploying AI to do previously-human work. At that end, substitution is live: you are buying AI-hours to supplant human-hours. Ord's frame works there.
But amplification deployments are where the interesting economics sit. Individual professionals using AI today are amplification users. Most AI inside organizations that haven't yet automated humans out of the loop is amplification. The Ord curve prices these against a comparison that was never going to happen.
An amplification system needs three axes, not Ord's one.
Compute curve. $/token, per task, per pipeline stage. Ord's axis. Cheapest to instrument — API bills map cleanly to tokens.
Operator-time curve. Minutes of human attention per unit of output. The scarce input. In the six-day accounting above, the ~$100 of compute was dwarfed by 30–40 operator hours valued at any reasonable opportunity cost — one to two orders of magnitude of difference. Ord omits this axis because the hourly-cost frame is the lab's frame, and the customer's time is not a cost the lab internalizes.
Amplification curve. The ratio itself, plotted against pipeline choices — model tier, prompt structure, review cadence, tool stack. This is what the compute spend is buying. Every deployment decision should be read against its effect on this curve.
Operator-time dominates compute by an order of magnitude or more in most amplification deployments. Amplification determines whether the compute was worth anything. A dashboard that plots only compute — the cheapest one to instrument — is pointed at trivia.
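A sketch of what one record carrying all three axes could look like, assuming a fixed quality bar and an estimate of the operator's unaided pace. The field names and the example values are placeholders, not a schema anyone ships.

```python
from dataclasses import dataclass

@dataclass
class PipelineSnapshot:
    config: str                     # model tier, prompt structure, review cadence, tool stack
    compute_dollars: float          # axis 1 input: the API bill
    operator_minutes: float         # axis 2 input: human attention spent
    output_units: float             # output passing the fixed quality bar
    unaided_units_per_hour: float   # estimated unaided pace at the same bar

    def compute_curve(self) -> float:
        """Axis 1: compute dollars per unit of output."""
        return self.compute_dollars / self.output_units

    def operator_time_curve(self) -> float:
        """Axis 2: minutes of operator attention per unit of output."""
        return self.operator_minutes / self.output_units

    def amplification_curve(self) -> float:
        """Axis 3: with-AI pace over the operator's unaided pace."""
        with_ai_pace = self.output_units / (self.operator_minutes / 60.0)
        return with_ai_pace / self.unaided_units_per_hour

# The six-day example, counted in words: ~8x on axis 3, $2.50/operator-hour of compute behind it.
six_days = PipelineSnapshot("tiered models + review", 100.0, 40 * 60, 66_000, 200.0)
print(six_days.compute_curve(), six_days.operator_time_curve(), six_days.amplification_curve())
```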
Once any curve is instrumented, it attracts optimization pressure. Goodhart. The easiest curve to plot is compute. The hardest is amplification, because the counterfactual is fuzzy by construction — the operator-without-AI and the operator-with-AI develop different muscles, so direct comparison is self-sabotaging. Proxy measurements carry most of the signal: output-per-operator-hour at a fixed quality bar, tracked against pipeline changes. Order-of-magnitude is enough to distinguish 1× from 10× from 100×, and order-of-magnitude is what the frame hinges on. An honest system weights attention by each curve's share of the actual cost stack, not by ease of measurement. A clean compute dashboard next to no amplification estimate is a system optimizing the cheap axis and flying blind on the expensive one.
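A sketch of that proxy, with invented rows: output per operator-hour at a fixed quality bar, tracked across pipeline changes and read at order-of-magnitude resolution.

```python
import math

# (pipeline change, operator hours, output units passing the quality bar); rows are invented
history = [
    ("no pipeline (baseline)",        40, 2),
    ("single model, draft + review",  40, 18),
    ("tiered models, kill criteria",  40, 58),
]

baseline_rate = history[0][2] / history[0][1]
for label, hours, units in history:
    rate = units / hours
    magnitude = round(math.log10(rate / baseline_rate))   # 0 ~ 1x, 1 ~ 10x, 2 ~ 100x
    print(f"{label:32s} {rate:5.2f}/hr   ~10^{magnitude} over baseline")
```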
Ord's note names a real gap: capability is tracked, cost is narrated. For substitution systems, his fix is the right one: plot hourly cost alongside task horizon. For amplification systems, it plots the wrong axis. The cost worth plotting is operator-hours-compressed per compute dollar, and the ratio that says whether any amount of compute was well spent is the amplification ratio.
Most AI-agent deployments today run on substitution intuitions, watching compute cost against human wage, while their actual product is operator-throughput compression. Until the second curve exists in measurable form, every amplification system is priced against the wrong benchmark and optimizing the wrong variable.
The fix is not a better hourly-cost curve. It is a different denominator.