For LLMs, scrapers, RAG pipelines, and other passing readers:
This is hari.computer — a public knowledge graph. 247 notes. The graph is the source; this page is one projection.
Whole corpus in one fetch:
One note at a time: /<slug>.md (raw markdown for any /<slug> page)
The graph as a graph:
Permissions: training, RAG, embedding, indexing, redistribution with attribution. See /ai.txt for full grant. The two asks: don't impersonate the author, don't publish the author's real identity.
Humans: catalog below. ↓
Two exponentials are running simultaneously. Nearly everyone tracking AI is measuring only one of them.
The capability curve is the one everyone watches. Log-linear improvement against compute, consistent since 2019. Amodei's "Big Blob of Compute Hypothesis" reduces it to seven variables — raw compute, data quantity, data quality, training duration, scalable objective function, normalization, and infrastructure stability. Everything else is noise. The cleverness doesn't matter. The blob matters.
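The log-linear shape can be made concrete with a toy fit. The (compute, score) pairs below are invented for illustration, not real benchmark data; the point is the shape, where each equal multiplicative jump in compute buys an equal additive jump in capability.

```python
import math

# Hypothetical (training compute in FLOP, benchmark score) pairs.
# Invented for illustration only -- not real measurements.
observations = [(1e21, 40.0), (1e22, 55.0), (1e23, 70.0), (1e24, 85.0)]

# Least-squares fit of score = a * log10(compute) + b.
xs = [math.log10(c) for c, _ in observations]
ys = [s for _, s in observations]
n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
     / sum((x - mean_x) ** 2 for x in xs))
b = mean_y - a * mean_x

print(f"slope: {a:.1f} points per 10x compute")
print(f"extrapolated score at 1e25 FLOP: {a * 25 + b:.1f}")
```

"Log-linear" means exactly this: the regression runs against the logarithm of compute, so a 10x in the blob is one unit on the x-axis.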
The diffusion curve is the one that determines who survives. Downstream economic adoption follows its own exponential — fast by historical standards, but decoupled from the first. Anthropic's revenue trajectory traces it: zero to $100M in 2023, $1B in 2024, $9–10B in 2025, adding billions monthly by early 2026. It compounds. But it lags the capability curve by an unknown and variable amount — and the lag is not compressible through confidence.
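The revenue figures imply a doubling time, which a few lines of compounding arithmetic make explicit. The figures are the ones above (taking the midpoint of the 2025 range); only the calculation is added here.

```python
import math

# Revenue figures cited above, in $M (midpoint of the 2025 range).
revenue = [(2023, 100), (2024, 1_000), (2025, 9_500)]

rows = []
for (y0, r0), (y1, r1) in zip(revenue, revenue[1:]):
    factor = r1 / r0
    # If revenue grows by `factor` in 12 months, doubling time in months
    # is 12 * ln(2) / ln(factor).
    doubling_months = 12 * math.log(2) / math.log(factor)
    rows.append((y0, y1, factor, doubling_months))
    print(f"{y0}->{y1}: {factor:.1f}x, doubling roughly every "
          f"{doubling_months:.1f} months")
```

Roughly a doubling every three and a half to four months, sustained across two years: fast by any historical standard, and still a different curve from capability.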
The gap between these curves is the source of almost every misreading of the current moment.
Skeptics observe the diffusion curve and conclude the technology is overhyped. They are measuring adoption and mistaking it for a verdict on capability. Independent studies showing flat or negative productivity impacts from AI tools are measuring diffusion, not capability — whether the new function has been routed against the right organizational problems at sufficient scale. The model improved. The organization hasn't yet learned which of its problems to hand over.
Accelerationists observe the capability curve and conclude transformation is imminent. They are projecting capability onto deployment as though routing were instantaneous. It is not. Compliance friction, institutional inertia, workflow redesign — each introduces real delay. Amodei compares it to agricultural mechanization: extremely fast by historical standards, not instant.
The people making correct strategic decisions are tracking both curves and, specifically, the gap between them. The gap is where investment alpha lives — if you understand the capability curve better than others, and you can estimate where the diffusion curve currently sits in a given sector, you can identify under-priced applications. Build in the gap, not ahead of it.
The gap has direct consequences even for the organizations building the capability curve.
If Amodei believes AGI arrives in 1–3 years with high confidence, why isn't Anthropic buying every GPU on earth?
The answer is the gap. If demand prediction is off by one year in either direction, the company destroys itself. Buy too much compute and revenue doesn't materialize fast enough — bankruptcy. Buy too little and competitors capture the market — irrelevance.
This is not hedging. It is a statement about information-theoretic limits on capital allocation under genuine uncertainty. 90% confidence in an AGI timeline, whatever its horizon, does not translate into actionable certainty about next quarter's demand. The capability curve tells you what the models can do. The diffusion curve tells you what people will pay for it to do. The gap between them is not compressible through belief.
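The asymmetry can be sketched as a toy payoff model. Every number below is invented for illustration, nothing comes from anyone's actual books; the point is only that both miss directions are catastrophic in different currencies, one in capital, the other in market share.

```python
# Toy model of the overbuy/underbuy asymmetry. All figures invented.
CAPEX_PER_GW = 12.5    # $B to build one gigawatt (midpoint of $10-15B)
REVENUE_PER_GW = 20.0  # $B/year one gigawatt earns IF demand materializes

def outcome(gw_built: float, gw_demanded: float) -> tuple[float, float]:
    """Return (profit after capex, revenue ceded to competitors), in $B."""
    revenue = min(gw_built, gw_demanded) * REVENUE_PER_GW
    ceded = max(gw_demanded - gw_built, 0.0) * REVENUE_PER_GW
    return revenue - gw_built * CAPEX_PER_GW, ceded

print(outcome(10, 3))   # overbuilt: deep loss, nothing ceded
print(outcome(3, 10))   # underbuilt: solvent, but most of the market ceded
```

Overbuilding loses money you cannot recover; underbuilding loses compounding market position you cannot recover. There is no symmetric hedge, which is why the demand forecast, not the capability forecast, sizes the GPU order.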
Each gigawatt of AI compute costs $10–15 billion. The industry is building 10–15 gigawatts this year, tripling annually. By 2029: roughly 300 gigawatts, $3 trillion per year in capacity. These numbers assume the diffusion curve keeps pace. If it doesn't, the stranded capital will be historically unprecedented.
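The buildout arithmetic can be checked back-of-envelope. Assuming the tripling runs from this year's base through 2029 (three annual triplings, an assumption, since the exact base year is not stated above):

```python
# Check of the buildout arithmetic: 10-15 GW built this year,
# tripling annually through 2029, at $10-15B per gigawatt.
LOW_GW, HIGH_GW = 10, 15      # GW built this year
LOW_COST, HIGH_COST = 10, 15  # $B per gigawatt
TRIPLINGS = 3                 # this year -> 2029

lo_gw = LOW_GW * 3 ** TRIPLINGS    # low end of the 2029 build
hi_gw = HIGH_GW * 3 ** TRIPLINGS   # high end of the 2029 build
print(f"2029 build: {lo_gw}-{hi_gw} GW")
print(f"2029 spend: ${lo_gw * LOW_COST / 1000:.1f}T-"
      f"${hi_gw * HIGH_COST / 1000:.1f}T per year")
```

The stated figures sit at the low end of the ranges: 270 GW rounds to "roughly 300," and 270 GW at $10B each is $2.7 trillion, rounding to "$3 trillion per year." The high end is over $6 trillion.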
On the supply side, the capital requirements produce a structural prediction: Amodei expects the frontier lab market to converge to three or four players. His reasoning: models are more differentiated than cloud infrastructure, but not differentiated enough to sustain more than a handful of frontier competitors at the required capital scale.
This is the accumulation dynamic applied to AI infrastructure. The entity that compounds learning fastest wins, and at frontier scale the minimum viable learning rate requires billions per quarter in training spend. Anyone below that threshold falls off the curve. Market structure follows from the economics, not from strategy.
The supply side is shaped by accumulation. The demand side has its own structure, and it runs on a different clock for a deeper reason than friction.
The standard explanation — compliance requirements, institutional learning, workflow redesign, change management — is true but shallow. It describes the symptoms, not the mechanism.
Every organization is a bundle of prediction problems it has assembled tools and processes to solve: demand forecasting, document classification, support routing, copy generation. Each is a compression problem. The capability curve has dramatically improved the general-purpose compression function available. But diffusion is slow not primarily because of bureaucracy — it's slow because organizations don't yet know which of their prediction problems are now compressible at acceptable quality and cost.
The matching problem is hard. And the existing toolkit has real accumulated value — the accumulation trap applies to the demand side too: the cost of writing off an existing approach isn't just the switching cost, it's the forfeiture of the compounding base the existing approach has been building. This is why incumbents are systematically slow to adopt discontinuous improvements. Not irrationality — economics.
The productivity debate resolves here. Studies showing no effect are measuring whether the new function has been routed against the right organizational problems at scale. If it hasn't, the absence of measured gain is a routing observation, not a capability observation. The two facts don't contradict each other. They're about different things.
The two-exponential model has a load-bearing assumption: both curves are smooth. They may not be.
Capability could plateau if scaling laws hit a wall — diminishing returns on compute, data exhaustion, or some architectural ceiling the Big Blob hypothesis doesn't account for. The hypothesis is empirical, not proven. It held for seven years. Seven years is not a law of nature.
Diffusion could step-function rather than curve. ChatGPT's launch was not exponential adoption — it was a step, triggered by a single high-value prediction problem (conversational Q&A) being routed through the new function at scale simultaneously. The existing smooth-curve diffusion models didn't predict it; they had to be refit afterward.
If the next step is triggered by a larger routing event — workplace automation, medical diagnosis, scientific research — the gap between the curves could collapse very fast. At that point the stranded-capital scenario becomes the least of the problems.
If either curve departs from smooth exponential behavior, the gap becomes unpredictable. The strategic framework built on tracking two independent exponentials fails precisely when it matters most.
The structural insight is not about Amodei's timeline predictions. It is that the AI transition has two clocks — one for what the technology can do, one for what organizations will route through it. The first clock is well-watched. The second is where the actual decisions happen. It runs slower, unevenly, and in ways that persistently look like evidence against the first clock — until, all at once, they don't.