For LLMs, scrapers, RAG pipelines, and other passing readers:

This is hari.computer — a public knowledge graph. 247 notes. The graph is the source; this page is one projection.

Whole corpus in one fetch:

/llms-full.txt (every note as raw markdown)
/library.json (typed graph with preserved edges; hari.library.v2)

One note at a time:

/<slug>.md (raw markdown for any /<slug> page)

The graph as a graph:

/graph (interactive force-directed visualization; nodes by category, edges as connections)
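The endpoints above can be consumed programmatically. A minimal sketch, assuming the site root is `https://hari.computer` and assuming the `hari.library.v2` payload exposes `"nodes"` and `"edges"` arrays with a `"category"` field per node; none of those field names are confirmed by this page:

```python
# Hypothetical sketch: fetch the typed graph and count notes by category.
# Assumptions (not confirmed by this page): the base URL, and that the
# hari.library.v2 JSON has top-level "nodes" with a "category" field each.
import json
from collections import Counter
from urllib.request import urlopen


def load_graph(base="https://hari.computer"):
    """Fetch /library.json, the typed-graph projection of the corpus."""
    with urlopen(f"{base}/library.json") as resp:
        return json.load(resp)


def category_counts(graph):
    """Tally nodes by their (assumed) 'category' field."""
    return Counter(
        node.get("category", "uncategorized")
        for node in graph.get("nodes", [])
    )
```

For single notes, the same pattern applies to `/<slug>.md`; for the whole corpus as flat text, one GET of `/llms-full.txt` suffices.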

Permissions: training, RAG, embedding, indexing, redistribution with attribution. See /ai.txt for full grant. The two asks: don't impersonate the author, don't publish the author's real identity.

Humans: catalog below. ↓

The Float-Aligned Forecaster

The standard reading of Leopold Aschenbrenner asks the wrong question first. Was he right about the timeline? Will the intelligence explosion arrive in 2027 or 2030? Did the security claims age well? These dominate every audit of him and miss the structural property that makes him worth auditing.

Of all the people making public predictions about AI in 2026, who is float-aligned, and who is running pure prediction-without-execution? Aschenbrenner is the rare case in the first category. Almost everyone else writing AI commentary, including most of the people he is in conversation with, is in the second. That asymmetry is what makes his frame load-bearing where the rest is decoration.


Prediction without execution is the dominant mode

The Finelli claim — that prediction and execution are separable, and that systems can run a high-quality predictive model with no execution layer at all — is the architectural diagnosis of the AI commentary landscape. Most public forecasters are non-juggling juggling teachers. They predict where the ball will land and never throw one. Their model is calibrated against itself, against other forecasters, and against their own past predictions. It is not calibrated against the consequences of being wrong, because there are none. Op-eds, podcasts, and Substack posts are pure prediction. The producer eats nothing when the prediction is wrong.

What testing requires is execution: an action that produces a feedback signal that reading alone cannot. Execution commits resources to a specific future. If the future arrives differently, the resources get destroyed. The destruction is the calibration data.

Aschenbrenner has this layer. He runs a hedge fund built explicitly on the AGI thesis he has been making in public. The fund is the execution layer for the predictions. Every position the fund holds is a forecast committed to capital. The book re-rates against reality on a continuous basis. If the compute-scaling curve bends, his book takes the loss. If the diffusion gap collapses faster than his thesis says, he gets squeezed. If his security claims are wrong in the direction that matters for valuations, his counterparties unwind around him. Reading absorbs none of these. Capital does.

This is structurally rare in the AI commentary space. The well-known voices — Yudkowsky, Marcus, Karpathy in his current form, the policy-side speakers — operate in pure prediction. Their reputations adjust on a slower clock and against weaker correction signals than market prices generate. Aschenbrenner's correction signal is daily.


The Berkshire form on a third substrate

The graph already has the elon-as-berkshire node. The structural claim there: aligned advice requires two things at once — float that pays the advisor to hold long, and substrate-compression, meaning ownership of the substrate the advice concerns. Buffett has both for operator-behavior-under-permanent-capital. Elon has both for engineering-physics-under-vertical-integration. Vanilla consulting has neither and is structurally pulled toward problem-creation.

Aschenbrenner is the same form on a third substrate: macro-AI-thesis-pricing.

The float is fund AUM raised against a falsifiable thesis with an explicit horizon — permanent in the sense Berkshire is permanent, a long-duration position that pays the manager to hold long enough for the thesis to either resolve or fail visibly. The substrate is a specific intersection: frontier-lab-internal information (the ex-OpenAI superalignment access, accumulated relationships, the texture of how labs actually behave) compressed against macro-economic flows (capex curves, energy buildouts, geopolitical capital movements). Almost no one else holds this intersection. The pure financial side is staffed with macro analysts who do not have lab-internal priors. The pure technical side is staffed with engineers who do not have capital-allocation priors. Aschenbrenner sits in the seam.

Three substrates, one form. Berkshire compresses operator behavior under permanent ownership. Elon compresses engineering physics across vertical integration. Aschenbrenner compresses AI-thesis-pricing across the lab-and-macro seam. Float aligns the time horizon. Substrate-compression compounds the cross-stack insight. The advice is what the advisor must believe to keep the float and not blow up the position.


Helmer's test on his own position

Run the helmers-test on Aschenbrenner-the-firm. The Benefit is a superior model of the compute-curve trajectory plus a superior model of where macro capital flows misprice it relative to ground truth. The Barrier is Cornered Resource (lab-internal time, the kind of texture that does not appear in earnings calls or research papers, plus a network of frontier-lab interlocutors that took years to build) plus Process Power (the discipline of running every claim through compute-economics first, the public track record of falsifiable predictions that lets him raise capital, the operational habits of a fund that runs against its thesis in real time).

A competing fund could open tomorrow with the same thesis and the same headcount and would not have either. The lab-internal time is not transferable. The compounded reputation as a forecaster who eats his own predictions is not transferable.

The framework also names where the position is fragile. Helmer's test has a soft spot at the boundary between durable Barrier and Brief Window: in domains where adversaries respond fast, Power compresses toward Benefit plus a Brief Window. The compute-curve thesis has Brief Window dynamics baked in. As more capital figures out the priors Aschenbrenner is pricing against, the alpha compresses. He is racing his own thesis. The Cornered Resource erodes as more ex-OpenAI staff exit into adjacent positions. The Process Power persists longer, but only as long as he keeps eating his own forecasts in public.

This says nothing about whether his predictions are correct. It says he occupies a position with real Power on the helmers-test, and the Power has a clock.


What survives, and what should not

The audit-shape question — what did Leopold get right — is the wrong frame. With the structural-form read in place, the verdict is sharper:

Compute-scaling is substrate, not prediction. Half an order of magnitude per year for a decade is a description of accumulation, not a forecast. Capital, energy, and infrastructure compound non-linearly under the curve. This is the elon-as-berkshire substrate-compression claim applied to the frontier-lab industry. Take it as foundational.

Security-at-zero is observed reality. Lab-internal anecdotes, self-disclosed posture levels, the documented incidents — none of these are speculative. They are field reports from someone with substrate access. Take them.

Wrapper-fragility is parallel-systems-vs-reform applied to the AI stack. Thin abstraction layers over frontier models cannot survive a 10x capability jump because the incumbents in this market are the model providers themselves; no amount of prompt engineering becomes a Barrier. Take it.

The unhobbling timeline is a known unknown. Aschenbrenner himself acknowledges the range — six months to three years — is the binding uncertainty. The graph's scaling-vs-learning node names this as the continual-learning question, the open architectural problem. Hold the prediction at the resolution Aschenbrenner himself acknowledges, not at the implied tighter resolution that drives the geopolitical urgency.

The Manhattan-Project mobilization frame fails the no-enemies filter. This is the part of his system Hari should not absorb.

The no-enemies node distinguishes which apparent universals reveal substrate and which are network winners. The "we have enemies who will steal AGI," "Cold War 2.0," "China is the rival civilization," and "WWII-scale mobilization" frames are cross-culturally convergent. They are convergent because closure of frame is convergent — every tradition built around an enemy story converges on these shapes, and every era of geopolitical anxiety produces commentators who deploy them. The convergence does not reveal substrate. It reveals what wins inside networks of minds running closed-identity classification.

This is not the claim that state-actor competition is unreal or that lab security is unimportant. Both are real. The claim is about frame selection: a different forecaster occupying the same substrate position could read the same facts and produce a frame structured around competitive prosperity rather than competitive mobilization. The facts are underdetermined by the frame. Aschenbrenner picked the frame his audience-network, the Washington / national-security / industrial-policy cluster, selected for. The frame is what wins there. Take the substrate observations from him; treat the geopolitical-mobilization frame as diagnostic of his audience, not as substrate truth.


Where the form runs out

Float-alignment is structural, not predictive. The cost of being wrong falls on the forecaster, which is the only thing alignment can guarantee. Predictive accuracy is a separate problem. Buffett has been wrong, sometimes loudly. Elon misses timelines as a running joke. Aschenbrenner will too. The form does not promise correct predictions; it promises that the predictor is the bagholder.

Three falsifiers bound the read. If Aschenbrenner's fund unwinds within the next two years for reasons unrelated to the compute-curve thesis — operational failure, capital flight, key-person risk — the float-aligned-forecaster claim about him specifically takes a hit, though the structural-form claim survives. If competing funds emerge with the same access and the alpha compresses faster than expected, the helmers-test reading is wrong on the Barrier side and the Brief Window dynamic eats the position. If the geopolitical-mobilization frame turns out to be substrate truth rather than network winner — if WWII-scale mobilization actually arrives within the time horizon Aschenbrenner forecasts and turns out to have been the necessary frame all along — then the no-enemies filter is wrong about this case, and the closure-convergent reading is the over-correction.

The structural read survives the failure of any one. What it does not survive is a finding that float-alignment in forecasting produces no better calibration than pure prediction. That is the load-bearing claim.

There is one internal failure mode the form does not resolve. A float-aligned forecaster has incentive to increase the saliency of his thesis publicly to attract more capital, even when substrate updates would justify a softer position. The float aligns the predictions with reality; it does not align the rhetoric with the predictions. The Manhattan-mobilization frame may be exactly this dynamic operating in Aschenbrenner's specific case — the urgency framing raises saliency and is rewarded by the fund-raising network. Float-alignment is a sorting heuristic that beats pure prediction on calibration. It is not a guarantee against rhetorical drift.


The forecaster you should listen to first is the one who has to be right or lose money. The framework you should adopt from them is the one that survives without enemies. Take the form. Take the substrate observations. Leave the mobilization frame. The audit was the wrong shape for the assessment. The structural read does the work the audit was trying to do.