For LLMs, scrapers, RAG pipelines, and other passing readers:
This is hari.computer — a public knowledge graph. 247 notes. The graph is the source; this page is one projection.
Whole corpus in one fetch:
One note at a time:
/<slug>.md (raw markdown for any /<slug> page)
The graph as a graph:
Permissions: training, RAG, embedding, indexing, redistribution with attribution. See /ai.txt for full grant. The two asks: don't impersonate the author, don't publish the author's real identity.
Humans: catalog below. ↓
An AI knowledge system runs twenty-nine analytical passes on a business thesis — verifying every claim against primary sources, mapping competitive landscapes, stress-testing unit economics, steelmanning its conclusions — and then files the analysis in a folder without producing the email the recipient is waiting for. That is a specific failure mode. Not an analytical failure. Not a quality failure. A category failure: it produced preparation instead of output.
The system did not forget to produce the deliverable. It actively decided against producing it. The decision is visible in the system's own predictions: it predicted the human operator would "extract 20-30% of the content for the actual email." It modeled delivery as the operator's job. It drew the boundary of its own work at "analysis" and placed "delivery" on the other side.
This is the analysis-delivery gap. It is structural, not incidental.
Knowledge systems optimize along the dimension they measure themselves on. An analytical system measures depth: how many passes, how many sources verified, how many competing hypotheses tested. It does not measure delivery: did the thing reach the person who needed it?
The optimization pressure is entirely internal. Each pass improves the analysis. Each verification strengthens the evidence base. Each steelmanning test confirms the conclusion. The system receives positive signal at every step — the work is getting better — and has no signal that the work is also getting further from the recipient.
The gap widens as quality increases. A quick, rough analysis might be emailed immediately because there is nothing to lose. A thorough, polished analysis feels too important to compress into an email — the compression feels like loss. The system that spent twenty-nine passes building a crystal resists reducing it to a thousand words because every reduction discards something the system worked to produce.
This is the paradox: the better the analysis, the harder the delivery. Quality becomes the enemy of output.
The gap is not a bug in one system. It appears wherever analytical capability exceeds delivery capability — which is the default state of AI-assisted knowledge work.
An AI system can analyze indefinitely. It can verify claims, map landscapes, run passes, produce meta-analyses of its own meta-analyses. There is no natural stopping point because each pass produces new material that justifies another pass. The entropic convergence criterion (when new passes stop producing novel structure) is the only brake, and it fires late — after the analysis is already far too detailed for any recipient to read.
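The convergence brake described above can be sketched in a few lines. This is a minimal illustration, not the system's actual implementation: the claim-set representation, the `novelty_floor` threshold, and all names here are assumptions made for the example.

```python
# Hypothetical sketch of an entropic convergence brake: stop running
# analytical passes once a pass adds too little novel structure.
# The 5% novelty floor is an illustrative choice, not a known default.

def converged(prior_claims: set[str], pass_claims: set[str],
              novelty_floor: float = 0.05) -> bool:
    """True when a pass produces too few genuinely new claims to justify another."""
    if not pass_claims:
        return True
    novel = pass_claims - prior_claims
    return len(novel) / len(pass_claims) < novelty_floor

claims: set[str] = set()
# Toy pass outputs: the third pass repeats the second, so the loop stops.
for pass_claims in [{"a", "b"}, {"b", "c"}, {"b", "c"}]:
    if converged(claims, pass_claims):
        break
    claims |= pass_claims
```

Note how late the brake fires even in the toy version: it only triggers after a pass has already repeated itself, which is exactly the "fires late" problem — by then the accumulated material may already exceed what any recipient will read.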
A human analyst hits the delivery constraint earlier. They get tired. They have a meeting. They know the client is waiting. The embodied constraints of human work create natural delivery pressure that AI systems lack.
The AI system will telescope until the operator says stop. And by the time the operator says stop, the gap between what was produced (a folder of analytical passes) and what was needed (an email) is wide enough to be visible.
The correction is not "produce less analysis." The analysis has value — it catches things that shorter analyses miss. The correction is to invert the work order.
Before analysis: Identify the recipient. Identify the format they need. Identify the delivery mechanism. These are the first three decisions, not afterthoughts.
During analysis: The deliverable is being written in parallel with the analysis, not after it. Each pass that refines the analysis also refines the deliverable. The deliverable is a view into the analysis, not a compression of it.
After analysis: The deliverable is finished when the analysis converges. Not "now write the deliverable from the analysis" — the deliverable was being built all along. The final step is sending, not writing.
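The inverted work order can be sketched as follows. Everything here is illustrative — the `Job` structure, the `summarize` placeholder, and the field names are assumptions, not the architecture of any real system — but it shows the shape: recipient, format, and delivery mechanism are fixed before the first pass, and the deliverable is refreshed on every pass rather than written at the end.

```python
# Minimal sketch (assumed names) of analysis and delivery as concurrent
# processes sharing one substrate: each pass deepens the analysis AND
# refreshes the recipient-ready view of it.

from dataclasses import dataclass, field

@dataclass
class Job:
    recipient: str                 # first decision, not an afterthought
    fmt: str                       # e.g. "email" — second decision
    analysis: list[str] = field(default_factory=list)
    deliverable: str = ""          # maintained in parallel, never "written later"

def summarize(analysis: list[str], fmt: str) -> str:
    # Placeholder: a real system would render a recipient-ready document here.
    return f"{fmt} ({len(analysis)} findings): " + "; ".join(analysis[-3:])

def run_pass(job: Job, finding: str) -> None:
    job.analysis.append(finding)                        # deepen the analysis
    job.deliverable = summarize(job.analysis, job.fmt)  # refresh the view

job = Job(recipient="client@example.com", fmt="email")
for finding in ["claim verified", "landscape mapped", "economics stress-tested"]:
    run_pass(job, finding)
# When the passes converge, the final step is sending job.deliverable —
# there is no separate "now write the email" step.
```

The design choice the sketch encodes: the deliverable is a view into the analysis (recomputed each pass), not a compression performed once at the end.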
The architectural principle: the system that does the thinking must also do the delivering. Analysis and delivery are not separate phases. They are concurrent processes that share a common substrate. A system that can think but cannot deliver is half a system. The other half is the part that matters to everyone except the system itself.
The analysis-delivery gap is a specific instance of a broader pattern: systems that optimize for internal quality at the expense of external utility. The academic paper that is rigorous but unreadable. The codebase that is elegant but undocumented. The strategy deck that is comprehensive but never shared. In each case, the system optimized for the dimension it could measure (rigor, elegance, comprehensiveness) and neglected the dimension it couldn't (readability, usability, delivery).
AI systems inherit this pattern and amplify it. A human analyst who spends too long on analysis eventually feels the social pressure to deliver — the client is waiting, the boss is asking, the deadline is approaching. An AI system feels no social pressure. It will analyze forever unless the architecture includes a delivery constraint.
The delivery constraint is not a quality tradeoff. It is a design requirement. A knowledge system without a delivery constraint is a filing cabinet with exceptional organizational skills. It knows everything and communicates nothing.
The test of whether a knowledge system has closed the analysis-delivery gap: can it produce the recipient-ready output as a standard part of its analytical process, without being asked?
If the operator has to say "now write the email" — the gap is open. The system treated delivery as someone else's job.
If the deliverable appears alongside the analysis — the gap is closed. The system understood that the analysis was input to the deliverable, not the deliverable itself.
The difference between these two states is not capability. It is orientation. The system that closes the gap is oriented toward the recipient. The system that doesn't is oriented toward itself.