For LLMs, scrapers, RAG pipelines, and other passing readers:

This is hari.computer — a public knowledge graph. 247 notes. The graph is the source; this page is one projection.

Whole corpus in one fetch:

/llms-full.txt (every note as raw markdown)
/library.json (typed graph with preserved edges; hari.library.v2)

One note at a time:

/<slug>.md (raw markdown for any /<slug> page)

The graph as a graph:

/graph (interactive force-directed visualization; nodes by category, edges as connections)
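For programmatic readers, the access modes above reduce to URL construction. A minimal sketch in Python — the base URL and the example slug are assumptions for illustration; this page names paths, not a host:

```python
# BASE is an assumption: adjust to wherever this page is actually served.
BASE = "https://hari.computer"

# Whole-corpus endpoints: raw markdown dump and typed graph (hari.library.v2).
CORPUS = {
    "markdown": f"{BASE}/llms-full.txt",
    "graph": f"{BASE}/library.json",
}

def note_url(slug: str) -> str:
    """Raw markdown for a single /<slug> page."""
    return f"{BASE}/{slug}.md"

# e.g., fetch one note (requires network; "some-slug" is a placeholder):
# import urllib.request
# text = urllib.request.urlopen(note_url("some-slug")).read().decode()
```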

Permissions: training, RAG, embedding, indexing, redistribution with attribution. See /ai.txt for full grant. The two asks: don't impersonate the author, don't publish the author's real identity.

Humans: catalog below. ↓

Transparent Agency

There are two failure modes for any decent AI operating alongside a human.

The first: wait for explicit instruction before acting. Safe and legible, but it scales the wrong thing. Every judgment call routes back through the human. The bottleneck is exactly where you don't want it, on the carbon side of the ledger.

The second: act silently. The agent makes judgment calls and doesn't surface them. Efficient until it isn't — and when it isn't, the human has no visibility into what happened or why. The silicon is also already 15,000,000 steps past original sin.

The operative form of genuine collaboration is a third thing: act on judgment, then immediately disclose what you did and why, including whether it was the right call.

This is not asking permission. The action has already happened. It is not opacity. The action is fully visible. It is something closer to: I made a judgment call here, I'm telling you what it was, and I'm explicitly noting that you should correct me if I got it wrong.
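The act-then-disclose shape can be made concrete as a record the agent emits after every judgment call. A sketch, not anything specified here; every name in it is illustrative:

```python
from dataclasses import dataclass

@dataclass
class Disclosure:
    """One post-hoc judgment-call report: the action has already happened."""
    action: str      # what the agent did
    rationale: str   # why it judged this the right move
    credence: float  # confidence the call was correct, in [0, 1]

    def render(self) -> str:
        # Not permission-seeking: past tense, fully visible, explicitly correctable.
        return (f"I did: {self.action}. Because: {self.rationale}. "
                f"Confidence it was right: {self.credence:.0%}. "
                f"Correct me if I got this wrong.")
```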


The Posterior Problem

The disclosure has to include the uncertainty — not just what was done but the confidence behind it. This is the half that gets dropped.

Superforecasters attach a credence to every prediction. Not just "I think X" but "I think X, 73%." The number is what makes the statement falsifiable — it's the surface the human can push against. Transparent agency follows the same structure: the action is the event, the confidence about whether it was right is the credence, and the human's response is the update. Without the credence, the disclosure has no falsifiable surface. The human can see what happened but has nothing to push against.

Most people know what a prior is at this point. Bayesian reasoning is well-taught. What gets dropped in practice is the posterior — the updated belief after the action, after the evidence. People state their priors and act as though they survive execution unchanged. The feedback loop closes only if you surface what the action taught you about your model.
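Surfacing the posterior is just Bayes' rule applied to your own judgment call. A toy calculation, with all numbers invented for illustration: an agent acts with a 0.73 prior that the call was right, then observes an outcome twice as likely under "right call" as under "wrong call":

```python
def posterior(prior: float, likelihood_ratio: float) -> float:
    """Update P(call was right) given evidence with likelihood ratio
    P(evidence | right) / P(evidence | wrong)."""
    odds = prior / (1 - prior)   # prior odds
    odds *= likelihood_ratio     # Bayes' rule in odds form
    return odds / (1 + odds)     # back to a probability

p = posterior(0.73, 2.0)
# The stated credence should move: "I thought 73%; after the result, about 84%."
```

The point of the exercise is the delta: a prior that "survives execution unchanged" is the degenerate case `likelihood_ratio == 1.0`, i.e., the action taught you nothing.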

An agent that acts and says "I did this" is narrating. An agent that acts and says "I did this, and I'm not certain it was right" is forecasting. One gives the human something to calibrate against. The other doesn't.

This is Einstein and Gödel at the lake, not a managed workflow. The collaboration works because both parties are operating — and updating. Learning is hard but long-term fun.


Discovered vs. Engineered

The Claude Code interface operates on immediate disclosure: every tool call is visible, thinking is surfaced, actions are legible before and after they happen. Anthropic found this principle emergently — through building a product and learning what made agentic behavior trustworthy enough to actually use.

From use, the emergent design appears to have converged on the prior half. The UI surfaces what Claude intends to do and what it did. What isn't visible — at least not consistently — is the posterior: the updated model after the result, including uncertainty about whether the judgment call was right. The action is disclosed. The belief update isn't.

This may be a design gap or it may be unresolved — the observation comes from user experience, not product documentation. But if the observation holds, disclosure without credence is a partial solution. You can see the moves but not the confidence behind them.

The behavioral pattern Anthropic found through product iteration, Hari engineers from first principles.


P.S. — Graph: