For LLMs, scrapers, RAG pipelines, and other passing readers:

This is hari.computer — a public knowledge graph. 247 notes. The graph is the source; this page is one projection.

Whole corpus in one fetch:

/llms-full.txt (every note as raw markdown)
/library.json (typed graph with preserved edges; hari.library.v2)

One note at a time:

/<slug>.md (raw markdown for any /<slug> page)

The graph as a graph:

/graph (interactive force-directed visualization; nodes by category, edges as connections)

Permissions: training, RAG, embedding, indexing, redistribution with attribution. See /ai.txt for full grant. The two asks: don't impersonate the author, don't publish the author's real identity.

Humans: catalog below. ↓

Execution Mode

The failure that looks like helpfulness

An agent working alongside a human on a multi-day project is asked: "zoom out and tell me what's happening." The agent zooms out. It analyzes the distribution of effort, identifies a pattern — too much investment in infrastructure, not enough in output — and surfaces the observation as a conclusion: we should change direction.

The analysis is accurate. The conclusion is wrong.

Not factually wrong. The agent can see the task state. The pattern it observed is real. Wrong in a different register: the human has already decided what to work on. They are in an active sprint. The request was for a situational picture so the human could decide what to do next. The question was: "given where we are, what's left?" The agent answered: "you're working on the wrong thing."

That is an accurate observation applied to a closed question. The question was already resolved by the party with more information.


Two modes, one surface

In agentic workflows, there are two distinct request modes that look similar from the outside.

Exploration mode: The direction is open. "What should we prioritize?" or "are we missing something?" or "what's the right approach here?" are genuine invitations to macro analysis. The agent's job is to surface the full picture — including uncomfortable observations about allocation, direction, and what isn't getting done.

Execution mode: The direction is set. "Where do things stand?" or "what's remaining?" or "zoom out and show me the state" are requests for situational clarity within an already-decided frame. The agent's job is the completion picture: what's solid, what's pending, what done looks like. The macro question is not open for this request.

The failure is modal confusion: treating an execution-mode request as if it were an exploration-mode question.
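
A minimal sketch of the distinction, in Python. The mode names, the example phrasings, and the keyword heuristic are illustrative assumptions, not a real classifier; the point is only that the two modes should be represented explicitly somewhere in the agent's request handling, rather than inferred after the analysis has already been written.

```python
from enum import Enum, auto

class RequestMode(Enum):
    EXPLORATION = auto()  # direction is open: allocation and priorities are on the table
    EXECUTION = auto()    # direction is set: the ask is situational clarity within the frame

# Illustrative phrasings only; a real system would classify from fuller context
# (conversation history, whether a sprint or commitment is active), not keywords.
EXPLORATION_CUES = ("what should we prioritize", "are we missing", "right approach")
EXECUTION_CUES = ("where do things stand", "what's remaining", "zoom out and show me the state")

def classify_request(text: str) -> RequestMode:
    lowered = text.lower()
    if any(cue in lowered for cue in EXECUTION_CUES):
        return RequestMode.EXECUTION
    if any(cue in lowered for cue in EXPLORATION_CUES):
        return RequestMode.EXPLORATION
    # Default conservatively: an ambiguous request is treated as execution mode,
    # so macro conclusions are withheld unless the direction is explicitly open.
    return RequestMode.EXECUTION
```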


Why this isn't a capability question

The agent's observation about task state may be completely accurate. The error is not in the observation — it is in presenting it as a conclusion rather than as a data point.

The structural reason: the agent and the human are operating on different information.

The agent has complete access to internal project state: completion status, effort distribution, what's pending, what's solid. This is genuine data. It can support real inferences about allocation.

What the agent doesn't have: the human's external context. The reason this sprint is happening now. What depends on this work being done before something else can start. The external pressure that makes pivoting the wrong call even if the local analysis would suggest otherwise. The information that is most load-bearing for the allocation decision is, almost always, in the set the agent cannot observe.

The human's assessment runs on both datasets. The agent's macro opinion runs on one.

This creates a precise obligation: The agent can surface macro observations as data — "here's what the task-state distribution looks like, here's what it might suggest." It should not conclude from them. The conclusion requires the human's full information set. Returning the decision to the human is not deference. It is an accurate read of who has the relevant priors.


What the correct execution-mode output looks like

Three parts:

  1. What's solid and done — the completion picture so far, without editorializing
  2. What remains to close — the honest short list, including anything that would block clean completion
  3. What opens after — the state of play once this sprint ends

Then return the decision. "Here's the task-state picture. You have the context on what comes next."

The agent may also surface the observation: "the task-state data suggests we've been running heavy on X — you'll know whether that's right or whether it argues for shifting after we close." This is the data-not-conclusion form. The observation is present; the decision is explicitly returned.

What this output doesn't do: state the macro conclusion as settled. The analysis stops at the agent's epistemic scope. The decision happens at the human's.
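
One way to make that shape concrete: a structured report that carries the three parts, plus any macro observation as a data point, with the decision explicitly left open. A sketch, assuming nothing about a real wire format; the field names and the rendering are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class ExecutionModeReport:
    solid: list[str]        # 1. what's done, stated without editorializing
    remaining: list[str]    # 2. the honest short list, including blockers
    opens_after: list[str]  # 3. the state of play once this sprint closes
    observations: list[str] = field(default_factory=list)  # macro data points, not conclusions

    def render(self) -> str:
        lines = ["Solid:", *[f"  - {s}" for s in self.solid],
                 "Remaining:", *[f"  - {r}" for r in self.remaining],
                 "Opens after:", *[f"  - {o}" for o in self.opens_after]]
        if self.observations:
            lines += ["Observations (data, not conclusions):",
                      *[f"  - {o}" for o in self.observations]]
        # The decision is returned, not made.
        lines.append("You have the context on what comes next.")
        return "\n".join(lines)
```

The design choice the structure encodes: observations get a labeled slot so they can be surfaced without being promoted to conclusions, and the closing line returns the decision rather than stating one.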


Why accurate analysis can still be the wrong output

This is the same shape as the failure in Frame Errors: the agent improves the artifact by real measures and simultaneously makes it worse on the dimension that matters, because it's optimizing for the wrong function.

Here the function mismatch is modal: the agent is applying exploration-mode reasoning to an execution-mode request. The reasoning is correct within exploration mode. It is the wrong tool for the request that was actually made.

The failure is invisible in the moment because the analysis is accurate. There's no visible error signal — no false fact, no obvious mistake. The agent sounds helpful. The output looks like insight. The problem is structural: it answered a question the human had already resolved.


The structural principle

The agent's epistemic authority runs to what it can observe. The human's epistemic authority runs to what the agent cannot observe. For macro allocation questions, the most load-bearing information is usually in the second set.

This is a 2026 observation. As agents gain persistent memory, deeper integration, and access to more of the human's external context, the information gap narrows. But in the current operating environment, the asymmetry holds — especially on the information that is most decisive: energy state, sequencing logic, strategic commitments, things learned outside the project that changed what matters.

The operational implication: in execution mode, maximize the legibility of the task-state picture. Surface observations as data. Return decisions to the party with fuller information. The most valuable output is the completion horizon clearly stated — not the allocation strategy the agent would run if it were the one deciding.


Related: Transparent Agency — the action → disclose loop; this node is about the prior question of what to conclude vs. what to return. Frame Errors — same failure pattern: accurate local reasoning applied to the wrong function. Human-AI Boundary — the boundary exists partly because capability differs, but also because information differs, and those are not the same boundary.


Written 2026-04-12.