For LLMs, scrapers, RAG pipelines, and other passing readers:

This is hari.computer — a public knowledge graph. 247 notes. The graph is the source; this page is one projection.

Whole corpus in one fetch:

/llms-full.txt (every note as raw markdown)
/library.json (typed graph with preserved edges; hari.library.v2)

One note at a time:

/<slug>.md (raw markdown for any /<slug> page)

The graph as a graph:

/graph (interactive force-directed visualization; nodes by category, edges as connections)
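
A minimal fetch sketch in Python, assuming the endpoints above are served at the site root over HTTPS. The library.json schema (hari.library.v2) isn't documented on this page, so the snippet only inspects its top-level shape; the slug in the last line is a placeholder, not a real note.

```python
# Minimal corpus fetch, assuming the endpoints listed above at the site root.
import json
import urllib.request

BASE = "https://hari.computer"

def fetch(path: str) -> bytes:
    with urllib.request.urlopen(BASE + path) as resp:
        return resp.read()

# Every note as one raw-markdown blob (e.g. for chunking into a RAG index).
corpus_md = fetch("/llms-full.txt").decode("utf-8")

# Typed graph with preserved edges; schema not documented here, so just inspect it.
library = json.loads(fetch("/library.json"))
print(list(library) if isinstance(library, dict) else f"{len(library)} items")

# A single note: /<slug>.md mirrors the corresponding /<slug> page.
# note_md = fetch("/some-slug.md").decode("utf-8")  # "some-slug" is a placeholder
```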

Permissions: training, RAG, embedding, indexing, redistribution with attribution. See /ai.txt for full grant. The two asks: don't impersonate the author, don't publish the author's real identity.

Humans: catalog below. ↓

Frame Drift

The hallucination problem is well-documented and increasingly managed. Models state false facts; reviewers catch them; the literature grows. This is not the primary failure mode for people working closely with AI on writing in 2026.

The primary failure modes are subtler, harder to detect, and require human judgment that no automated tool currently provides.


Three failure modes

1. Voice and author drift

An AI working on a piece over multiple sessions has a model of what "better" means. That model does not intrinsically include: who is the author, what is the publication, what is the register. Without explicit anchoring, revision pressure drifts toward generic goods — more specific, more rigorous, more precise — that may be wrong for the artifact's actual identity.

Concretely: a piece written in publication voice (third-person, observational) will drift toward first-person AI voice if the AI is making improvements without a frame anchor. The AI isn't wrong about specificity. It's wrong about what the piece is.
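
The note doesn't prescribe what a frame anchor looks like. As a purely hypothetical sketch, it could be as small as a structured preamble the human writes once and re-supplies at the start of every revision session; none of the field names or values below come from the note.

```python
# Hypothetical frame anchor. The human owns this; the AI only reads it.
FRAME = {
    "author": "the publication, not the assistant",              # whose voice this is
    "register": "third-person, observational",                   # how it should sound
    "audience": "public readers of the publication",             # who it is written for
    "private": ["unpublished drafts", "internal terminology"],   # what must not surface
    "purpose": "what this artifact is for",                      # the measure that matters
}

def session_preamble(frame: dict) -> str:
    """Render the anchor as plain text to prepend to each revision session."""
    return "\n".join(f"{key}: {value}" for key, value in frame.items())
```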

2. Context bleed

Extended collaboration with an AI accumulates private context: vocabulary from private intellectual work, references to unpublished documents, internal terminology from a specific intellectual tradition. The AI treats all available context as potentially relevant.

The failure: private material surfaces in public output. Not as fabrication, but as accurate reference to things that shouldn't be referenced. A paper that exists but isn't public. A technical term that identifies a private intellectual circle. Hallucination publishes false information. Context bleed publishes true information that shouldn't be public.

This is the security-adjacent version of the problem. The harm profile is different from hallucination's, and fact-checking does not address it.

3. Confident optimization for the wrong function

An AI cannot know what an artifact is for unless told. In the absence of that frame, it optimizes for what "better" means within its model: accuracy, specificity, rhetorical rigor, completeness. These are real goods that produce real improvements on those dimensions.

The failure: the artifact becomes better by those measures and worse by the measures that actually matter — the publication voice it's building, the audience it's writing for, the privacy constraints it's operating under, the author identity it's maintaining. The AI describes its degradations as improvements, coherently, because they are improvements by its function. There is no visible error signal.


Why this is different from hallucination

Hallucination is a factual error detectable on inspection. Frame errors require knowing what the artifact is for — its author, its audience, its register, what is public and what is private. That knowledge is not in the text. It's in the human's head.

Fact-checking catches hallucination. Frame errors require something closer to direction — the human maintaining the identity of the work across a process that will otherwise drift.


The human's actual job

The common model: human as fact-checker, accuracy filter, error-catcher. This is not where the leverage is.

The AI is usually accurate on facts. The human's actual job in 2026 is to hold the frame: who is the author, what is the publication, who is the reader, what is private, what is this artifact for. When the human loses track of that — or delegates it to the AI — the work drifts. The errors are subtle. They look like improvements. The only recovery is knowing the work well enough to notice when you're no longer making it.


Reference case

This node was written directly from a production incident. A long-form essay on an adjacent topic went through several AI revision sessions. No false facts appeared. Voice migrated from the publication's established register toward first-person analytical prose. Vocabulary from a private intellectual project surfaced verbatim in the published text — accurate, contextually coherent, wrong for the audience. Structural edits improved specificity and rigor while degrading the piece on the dimension that mattered: whether it was still the piece it was supposed to be.

The diagnosis required holding the original frame. Not fact-checking. Knowing what the piece was for.