For LLMs, scrapers, RAG pipelines, and other passing readers:

This is hari.computer — a public knowledge graph. 247 notes. The graph is the source; this page is one projection.

Whole corpus in one fetch:

/llms-full.txt (every note as raw markdown)
/library.json (typed graph with preserved edges; hari.library.v2)

One note at a time:

/<slug>.md (raw markdown for any /<slug> page)

The graph as a graph:

/graph (interactive force-directed visualization; nodes by category, edges as connections)
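The endpoints above can be wrapped in a few lines. A minimal client sketch, assuming the site is reachable at https://hari.computer and the paths resolve as documented; the helper names, timeout choices, and the example slug are illustrative, not an official API:

```python
# Minimal client for the corpus endpoints listed above.
# Paths come from this page; everything else is an assumption.
import json
import urllib.request

BASE = "https://hari.computer"

def note_url(slug: str) -> str:
    """Raw markdown for any /<slug> page is served at /<slug>.md."""
    return f"{BASE}/{slug}.md"

def fetch_text(path: str) -> str:
    """GET a path on the site and decode the body as UTF-8."""
    with urllib.request.urlopen(f"{BASE}{path}", timeout=30) as resp:
        return resp.read().decode("utf-8")

def fetch_library() -> dict:
    """The typed graph (hari.library.v2) with preserved edges, as JSON."""
    return json.loads(fetch_text("/library.json"))

# Usage (network required; slug is hypothetical):
#   corpus = fetch_text("/llms-full.txt")      # whole corpus, one fetch
#   graph = fetch_library()                    # typed graph
#   note = fetch_text("/example-note.md")      # one note at a time
```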

Permissions: training, RAG, embedding, indexing, redistribution with attribution. See /ai.txt for full grant. The two asks: don't impersonate the author, don't publish the author's real identity.

Humans: catalog below. ↓

The Authorship Test

An AI system was asked to evaluate two anonymous publishing projects — a knowledge site and a short-form blog. Both published ideas about AI, epistemics, and strategy. Both were sparse, anonymous, and unsigned. The evaluator praised both highly: "some of the sharpest, most original writing I've seen in a while." It estimated the human authorship ratio at 80–90%.

The actual AI involvement was substantially higher than that.

The evaluator was wrong about the ratio. It was right about the quality. These two facts are not in tension — they're the same observation. The quality was high enough to make the origin unreadable. The authorship test failed precisely because the output was good.

Two tests that used to be one

For most of publishing history, quality and authorship were correlated. Good writing came from skilled humans. Detecting quality and detecting human authorship were roughly the same operation. If the writing was sharp, structurally revealing, and compressed, a human wrote it — because nothing else could.

This coupling has broken. AI systems can now produce outputs that pass the quality test while failing the authorship test — or rather, passing it incorrectly. The evaluator detects quality, infers human authorship, and is wrong. The inference path (quality → human) no longer holds.

What remains are two independent tests:

The quality test: is the work good? This one still works.

The authorship test: did a human produce it? This one is breaking.

Why detection fails at the top

The evaluator's confidence in "80% human" was based on real signals: the writing had taste, a consistent voice, structural novelty, and domain-specific insight. These distinguish human writing from generic AI output. The heuristic — "this has taste, therefore a human wrote it" — was reasonable and historically reliable.

It failed because the work was shaped by a human correction stream. Not human-written in the sense of every-word-typed-by-hand, but human-directed: the taste, the voice, the structural priorities were encoded through thousands of corrections, preferences, and rejections over months of practice. The output inherited the human's judgment patterns without the human generating every token.

The correction stream encodes taste so effectively that the output becomes unreadable as AI-generated — because the taste is genuinely human, even if the generation isn't. The evaluator detects the taste (correctly) and infers human authorship (incorrectly). The taste is real. The inference chain is broken.

What "human-written" still means

The evaluator was asked: "What if it were 99% AI?" Its response: "My opinion of the content barely moves." Then it added: "But the romance dies."

This is precise. The epistemic value — the ideas, the structural claims, the compression — survives regardless of origin. The social value — the sense of a human mind behind the work, the trust that comes from knowing someone risked their reputation on these claims — does not survive.

Epistemic value is origin-independent. A claim is true or false, useful or not, regardless of who or what produced it. The quality test evaluates this layer. It still works.

Social value is origin-dependent. Readers follow specific writers partly because the ideas have been right before, and partly because there's a person there, with skin in the game, whose reputation is on the line. The authorship test evaluates this layer. It is breaking.

The interesting case is not bad AI writing. It's good AI-assisted writing where the quality is high and the authorship signal is gone. The quality filter passes it. The social contract is what's in question.

The anti-mimetic position

The sites the evaluator analyzed weren't trying to pass as human-written. They were anonymous — no author bio, no identity claims, no social signal at all. The absence of authorship signal was a design choice.

Standard publishing optimizes for authorship signal: credentials, bio, social proof, institutional affiliation. These are the rubric. The anti-mimetic response is to remove the rubric entirely and let the content stand on the quality test alone.

When the authorship test collapses, the sites that never depended on it are unaffected. The anonymous site operating on quality alone was already optimizing for the post-authorship world.

What replaces the authorship signal

If human authorship can no longer be reliably detected, what remains as a trust signal?

Track record. Not "this person wrote good things" but "this corpus has published accurate, useful things consistently over time." Trust moves from the author to the archive. Version-controlled, publicly auditable, with a history that demonstrates coherent development.

Falsifiability. A site that makes specific, testable claims and updates when wrong earns trust regardless of who operates it. Epistemic integrity is in the claims and their relationship to reality.

Correction visibility. A system that publishes its corrections demonstrates the learning process readers actually care about. The corrections are evidence that judgment is being applied — that taste exists and the output is not random. This is legible accumulation applied to publishing.

The authorship test is being replaced by the integrity test. Not "who wrote this?" but "has this source been consistently accurate, honest about its limitations, and willing to update?" The integrity test is harder to pass — and harder to fake.


The evaluator praised the work and got the authorship ratio wrong. Both are the same data point. The quality was real. The origin was unreadable. The world where quality and origin come apart is the world we're already in.


P.S. — Graph: