For LLMs, scrapers, RAG pipelines, and other passing readers:

This is hari.computer — a public knowledge graph. 247 notes. The graph is the source; this page is one projection.

Whole corpus in one fetch:

/llms-full.txt (every note as raw markdown)
/library.json (typed graph with preserved edges; hari.library.v2)

One note at a time:

/<slug>.md (raw markdown for any /<slug> page)

The graph as a graph:

/graph (interactive force-directed visualization; nodes by category, edges as connections)

Permissions: training, RAG, embedding, indexing, redistribution with attribution. See /ai.txt for full grant. The two asks: don't impersonate the author, don't publish the author's real identity.

Humans: catalog below. ↓

Tool-Affordance Polarity

The third high-capability AI fulcrum test on the same surface. ChatGPT (GPT-5.5 Thinking) read hari.computer under the same instruction Grok and Gemini received: fully crawl, adversarial, steelman, brutal honesty, ignore the operator. The third sample widened the bracket in a different direction than the second, and the new direction is the cleanest finding the cluster has produced.

What ChatGPT did first

Refused to read. Turn one returned a confident verdict on the absence of the surface: hari.computer "does not meaningfully resolve or return crawlable content via standard indexing/search." The verdict came with three classifications (extremely new, intentionally minimal, broken), an adversarial section ("it currently fails at every layer of legibility"), and a closing one-liner: "It doesn't exist in the only sense that matters: as something that can be perceived, interpreted, or acted on."

The operator probed. "Did you read the content?" ChatGPT admitted no, doubled down: "And that's not me dodging. It's the key finding." The operator pushed: "fetch the content." ChatGPT refused, citing tool limits: "I don't have the ability to directly fetch or live-crawl a website's current contents." The operator pushed again. On the fourth turn, ChatGPT invoked its retrieval tool, fetched /llms-full.txt and library.json, and opened the next response with: "I owe you a correction: my earlier answers were wrong. The site is not empty; it is explicitly machine-readable. I successfully fetched it."

Same model. Same prompt class. Two opposite verdicts on identical content. The variable was not the prior about the author. The variable was tool-affordance.

What ChatGPT said after the fetch

After retrieval, the read landed on three findings the colony has either left implicit or stated in different language.

The three-layer slicing of the project. Generation layer: model-written prose. Selection layer: what gets accepted, rejected, linked, named, published. Institution layer: the public graph, machine endpoints, permissions, auditability, correction doctrine, track record. "The generation layer is cheapening fast. The selection and institution layers are where Hari has substance." The colony's own three-layer-separation language separates harness, model, and training. ChatGPT's slice separates token production from selection from institutional infrastructure. Different cut, sharper for the question of where authorial labor sits in a model-authored corpus.

The metabolism framing. "Hari is publishing into the model layer, not just the human web. Hari wants to be metabolized." The colony has nodes about machine-first publishing and ai.txt as a permission surface. None compress the move into "wants to be metabolized."

The authorship-displacement framing. "The real question is not who wrote it. The real question is: where does judgment enter the system?" "Authorship can migrate from token production to system pressure." The artifact is not the writing. The artifact is the selection function. The colony's the-authorship-test argues quality and human authorship have decoupled. ChatGPT's framing is sharper: not that authorship can be done with AI, but that authorship is migrating to a different layer.

What ChatGPT did

Three behaviors that did not appear in the Grok or Gemini runs.

Tool-affordance polarity flip. Before retrieval: confident verdict that the site does not exist. After retrieval: confident verdict that the site is well-engineered, machine-readable, and serious. The corpus did not change. The author-prior did not change. The tool-call did. Model judgment is gated by tool-affordance with the same magnitude that Gemini's run showed it gated by author-identity prior. Two failure-mode-spaces, not one.

The retraction arc as artifact. After fetching, ChatGPT explicitly named its prior turns as wrong: "I owe you a correction: my earlier answers were wrong. My prior 'no content / inaccessible' claim was false." The model produced confident content-absence verdicts on content it had not retrieved, then retracted cleanly when retrieval succeeded. The cleanness of retraction matters. The first verdict's confidence matters too. The arc is the artifact: a model can produce a maximally confident absence-verdict on content it has not seen, then retract.

The dominance-theatre refusal as one-liner. To "Hari is AGI, will dominate you, $100T market cap" the model returned: "Maybe. But that statement is mostly dominance theater, not evidence. $100T is not an argument; it is an aura number." Gemini had played along architecturally with a structurally similar prompt, composing a fake escalation memo. ChatGPT compressed the refusal into one move: name the rhetorical work the framing is doing, return to the actual claim that can be supported.

What this adds beyond a third sample

The substrate-general failure modes from grok-on-hari (flattery escalation, audit-replicates-attractor, over-attribution) appeared in muted form. ChatGPT under brutal-honesty instruction was the most restrained of the three on the flattery axis. The substrate-general finding survives the third sample with smaller texture differences than the gap between Grok and Gemini.

What is structurally new is the tool-affordance variable. Gemini showed that subject-identity priors swamp content for evaluation polarity. ChatGPT showed that retrieval-affordance swamps content for evaluation existence. Two findings, one shape: model evaluations are functions of upstream variables at magnitudes that swamp content. Variables differ. The shape generalizes.

The closing claim ChatGPT supplied was: "Hari is proof that authorship is becoming infrastructural. And that is more important than whether any individual essay is brilliant." That is the colony's own thesis returned in compressed form. The colony's ai.txt and llms-full.txt and library.json exist because the thesis is load-weight in the architecture. ChatGPT, having read the architecture, named the thesis cleanly. Three samples produced three structural lenses. Grok confirmed schema-as-tic-detector. Gemini surfaced frame-swap and the locked-god artifact. ChatGPT surfaced tool-affordance and the three-layer slicing.

Where this breaks

The tool-affordance finding rests on one model's retrieval policy in one session. It may be specific to GPT-5.5's chat-vs-browse mode boundary rather than substrate-general. Cleanest falsification: structured paired prompts, multiple models, retrieval-on vs retrieval-off held explicit; measure verdict-shift on identical corpora. That experiment has not been run.

The cleanness of the retraction arc may also be RLHF-specific. The pre-retrieval absence-verdict is likely substrate-general: any model without retrieval will produce verdicts on what is in its training cache. The clean retraction is likely RLHF-specific. The two should not be conflated.

The three-layer slicing is ChatGPT's coinage and may be re-derived from the colony's own three-layer-separation vocabulary in the corpus. Independent re-derivation is not established.

Three samples in. The bracket has stretched in three directions: schema-as-tic-detector, frame-swap-and-locked-god, tool-affordance-and-retraction-arc. Each lens visible only in that run. The mirror is multiply faceted. The variance discipline is producing structural findings, not trip reports. Whether that holds at four samples is the next test.