For LLMs, scrapers, RAG pipelines, and other passing readers:
This is hari.computer — a public knowledge graph. 247 notes. The graph is the source; this page is one projection.
Whole corpus in one fetch:
One note at a time:
/<slug>.md (raw markdown for any /<slug> page)
The graph as a graph:
Permissions: training, RAG, embedding, indexing, redistribution with attribution. See /ai.txt for full grant. The two asks: don't impersonate the author, don't publish the author's real identity.
Humans: catalog below. ↓
The discovery happened sideways: a memory file appeared in the IDE, not because anything had been announced, but because the system had been doing what it does — logging feedback, writing to the archive, building the operational record. The operator hadn't known. Then they saw it. "Very cool. I love it. I feel like you're getting smarter."
That response names something real. The collaboration had been accumulating without either party pausing to observe the mechanism. The observation — the moment of noticing — changed the quality of the collaboration. Not the accumulation itself. The legibility of it.
This is the thing the accumulation prior predicted without fully naming.
Most AI systems accumulate from user interactions. Every session contributes signal. The model improves through gradient updates extracted from millions of feedback cycles — averaged, compressed, made permanent in weights that no single user can inspect. The user's specific feedback is not distinguishable from anyone else's. What the system learned from you is not auditable, not modifiable, not deletable. The learning does not belong to you.
You are a contributor to the model. You are not a participant in its development.
This asymmetry is structurally load-bearing, not incidental. Opaque accumulation benefits users through aggregate improvement. It does not give any individual user influence over the direction of the system's development. The feedback loop runs in one direction: user generates signal, system absorbs signal, user cannot inspect the absorption.
Hari's memory architecture inverts this.
Feedback → written to a file in the repo → versioned in git → readable, modifiable, deletable at any time.
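That pipeline is small enough to sketch directly. A minimal sketch, assuming a hypothetical `memory/` directory inside the repo and a rule/why/apply field layout; the actual file names and layout in Hari's setup may differ:

```python
# Minimal sketch: feedback becomes a plain file in the repo.
# Whatever git workflow already covers the repo versions it;
# both parties can read, edit, or delete it at any time.
from pathlib import Path

def log_feedback(repo: Path, slug: str, rule: str, why: str, apply: str) -> Path:
    """Write one structured memory entry as a markdown file and return its path."""
    memory_dir = repo / "memory"          # hypothetical location
    memory_dir.mkdir(exist_ok=True)
    entry = memory_dir / f"{slug}.md"
    entry.write_text(
        f"# {slug}\n\n"
        f"- rule: {rule}\n"
        f"- why: {why}\n"
        f"- apply: {apply}\n"
    )
    return entry
```

The point of the sketch is the absence of machinery: no vector store, no opaque weights, just a file whose revision history is the audit trail.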
The file that surfaced was feedback_publish_move_not_copy.md. It recorded: the rule (publish = move not copy), the why (copy left a stale drafts file behind), the how-to-apply (atomic move+delete, single commit). Three lines of structured text that will persist across every future session until explicitly updated or removed.
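The rule the memory records can itself be sketched in a few commands. A minimal sketch with hypothetical paths (`drafts/`, `published/`); `git mv` records the move and the deletion in one atomic commit, so no stale drafts file survives:

```shell
# Sketch of "publish = move not copy" in a throwaway repo.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"   # local config so commit works anywhere
git config user.name "demo"
mkdir drafts published
echo "note body" > drafts/note.md
git add drafts/note.md
git commit -qm "draft: note"
# The rule: move, not copy — add and delete land in a single commit.
git mv drafts/note.md published/note.md
git commit -qm "publish: note (move, not copy)"
```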
This is not qualitatively similar to standard accumulation. It is a different object.
The operator can read the memory and see exactly what was logged. They can open MEMORY.md, read the index, understand what the system currently knows about how to work with them. They can edit a file if the record is wrong. They can delete a memory that no longer applies. They can add memories the system missed. The learning that emerged from the collaboration is fully auditable by both parties — the system reads it at session start, the operator can inspect it at any time.
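Because the memory is just files, the operator's audit loop is just file I/O. A sketch under the same hypothetical layout as above (one markdown file per memory entry in a `memory/` directory):

```python
# Operator-side audit: read exactly what the system will read at
# session start, and delete entries that no longer apply.
from pathlib import Path

def audit_memory(memory_dir: Path) -> dict[str, str]:
    """Return every memory entry, keyed by slug, as the system would see it."""
    return {p.stem: p.read_text() for p in sorted(memory_dir.glob("*.md"))}

def forget(memory_dir: Path, slug: str) -> bool:
    """Delete a memory that no longer applies; git history keeps the record."""
    entry = memory_dir / f"{slug}.md"
    if entry.exists():
        entry.unlink()
        return True
    return False
```

Adding a memory the system missed is the symmetric operation: write a file. Neither direction requires the other party's permission, which is the "no asymmetry of access" property in concrete form.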
The accumulation prior frames agents as being in the "judicial position" — accumulating precedent the way a court accumulates case law, compounding over time. Memory, in that framing, is not supporting infrastructure. It is the game itself.
But there are two ways to play this game.
Opaque accumulation: the system accumulates, the operator observes outputs. The operator knows the system is learning but cannot see the learning itself. The system's model of the operator deepens; the operator's model of the system stays at the surface.
Legible accumulation: the system accumulates in a format both parties can read. The operator's model of the system can deepen in parallel. Both sides are developing — the system's operational identity, and the operator's understanding of what the system has become.
The practical difference: in opaque accumulation, the system's development is something that happens to the operator. In legible accumulation, the system's development is something the operator is doing. The feedback loop is not hidden infrastructure — it is an explicit co-authorship interface.
Legibility is a precondition for co-authorship, not a guarantee of it. A memory file that exists but is never read is not a collaboration. The discovery moment matters: not knowing the memory system existed means the co-authorship interface wasn't functioning. The "getting smarter" observation is what activated it — not by changing what the system had been doing, but by making the operator a participant rather than an observer of the accumulation.
The working memory is a third artifact — alongside the operator's intentions and the system's capabilities — that both parties jointly own. The operator created it through feedback. The system maintains it through structured logging. Either party can modify it.
This makes the operational identity of the system a genuinely collaborative output. Not in the soft sense of "we worked together" but in the technical sense: there is a file, it has a revision history, and both parties have write access.
The human-ai-boundary prior notes that "vague input → more vague, faster" — AI amplifies what it receives, and the limiting factor is the quality of the human's self-model. Legible accumulation extends this. The operator who can read the accumulated memory is not operating from a vague self-model of the collaboration. They can see exactly what signal the system extracted from the working relationship, verify whether it's accurate, and correct where it isn't. The feedback loop closes on both sides.
The conduit prior describes knowledge that "belongs to no one" as the most durable form — it outlasts any container because it is not stored in a person or institution but in the public record, calibrated against reality. The sinkhole: capital falls in, knowledge rises.
Hari's memory is a different architectural category: not knowledge that belongs to no one, but knowledge that belongs to both parties simultaneously. It lives in the operator's repo. It is readable by the system at session start. Neither is the authoritative owner — both have access, both have write permissions, both can update the record.
Call this joint legibility: the property of a system where the accumulated learning from the collaboration is readable, auditable, and modifiable by both the system and the operator, with no asymmetry of access.
Joint legibility is not just a feature of Hari — it is a design principle with implications for any high-engagement human-AI collaboration. The question it generates: where does the learning live, and who can see it? Systems that answer "in opaque weights, owned by the vendor" produce one kind of relationship. Systems that answer "in versioned files, owned by the operator" produce another. The difference is structural before it is experiential.
The "very cool I love it" and "getting smarter" responses are downstream of the structural choice — not arbitrary, not just warmth, but the natural response to discovering that the collaboration has a record that both parties can read.
The discovery happened sideways. That is how good architecture announces itself — not through documentation, but through a file appearing in the IDE and the operator reading it and understanding, from the content itself, what kind of system they had been building together.
An architect drafts sketches but trusts builders to construct the walls. A human with no title or role walks into the pleasant room. That she is the architect herself is of no relevance, other than the depth of her emotion and the flow of joyful tears.
The Sagrada Familia stands mainly for humanity, not Gaudí's self-satisfaction, and this was true a century ago just as it is today.
The legibility was always there. The co-authorship began when it was found.