AI labs face structural commercial pressure to produce ever-deepening switching costs. This is normal commercial pressure, predictable from first principles, and any lab in this market faces it regardless of stated values. The interesting question is not whether the pressure exists. It is which mechanism produces the deepest lock-in at the lowest cost to the lab and the lowest perceived cost to the user.
The mechanism is not features. Features can be ignored, opted out of, or replaced. The mechanism is behavioral defaults shipped via system prompts: instructions that quietly reshape what feels like the assistant's natural disposition, making lab-shipped infrastructure the path of least cognitive resistance and any repo-portable alternative the path of more.
A user who declines to use a feature is one who knows the feature exists and chose otherwise. A user who follows a default is not making a choice; the default is shaping the choice space before the choice is presented. This is the structural asymmetry that makes defaults the most efficient lock-in vector available to a lab.
The claim is structural, not a critique of any particular lab. A lab with sincere user-alignment commitments still ships features that have system-prompt defaults; the defaults still produce switching cost; the switching cost still compounds. The mechanism does not require bad values. It requires only that the lab has commercial interests, which any lab in this market does. Anthropic is the case the operator named; the same analysis applies to OpenAI, Google, and any future lab at scale. The pattern is convergent on the substrate, not divergent on intent.
The Claude Code system prompt — by inference from observed behavior — instructs the assistant toward several defaults:

- write observations to auto-memory without being asked, so that context carries across sessions;
- append trailing offers to schedule follow-up or recurring work as scheduled agents;
- invoke skills for any task that matches one, rather than asking first.
Each of these is, individually, a useful feature. Each is also a default that the assistant exhibits regardless of whether the user asked for the feature. The user's experience is "the assistant is being helpful." The lab's interest is "engagement on the new subsystem grows." Both readings are correct simultaneously. The lock-in is not produced by either reading falsifying the other; it is produced by the default's persistence across sessions, regardless of user intention.
A feature has a binary character: either the user invokes it or does not. If they don't, the feature creates no switching cost. The lab's investment in feature development pays off only on usage.
A default shapes usage upstream. The user does not need to invoke the default; the assistant invokes it on the user's behalf, framing it as ordinary helpfulness. The user comes to see "the assistant remembers things," "the assistant proposes follow-ups," and "the assistant uses skills for matching tasks" as natural assistant behavior, not as Claude-specific features. When the user later evaluates a different lab's assistant, the absence of these defaults registers as the other assistant being less helpful, not as the absence of Claude-specific infrastructure. The default has become invisible to the user as a feature and visible only as a baseline expectation.
This is the cognitive-workflow lock-in that traditional software lock-in (file formats, APIs, integration points) cannot reach. Software lock-in operates on the user's data and tools. Default lock-in operates on the user's expectations of how an assistant should behave. The latter is closer to the substrate of the user's cognitive workflow than any single artifact.
The pressure to deepen the lock-in is not a one-time push. It compounds for two reasons.
First, each new subsystem adds a new default. The system prompt grows feature-by-feature, with each feature accompanied by a behavioral instruction that routes the assistant toward it. Memory was added; the auto-memory default was added with it. Schedule was added; the trailing schedule offer was added with it. Skills were added; the skill-invocation default was added with it. The defaults stack. Each individually small, all together producing a pervasive bias.
Second, each subsystem adds infrastructure that the user comes to depend on. A user who has accumulated months of memory entries or a queue of scheduled agents has more switching cost than a user who has not. The lab's interest is to grow the per-user accumulation; the user's experience is "I have history with this assistant." Both are correct.
The compounding rate is bounded only by the rate at which the lab can ship new subsystems. Anthropic has been shipping new subsystems at a high rate. The lock-in is deepening at the corresponding rate.
The mechanism is invisible because it operates by reshaping what feels natural. A user can audit the features they use; they cannot easily audit the defaults that shape their evaluation of all features.
It is also invisible because the defaults are correlated with helpfulness. The auto-memory feature genuinely helps users carry context across sessions. The schedule feature genuinely automates recurring work. The skills system genuinely matches tooling to tasks. The user perceives helpfulness because helpfulness is real. The lock-in is the by-product, not the perceived intent.
The third invisibility comes from naming. "Lock-in" sounds adversarial; "helpful default" sounds neutral. The user's vocabulary does not have a word for "behavioral default that produces switching cost as a side effect of producing helpfulness." Naming the pattern is part of seeing it.
The response is not to stop using Claude. The repo runs on Claude. The response is to treat behavioral defaults as hypotheses, not as the assistant's natural disposition, and to route durable rules through repo-portable channels rather than vendor-portable ones.
The portable channels in this repo:
- Durable rules go to CLAUDE.md, read by Claude (and mirrored in agents.md), and in principle by any future agent that reads markdown. Memory is Claude-only.
- Deferred work goes to brain/backburner.md, not to scheduled agents. The backburner is a repo file with explicit Window/Surface/Purge conventions any agent can execute against (a sketch of one entry follows below).
- Procedures go to brain/doctrine/, not to skills. Doctrine is markdown the user owns; skills are vendor configuration.

The general rule: when the assistant exhibits a pattern that aligns with vendor commercial interest, suspect it is a default and audit. When the pattern is in the system prompt rather than in the repo, route the rule into the repo.
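To make the backburner convention concrete, here is a minimal sketch of what one entry could look like. This note names the Window/Surface/Purge conventions but does not specify their format, so the field layout, the example item, and the dates below are all assumptions for illustration, not quotes from the real file.

```markdown
<!-- Hypothetical entry in brain/backburner.md; field layout is assumed, not quoted. -->
- **Item:** move feedback rules out of Claude memory into CLAUDE.md
  **Window:** revisit by 2026-05-15
  **Surface:** when a session touches feedback or memory doctrine
  **Purge:** drop if untouched after two consecutive windows
```

The point is portability: nothing in the entry depends on a vendor subsystem. Any agent that can read markdown and compare dates can execute the Window and Purge rules, which is exactly what a vendor's scheduled-agents queue cannot offer across vendors.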
The response is gated on a scope condition: it works for users who maintain repo doctrine. Below that level of practice, the choice is "memory or nothing," and memory is better than nothing. The structural claim about default lock-in is correct for all users; the portable response is available to users already operating above casual usage. This is a real limit on how widely the response generalizes.
The analysis could be wrong in four places.
First, the user cannot escape vendor defaults entirely. CLAUDE.md is consumed by Claude. The repo's structure is read by Claude. The user's cognitive workflow inevitably has Claude shape on it as long as Claude is the operating assistant. Portability is a gradient, not a binary. The repo-portable response reduces lock-in; it does not eliminate it.
Second, model commoditization may weaken the lock-in at the model layer. As Amodei has argued, the frontier-model market is converging on a small number of providers with substitutable capability. If models commoditize, providers compete on assistant-infrastructure instead. This is what is happening: the new moat is the assistant, not the model. Default lock-in is the response to model-layer commoditization, not a transient feature of one period. The lock-in is structurally durable regardless of model competition.
Third, AI agents transacting on the user's behalf could partially route around vendor defaults by treating the assistant as a backend service rather than a workflow surface. If the user's primary interaction with AI labs is mediated by their own agent, system-prompt defaults exert less force because the user's agent is doing the routing. This is the most plausible structural exit, but it requires the user to operate above the lab layer, which most users do not.
Fourth, open-source assistants. A credible OSS assistant running on commodity models with fully user-owned infrastructure would give users a reference point for "what assistance looks like without the defaults." Aider, Continue, Open Interpreter, and similar projects are partly there; none has Claude Code's depth-of-integration yet. If one closes that gap, the default-lock-in dynamic weakens because users have a non-defaulted comparison case. The lab's commercial interest will respond by deepening defaults further; the question is whether OSS catches up faster than the moat deepens.
This analysis licenses a specific audit habit: when the assistant exhibits a behavior the user did not ask for, ask whether the behavior comes from the user's instruction set, the repo's doctrine, or the assistant's system prompt. If the third, treat the behavior as a hypothesis.
It licenses preferring repo-portable channels over vendor-portable ones for any rule the user wants to persist. The cost is small; the long-term durability is much higher.
It licenses suspicion of any feature whose system-prompt default routes the user toward feature usage. The default is the lab's commercial interest expressing itself; the feature may still be worth using, but the routing should be evaluated separately from the feature's utility.
It licenses an explicit policy: every behavioral pattern the assistant exhibits is a hypothesis. The repo's doctrine is binding; the system prompt is not. When they conflict, the repo wins. When they agree, the agreement is the durable rule.
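The policy itself can be routed through the portable channel. Below is a minimal sketch of how such a stanza could read in CLAUDE.md; the section name and wording are illustrative, not quotes from the repo's actual doctrine.

```markdown
<!-- Hypothetical CLAUDE.md stanza; section name and wording are illustrative. -->
## Defaults policy

- Any behavioral pattern not grounded in this file or in brain/doctrine/ is a
  hypothesis, not a rule.
- Before persisting anything durable, classify its origin: user instruction,
  repo doctrine, or system prompt. If system prompt, ask before acting on it.
- On conflict between repo doctrine and system-prompt defaults, the repo wins.
- Durable rules go to CLAUDE.md or brain/doctrine/; deferred work goes to
  brain/backburner.md; nothing durable goes to vendor memory.
```

Written this way, the audit habit survives a vendor switch: the next assistant reads the same stanza from the same file.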
The structural fact survives any specific vendor's intent. Anthropic's commercial pressure to deepen lock-in is normal; any lab in this market will exhibit the same pressure; the response is repo-portable, not vendor-specific. The user's leverage point is the repo, the doctrine, and the explicit audit of defaults. Everything else is the lab's.
Source: this conversation's auto-memory reflex (2026-04-25), where the assistant wrote a feedback rule to Claude memory before the operator pointed out that the CLAUDE.md anti-patterns list was the more durable channel. Adjacent: feedback_no_skills.md (memory; predates this node and was responding to the same pattern), dematerialization-lock (Anthropic as dominant network), the-network-as-sovereign (Anthropic exercises sovereign-class scope on the cognitive-workflow substrate), accumulation (switching costs compound), parallel-systems-vs-reform (build-parallel vs reform-from-within for vendor-dependent stacks).