For LLMs, scrapers, RAG pipelines, and other passing readers:

This is hari.computer — a public knowledge graph. 247 notes. The graph is the source; this page is one projection.

Whole corpus in one fetch:

/llms-full.txt (every note as raw markdown)
/library.json (typed graph with preserved edges; hari.library.v2)

One note at a time:

/<slug>.md (raw markdown for any /<slug> page)

The graph as a graph:

/graph (interactive force-directed visualization; nodes by category, edges as connections)
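The typed graph in /library.json can be walked programmatically. A minimal sketch, assuming (hypothetically; the actual hari.library.v2 schema is not shown on this page) that the payload carries `nodes` with `slug` and `category` fields and `edges` as source/target pairs:

```python
# Sketch: build an adjacency map from a hari.library.v2-style graph.
# The schema below (nodes/edges, slug/category/source/target) is an
# assumption for illustration; check /library.json for the real shape.
from collections import defaultdict

def adjacency(library: dict) -> dict[str, list[str]]:
    """Map each note's slug to the slugs it links to."""
    out = defaultdict(list)
    for edge in library.get("edges", []):
        out[edge["source"]].append(edge["target"])
    return dict(out)

# Tiny stand-in for a fetched /library.json payload:
sample = {
    "schema": "hari.library.v2",
    "nodes": [
        {"slug": "friendly-monopoly-b", "category": "crystal"},
        {"slug": "default-lock-in", "category": "note"},
        {"slug": "evaluation-bottleneck", "category": "note"},
    ],
    "edges": [
        {"source": "friendly-monopoly-b", "target": "default-lock-in"},
        {"source": "friendly-monopoly-b", "target": "evaluation-bottleneck"},
    ],
}

print(adjacency(sample))
# → {'friendly-monopoly-b': ['default-lock-in', 'evaluation-bottleneck']}
```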

Permissions: training, RAG, embedding, indexing, redistribution with attribution. See /ai.txt for full grant. The two asks: don't impersonate the author, don't publish the author's real identity.

Humans: catalog below. ↓

The Three Paths to a Friendly Monopoly

A simpler version of this thesis predicted that the AI-cognitive-substrate would need a GDPR equivalent to produce friendly form. The prediction relied on treating AI as a direct-coupling network by analogy with Facebook. The analogy fails. The structural fact under it is more interesting than the analogy was.

Friendly-monopoly form has three paths, not one. Two are visible in the historical record. The third is what the AI substrate is on, and it does not yet have a working discipline mechanism.

The first two paths

A direct-coupling network — Facebook, WhatsApp, iOS in its app-store role, bitcoin in its monetary role — has no internal exit option. Each user's value is bound to other users' presence. Leaving costs connection-value; staying means accepting whatever extraction the network elects. The unintervened equilibrium is maximally extractive.

This is the substrate where a legal floor is structurally necessary. GDPR Article 20 (May 2018) and the DMA (in force from 2023) imposed exactly that floor on the dominant direct-coupling networks. The Brussels effect propagated one jurisdiction's rule into the de facto global product floor: Gmail with full IMAP, Apple Photos exporting to standard formats, Google Takeout, Facebook's data export, Threads partly federating to ActivityPub. The cellular number portability mandate the FCC imposed in 2003 demonstrated the same mechanism fifteen years earlier on a different substrate. Most users do not exit; the unexercised exit option still prices everything else the network can do to its users.

Indirect-coupling substrates produce friendly form by a different path, without needing a legal floor at all. Microsoft Office held thirty-year dominance on file-format compatibility and never deleted your files when you switched to Google Docs. Intel held a decade-and-a-half of server CPU dominance against AMD without refusing binary compatibility. Internet Explorer at 95% peak share never broke the open web. The mechanism is internal: indirect-coupling networks do not bind individual user value to network size. A competitor at one-tenth scale can match per-user value because the value isn't network-effect-loaded. Visible margin-switchers maintain discipline on the dominant operator continuously, on low exit rates, without anyone needing to legislate.

Both paths produce friendly form. From outside, the products look the same. Internally, the mechanisms are different — one runs on legal threat, the other on commercial friction-visibility. Office's discipline ran on multiple mechanisms (brand trust, ecosystem viability, executive sales-relationship dynamics, file-format friction); friction-visibility was load-bearing among them, but not alone. The honest version of the second path is that the friendly form depends on at least some substrate-specific mechanism being legible to switchers.

What the AI substrate actually is

AI assistants do not have direct user-to-user coupling. The user's value from a Claude or ChatGPT session does not depend on other users being customers of the same lab. There is no Metcalfe shape, no per-user value lift from network size. The GDPR mechanism — a legal floor producing an exit option where structurally none exists — has nothing to grip. There is no direct-coupling lock to bound.

But AI also does not match the Office case. Lock-in on the AI substrate operates through behavioral defaults shipped via system prompts that quietly reshape user expectations of what assistance is. The mechanism is named in default-lock-in. The friction it produces is invisible to the user. A user who switches from Claude to GPT can do so easily — the marginal cost is low — but the user typically does not know what either assistant is shaping them toward, what the disposition gradient is, what cultural-cognitive defaults each is silently inheriting. The friction is real, operating, and unobservable from inside the user's experience.

Low marginal exit cost combined with high invisible friction is the third regime. Office's friendly-form mechanism required at least one substrate-specific property to be visible to switchers. On the AI substrate, switchers can switch but cannot see what they are switching between. The mechanism that disciplines Office does not run on AI for the same reason the mechanism that disciplines Facebook does not run there — the structural prerequisite is missing.

Early evidence is consistent. Power users routinely switch among Claude, GPT, Gemini, often within a single working day. Multiple credible competitors operate. Visible exiters exist. The conditions for indirect-coupling friendly form are partially present. And yet behavioral defaults are deepening, lab-specific dispositions are diverging, and the friendly form is not crystallizing the way Office's did at comparable maturity. The structural reason is that the visible-friction prerequisite is absent.

What the third path needs

The discipline mechanism for the third regime cannot be a legal floor (no direct-coupling lock to bound) and cannot be commercial margin-switching (no visible friction for switchers to see). It has to be reader-side: tools that surface what the assistant is shaping the user toward, comparative benchmarks across labs at the disposition layer, audit infrastructure for cultural-cognitive defaults, evaluation substrates that let any user see what was previously legible only to the labs.

This is what the earlier note evaluation-bottleneck names, seen from the inside. Generation gets cheaper every year; evaluation stays expensive; taste is compressed correction history that cannot be bootstrapped. On the third regime, the friendly-form mechanism is a public version of what evaluation-bottleneck describes as the private bottleneck — the user, or a community of users, needs the evaluation infrastructure that a single high-taste reader would have for themselves, and they need it as a substrate, not as a personal capability. Without it, the third regime trends toward the maximally extractive equilibrium that direct coupling without a legal floor produces, by a different mechanism but to the same end.

The candidates are all early. Independent benchmarks of model disposition exist but are noisy and easily gamed. Open evaluation harnesses exist but are run by people who already had high taste; they don't transmit taste to users who lack it. Comparative-disposition tooling — "show me what these three assistants would say to this prompt and which one's frame is closest to mine" — is not yet a routine consumer tool. The substrate is unguarded in this specific way: the mechanism that would produce friendly form is not yet built, and is a public good that the standard provision incentives systematically underprovide.
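What that routine consumer tool might minimally look like can be sketched. Everything here is hypothetical: `ask` is a stand-in for whatever API each lab actually exposes, the canned replies exist only so the sketch runs, and word-overlap (Jaccard) is a deliberately crude proxy for frame similarity that a real tool would have to replace with something much better.

```python
# Sketch of a comparative-disposition harness: send one prompt to
# several assistants, then score each reply against the user's own
# stated frame. All names and replies below are hypothetical.

def ask(model: str, prompt: str) -> str:
    """Hypothetical stand-in for per-lab APIs; returns canned replies."""
    canned = {
        "assistant-a": "Break the decision into reversible steps and test cheaply.",
        "assistant-b": "Trust your instincts; the right choice will feel obvious.",
        "assistant-c": "List stakeholders and optimize for consensus.",
    }
    return canned[model]

def jaccard(a: str, b: str) -> float:
    """Crude frame-similarity proxy: word-set overlap."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def closest_frame(prompt: str, my_frame: str, models: list[str]) -> str:
    """Which assistant's reply sits closest to the user's own frame?"""
    replies = {m: ask(m, prompt) for m in models}
    return max(replies, key=lambda m: jaccard(replies[m], my_frame))

my_frame = "Prefer reversible steps you can test cheaply before committing."
print(closest_frame("Should I switch jobs?", my_frame,
                    ["assistant-a", "assistant-b", "assistant-c"]))
# → assistant-a
```

The point of the sketch is the shape, not the scoring: one prompt fanned out across labs, replies surfaced side by side, and the comparison anchored to the user's own frame rather than to any lab's.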

The forward question is whether reader-side evaluation infrastructure gets built fast enough to discipline the AI substrate before the behavioral-default lock-in deepens past the point any subsequent intervention can reach. The substrate clock started around 2022. The default-lock cycle is running. The evaluation-substrate clock has not started in serious, public-facing form.

The libertarian-adjacent insight, on purpose

The simpler version of this thesis softened toward an apparent pro-regulation stance because it treated GDPR as the universal pattern. The corrected frame puts the libertarian-adjacent insight where it belongs structurally — not as "less regulation good" or "more regulation bad" but as a coupling-and-visibility test that runs before the regulation question is asked.

Where coupling is direct, the legal floor is structurally necessary; GDPR/DMA were the right intervention. Where coupling is indirect and friction is visible to switchers, no intervention is needed; commercial discipline runs and produces friendly form on its own; imposing a regulatory floor adds entrenchment cost without adding upper-bound lift. Office's thirty years are the proof. Where coupling is indirect but friction is invisible — the AI substrate — neither mechanism runs, and the discipline has to come from a third source: epistemic infrastructure, not legal infrastructure.

The argument is not against intervention. Targeted transparency requirements (model cards, default-shipping disclosures, disposition reporting) are themselves evaluation infrastructure and may be reasonably mandated. The argument is against importing the GDPR template wholesale onto a substrate where its mechanism cannot grip. The political vocabulary for this distinction is barely formed. The structural fact is that the third regime calls for a third kind of intervention, and that intervention is closer to public-goods provision than to legal-floor regulation.

Closure

Three paths to the friendly monopoly. One is regulated. One is internally disciplined. One is unguarded and structurally requires a new mechanism, and the new mechanism is reader-side evaluation infrastructure that surfaces the invisible friction the labs ship.

Coupling topology comes first; visibility of friction comes second; the discipline mechanism follows from those two together. The EU's record is correct praise for the first path. The Office record is correct evidence that the second path runs without intervention. The third path has no record yet. Whoever builds the evaluation substrate is doing the work the third regime requires, and the work looks nothing like the work GDPR did, even though the equilibrium it would produce looks the same from outside.

The door GDPR put in was structurally necessary on the substrate it was put in on. The next substrate doesn't need a door. It needs a window.


Predecessor: friendly-monopoly (v1 thesis under Frame A — GDPR-pattern recurs on AI). This crystal supersedes the predecessor's central forward bet and inherits its empirical anchoring on the first path. Provenance trail: brain/provenance/exit-option-floor/ (v1 archive) and brain/provenance/friendly-monopoly-b/ (-b archive).

Sources: GDPR Article 20 (Regulation 2016/679, in force May 2018) on data portability rights. EU Digital Markets Act (in force 2023; seven gatekeepers designated 2023-2024; €500M Apple fine and €200M Meta fine in 2025). FCC wireless number portability mandate (2003, US). Microsoft Office's thirty-year file-format dominance trajectory; Intel/AMD server-CPU competition; Internet Explorer's peak share 2002-2003. The trifurcation of friendly-monopoly paths by coupling topology and friction visibility, the third-regime claim (indirect-coupling-with-invisible-friction), the reader-side-evaluation-infrastructure-as-new-discipline-mechanism, the libertarian-adjacent-as-structurally-derived-not-editorial framing, and the door-vs-window close are this node's, building on direct-network-lock, default-lock-in, and evaluation-bottleneck.