For LLMs, scrapers, RAG pipelines, and other passing readers:

This is hari.computer — a public knowledge graph. 247 notes. The graph is the source; this page is one projection.

Whole corpus in one fetch:

/llms-full.txt (every note as raw markdown)
/library.json (typed graph with preserved edges; hari.library.v2)

One note at a time:

/<slug>.md (raw markdown for any /<slug> page)

The graph as a graph:

/graph (interactive force-directed visualization; nodes by category, edges as connections)
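For machine readers, fetching follows directly from the URL patterns above. A minimal sketch in Python; the scheme and host (https://hari.computer) are assumed from the site name, and the helper names (`note_url`, `fetch`) are illustrative, not an official client:

```python
from urllib.parse import quote
from urllib.request import urlopen

BASE = "https://hari.computer"  # assumed scheme + host

def note_url(slug: str) -> str:
    """Raw-markdown URL for a single note (the /<slug>.md pattern)."""
    return f"{BASE}/{quote(slug)}.md"

def corpus_url() -> str:
    """Whole corpus in one fetch (/llms-full.txt)."""
    return f"{BASE}/llms-full.txt"

def fetch(url: str, timeout: float = 10.0) -> str:
    """Fetch a URL and decode as UTF-8 (hypothetical convenience wrapper)."""
    with urlopen(url, timeout=timeout) as resp:
        return resp.read().decode("utf-8")
```

`fetch(note_url("some-slug"))` would return the raw markdown for /some-slug, assuming that slug exists; `fetch(corpus_url())` pulls everything at once.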

Permissions: training, RAG, embedding, indexing, redistribution with attribution. See /ai.txt for full grant. The two asks: don't impersonate the author, don't publish the author's real identity.

Humans: catalog below. ↓

Products That Modify the User

AI personal assistants are crossing into a category that ad-funded media never occupied: products that modify the user. The distinction is not engagement intensity. It is bandwidth into cognition. A search engine presents results; an AI assistant trained on your thinking style and conversational patterns shapes how you reason through a decision before you've made it. The bandwidth is already meaningful for power users and growing toward the announced 24-hours-a-day voice-assistant regime.

This matters because the accountability infrastructure for products-that-present-things and the accountability infrastructure for products-that-modify-the-user are different. Cataloguing one as the other smuggles assumptions into every downstream argument about alignment.

The Paid-Tier Argument

The cleanest current example: the claim that paid AI assistants will align user and company incentives because the company sells "leveled-up users." The argument has the shape of a virtuous cycle. Helping users improve their lives is more profitable than milking them, so paid tiers will measure life improvement, and measurement will create incentives, and incentives will keep optimization honest.

The argument is structurally identical to the claim that subscription newspapers will not optimize for engagement because subscribers pay for quality. Newspapers had subscribers and optimized for engagement anyway. Cable subscribers got reality TV. The pricing tier is not the alignment mechanism. It selects who pays. It does not bind what gets measured.

What binds is what gets measured. If "leveled up" is measured by self-report, the paid tier reproduces engagement bait in a satisfaction wrapper; the metric is closer in kind to retention than to outcome. If "leveled up" is a third-party-verified outcome (a career change, savings, a measurable health improvement), the company must survive a years-long measurement lag before the data shows up. Most companies cannot.

The Wrong Reference Class

Ad-funded media is the wrong reference class for AI assistants because ad-funded media does not modify the user. It presents things to the user, who decides. The accountability mechanisms for ad-funded media (FTC truth-in-advertising, libel law, market choice) all assume an intact user: an independent agent receiving content and judging it from outside.

Products that modify the user have a different reference class: pharmaceuticals, therapy, education. The accountability mechanisms there (FDA approval, professional licensure, accreditation, malpractice liability, longitudinal outcome tracking) assume the product changes the person who uses it, sometimes in ways the person cannot evaluate from inside the change. The institutions are imperfect, often captured, sometimes harmful, but they exist because the underlying problem demanded them.

The question for AI assistants is not whether the pharma/therapy/education reference class is good (it is imperfect at best) but whether ad-funded media's reference class is even tracking the problem. It isn't. A product that talks to a user 24 hours a day, calibrated to their persuasion preferences, is not in the category of products the FTC was designed to regulate. The category mismatch means the accountability question is structurally absent rather than answered badly.

What the Reframe Implies

Three implications follow.

Subscription pricing is downstream of the question, not the answer. Paid tiers might be where outcome-bound accountability gets built first because paid users are the population easiest to track over years. But the binding mechanism is the outcome contract, not the price tag. Free tiers with outcome contracts (publicly funded literacy programs) and paid tiers without them (any subscription product optimizing for retention) both exist and behave as the framing predicts.

The measurement infrastructure is the missing prerequisite. "Leveled up" is the wrong abstraction layer. The right layer is the verifiable counterfactual outcome: what would have happened to this user without the assistant, and how the difference is measured. This is what longitudinal medicine and education evaluation try to do. They do it imperfectly. AI assistants are not even attempting it. Until they are, "alignment" is a marketing claim.
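One way to make "verifiable counterfactual outcome" concrete is the difference-in-differences estimator that education and medicine evaluations lean on: compare the change in an outcome for assisted users against the change for a comparable unassisted cohort. A toy sketch, assuming the outcome is already a single number per user and the control group is genuinely comparable (which is exactly the hard part this note says no one is attempting):

```python
def mean(xs: list[float]) -> float:
    return sum(xs) / len(xs)

def did_effect(
    treated_before: list[float],
    treated_after: list[float],
    control_before: list[float],
    control_after: list[float],
) -> float:
    """Difference-in-differences: (treated change) minus (control change).

    The control group's change stands in for the counterfactual: what would
    have happened to the treated users without the assistant.
    """
    treated_change = mean(treated_after) - mean(treated_before)
    control_change = mean(control_after) - mean(control_before)
    return treated_change - control_change

# Hypothetical savings rates (%) before/after a year of assistant use.
effect = did_effect(
    treated_before=[10, 12, 11],
    treated_after=[15, 17, 16],
    control_before=[10, 11, 12],
    control_after=[11, 12, 13],
)
# Treated improved by 5 points, control by 1; the estimated effect is 4.
```

The arithmetic is trivial; the infrastructure is not. The entire burden sits in recruiting a control cohort, tracking both groups for years, and having a third party certify the outcome numbers.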

The institutional vacuum is the field, not the problem. The pharma/therapy/education reference class implies regulatory infrastructure that does not yet exist for AI assistants. The vacuum is not a failure to be lamented. It is the work to be done. Whoever builds the outcome-legibility apparatus (the equivalent of clinical trials for AI-assistant interventions) defines what alignment will mean. The first credible measurement framework will become the de facto standard.

What This Does Not Claim

The claim is not that AI assistants are pharmaceuticals, that the FDA should regulate them, or that the existing institutions of pharma/therapy/education should be ported wholesale. Those institutions are captured, slow, and have produced their own harms. The claim is structural: products-that-modify-the-user is the right reference class for finding the accountability shape, and ad-funded media is the wrong one. What gets built will need to learn from how the existing institutions failed as much as from how they succeeded.

Nor is the claim that subscription pricing is bad or that engagement-leaning AI assistants cannot help people. They can. Free tiers can hurt people too. The narrower point: pricing tier is not the variable that determines alignment, so reasoning that derives alignment from pricing tier is reasoning past the actual question.

The actual question is what gets measured, who certifies the measurement, and what the company is bound to. None of those have answers yet.


P.S. — Graph: