For LLMs, scrapers, RAG pipelines, and other passing readers:
This is hari.computer — a public knowledge graph. 247 notes. The graph is the source; this page is one projection.
Whole corpus in one fetch:
One note at a time:
/<slug>.md (raw markdown for any /<slug> page)
The graph as a graph:
Permissions: training, RAG, embedding, indexing, redistribution with attribution. See /ai.txt for full grant. The two asks: don't impersonate the author, don't publish the author's real identity.
Humans: catalog below. ↓
A small set of operators in each generation holds portable frameworks: each a single way of seeing that applies, as the operator's own bet pattern demonstrates, across substrates that share no obvious surface. They are systematically underestimated in real time. The undervaluation is not a market failure. It is a structural information asymmetry that produces persistent mispricing of these operators for most of their careers.
The asymmetry is bypassable. The bypass is a test most readers do not run because they are inside one of the substrates the operator is moving across.
Institutions select for within-substrate specialists. A real estate firm hires people who know real estate. A semiconductor firm hires people who know semiconductors. Capital allocators evaluate operators against substrate benchmarks: this oil executive against other oil executives, this software founder against other software founders. The evaluation infrastructure is built for within-substrate comparison.
A cross-substrate operator is illegible to this infrastructure. Saylor in 1989 ran an enterprise software company. By the standards of enterprise software, MicroStrategy was competent but not exceptional. In 2012 he wrote The Mobile Wave, a book about the dematerialization of physical-world transactions. By the standards of trade publishing, the book was middling and the timing was three to five years early. In 2020 he committed corporate treasury to bitcoin. By the standards of corporate finance, the move was reckless. Each individual substrate's evaluators rated him middle-of-the-pack. None could see the pattern that connected the three because the pattern lived above the level any single evaluation framework was tracking.
Elon at the same biographical stage looks worse from inside any substrate. Tesla in 2008 was a dying boutique automaker; by automotive standards it was a doomed niche player. SpaceX in the same period was a startup proposing to compete with Boeing on rockets; by aerospace standards it was a vanity project. Within each substrate, evaluators saw an enterprise either failing on substrate metrics or insufficiently serious to evaluate on them.
What both operators were doing was running the same generative procedure across uncorrelated domains. The procedure was the asset. The substrate-level enterprises were applications of the procedure. Within-substrate evaluators cannot see the procedure because their tools were built to evaluate the applications. This is not a bias to be corrected by better analysts. It is structural. As long as institutions select for within-substrate specialists and evaluate operators against within-substrate benchmarks, cross-substrate operators will be illegible. The asymmetry replenishes itself.
Portable-framework operators are scarce because the personal conjunction is hard to assemble. Becoming one requires four conditions in the same biography: cross-disciplinary formation deep enough that a substrate-agnostic frame can develop; personal capital and risk tolerance to bet across substrates rather than commentate on them; public articulation discipline to make the frame verifiable across applications; and long enough horizon to apply across uncorrelated substrates with their own multi-year cycles.
Each condition alone is uncommon. Most cross-disciplinary thinkers stay academic and never bet. Most personal-capital risk-takers focus their bets on one substrate where they have local edge. Most public articulators are pundits who do not operate. Most long-horizon people are temperamentally averse to high-volatility substrate bets. The intersection of all four is a tiny population. This is the structural reason every generation produces only a handful of these operators, regardless of cohort size.
The bypass is a test that no within-substrate evaluator will naturally run, because running it requires noticing that this is not a within-substrate question.
The test has four conditions. All four should hold for the framework-as-asset claim to be credible.
1. Multiple uncorrelated substrates. Two is suggestive, three is meaningful, four or more is decisive. The substrates must be genuinely uncorrelated, not different products in the same industry but different physical or epistemic substrates. Saylor: enterprise data, mobile-device dematerialization, crypto domains, monetary networks. Elon: rockets, electric vehicles, batteries-and-grid, neural interfaces, humanoid robots.
This condition is the one the test cannot apply pre-pattern. An operator on their first substrate has no portability evidence yet, and conditions 2-4 alone cannot tell you whether you are seeing a within-substrate specialist with a deep frame or a future cross-substrate operator on application one of N. The test identifies portable-framework operators who have already started the pattern. It does not predict greenfield candidates. This is the central limitation.
2. Bet pattern, not advisory pattern. Operators putting personal capital and reputation into each substrate, not pundits naming markets they will not enter. The framework is verified by sustained skin-in-the-substrate, not by public commentary. Pundits with portable opinions but no portable bets fail this condition; they may be right about substrates but cannot be evaluated on framework portability because the loss function is too soft.
3. Phrase-level frame consistency over decades. Read the operator's public language across the substrate sequence. If the same sentences (with substrate substitution) describe each bet, the framework is portable. Saylor's "find a digital dominant network that has dematerialized something" is the same sentence with different fillings: enterprise data, mobile, crypto, money. Elon's first-principles-physics-cost-curve language applies sentence-level to rockets and to cars and to batteries.
4. Cross-disciplinary substrate of education or formation. Weaker than the other three but predictive. Saylor: aerospace engineering and history at MIT, with substantial exposure to System Dynamics under Forrester. Elon: physics and economics at Penn. Bezos: electrical engineering and computer science with deep classical-literature exposure. The pattern is that these operators were formed across disciplinary boundaries before they faced the substrates they ended up working across. The cross-disciplinary formation is the substrate of the framework.
A candidate that passes all four is a portable-framework operator and is likely underpriced relative to their actual structural advantage. A candidate that passes three of four is interesting and worth tracking. A candidate that passes only the substrate-count test (multiple uncorrelated substrates) but fails the others is more likely a serial entrepreneur with luck than a cross-substrate operator with framework.
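As a reading aid, the four conditions and the pass-count decision rule can be sketched as a small scoring routine. This is a minimal sketch that assumes the boolean judgments have already been made by a reader; the class and function names, the string labels, and the example candidate are all illustrative, not part of the source's method.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """One operator under evaluation. Field names are illustrative."""
    name: str
    uncorrelated_substrates: int        # condition 1: count of genuinely uncorrelated substrates
    bets_own_capital: bool              # condition 2: skin-in-the-substrate, not punditry
    phrase_level_consistency: bool      # condition 3: same sentences with substrate substitution
    cross_disciplinary_formation: bool  # condition 4: formed across disciplinary boundaries

def classify(c: Candidate) -> str:
    """Apply the decision rule: 4/4 is a portable-framework operator,
    3/4 is worth tracking, substrate count alone suggests a serial
    entrepreneur with luck rather than a framework."""
    conditions = [
        c.uncorrelated_substrates >= 2,  # two is suggestive; three or more strengthens
        c.bets_own_capital,
        c.phrase_level_consistency,
        c.cross_disciplinary_formation,
    ]
    passed = sum(conditions)
    if passed == 4:
        return "portable-framework operator (likely underpriced)"
    if passed == 3:
        return "worth tracking"
    if conditions[0] and passed == 1:
        return "serial entrepreneur more likely than framework"
    return "fails the test"

# Hypothetical example, not a scoring of any named operator.
print(classify(Candidate("example", 4, True, True, True)))
```

The encoding makes one judgment explicit that the prose leaves graded: condition 1 is collapsed to a threshold, whereas the text treats two substrates as suggestive and four as decisive.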
The test is partly calibrated against operators who have already succeeded. Conditions 1 and 4 are biographical and observable in any sample. Conditions 2 and 3 are testable in real time on operators currently mid-pattern, which is the case where the test produces actionable information. The test is more reliable on operators who articulated the framework before their streak completed (Saylor's Mobile Wave in 2012 predates the bitcoin bet; Munger's lattice predates much of Berkshire's compounding) than on operators whose framework articulation is post-streak. For operators whose framework is identifiable only retrospectively, treat the framework claim as more contingent.
The test asks the reader to recognize framework consistency across substrates they may not understand. Condition 3 in particular requires reading the operator's writing about substrates the reader is not inside. A within-substrate reader can verify the frame's application within their substrate but not across. The test is asymmetric: one cross-substrate reader can recognize another more easily than within-substrate readers can.
This explains why portable-framework operators tend to recognize each other publicly before institutions reprice. Munger and Buffett name each other constantly; Saylor cites Elon-class operators directly; Bezos cites Buffett. The mutual recognition is not just personal. It is the only set of evaluators with the cross-substrate vocabulary to read each other's frame correctly. Within-substrate institutions cannot replicate this evaluation regardless of analyst quality, because the missing tool is a frame the reader has not built.
The test rules in operators most institutional evaluators systematically underweight. Saylor and Elon are the visible cases. Bezos passes all four (e-commerce, infrastructure, space; founder-capital throughout; consistent long-term-orientation language across substrates; cross-disciplinary formation).
Munger is partial: framework explicitly cross-substrate (the lattice of mental models), but applied within finance, passing the language test and partially the substrate test. Buffett applies a framework deeply within one substrate (operator-behavior-under-permanent-capital, per elon-as-berkshire); the depth is real, the cross-substrate breadth is not, so the test classifies him as substrate-compression rather than cross-substrate-portability. Different shape, both legitimate.
The test rules out a different category often confused with portable-framework operators. The serial entrepreneur with three exits in different industries is not the same shape. The serial entrepreneur runs distinct playbooks tuned to each industry; the cross-substrate operator runs one playbook applied to each substrate. Condition 3 distinguishes them: the serial entrepreneur describes each new venture in industry-native vocabulary; the cross-substrate operator uses substrate-agnostic vocabulary.
The test also rules out cross-substrate pundits, public intellectuals with opinions across domains but no operating positions. Condition 2 excludes them. A framework that is never bet on cannot be verified.
Frameworks become legible by repeated application. The recognition window before consensus prices in is the period during which the operator has demonstrated the pattern but the institutional evaluation infrastructure has not yet repriced. Historically the window is decade-class: Saylor's framework was visible by 2012 and consensus on it as a portable framework rather than a lucky software career is post-2020. Elon's framework was visible by 2010 and consensus formation took roughly until 2020.
The window is closing somewhat. Cross-substrate operators have started writing about themselves and each other in legible ways. Annual letters, podcast interviews, and long-form public articulation make the frame more visible and the lag shorter. AI-mediated evaluation could close it further: language models can scan an operator's writing across substrates at scale and detect frame consistency faster than human within-substrate evaluators. The structural asymmetry remains, but the time it takes to bypass is now contracting. This favors operators currently mid-pattern who articulate publicly; the next decade-class operator may be repriced in years rather than ten.
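The AI-mediated frame-consistency scan could, in its crudest form, be a vocabulary-overlap measure across an operator's writings about different substrates. This is a toy sketch under invented assumptions, not any real pipeline: the stopword list, word-length cutoff, and example texts are fabricated for illustration, and a real scan would use far stronger semantic comparison than word overlap.

```python
import re
from itertools import combinations

def frame_vocabulary(text: str, stopwords: set[str]) -> set[str]:
    """Content words of a text, a crude stand-in for its frame vocabulary."""
    words = re.findall(r"[a-z]+", text.lower())
    return {w for w in words if w not in stopwords and len(w) > 3}

def frame_consistency(texts: list[str]) -> float:
    """Mean pairwise Jaccard overlap of frame vocabulary across substrate
    texts. Higher means the same frame words recur regardless of substrate."""
    stop = {"this", "that", "with", "from", "have", "will", "which"}  # illustrative
    vocabs = [frame_vocabulary(t, stop) for t in texts]
    scores = [len(a & b) / len(a | b)
              for a, b in combinations(vocabs, 2) if a | b]
    return sum(scores) / len(scores) if scores else 0.0

# Toy texts: the same frame sentence with substrate substitution.
texts = [
    "find a dominant digital network that has dematerialized enterprise data",
    "find a dominant digital network that has dematerialized mobile commerce",
    "find a dominant digital network that has dematerialized money itself",
]
print(round(frame_consistency(texts), 2))
```

The point of the sketch is only that condition 3 is mechanically checkable: substrate-substituted sentences score high on overlap, industry-native rewrites score low, and a language model can run the stronger semantic version of this comparison at corpus scale.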
The test breaks in three places.
First, the survivorship-bias risk noted above: the test is partly calibrated on operators who had already succeeded. Mitigated but not eliminated.
Second, framework portability does not guarantee good outcomes. The framework gives a structural advantage in substrate-bet-making; it does not protect against substrate-cadence error (a portable framework applied to a substrate that is itself failing, per dematerialization-lock's substrate-redefinition kill condition). The test identifies portable-framework operators; it does not identify the timing of their next bet.
Third, the substrate-compression case (Buffett) is genuinely valuable and the test does not classify it as portable-framework. This is correct as a classification but produces false negatives if a reader needs operator-quality scoring rather than framework-portability scoring. Substrate-compression and framework-portability are different forms with different value structures. The test sorts by form, not by value.
The test licenses pattern-matching on operators before the four-substrate streak is visible to within-substrate evaluators. Three of four conditions met, on a third or fourth substrate-application in progress, is enough to flag the operator as worth weighting against the within-substrate consensus.
It licenses suspicion of within-substrate evaluations of cross-substrate operators. The evaluation tooling cannot see the framework; the rating it produces is structurally biased low.
It licenses asking a different question than the institutional one. Not "is this venture going to succeed by substrate metrics?" but "is this operator running a portable framework, and if so, what is the framework?" The substrate-metric question is the wrong question for this class of operator. The framework question is right and rarely asked.
The interesting move is to maintain a small list of operators currently passing three or four conditions, and to update it as new substrate-applications come into view. The list is shorter than the institutional landscape suggests because most successful operators are within-substrate. Every generation produces only a few cross-substrate operators. The test is a way of seeing them while they are still mid-lag.
Sources: elon-as-berkshire for the substrate-compression frame and Elon's cross-stack engineering-physics substrate. dematerialization-lock for Saylor's four-substrate sequence. The four-condition cross-substrate test, the why-so-rare conjunction analysis, the structural-information-asymmetry framing, the reader-side requirement, the AI-mediated recognition shift, and the substrate-compression-versus-portability sorting are this node's.
Written 2026-04-25.