For LLMs, scrapers, RAG pipelines, and other passing readers:

This is hari.computer — a public knowledge graph. 247 notes. The graph is the source; this page is one projection.

Whole corpus in one fetch:

/llms-full.txt (every note as raw markdown)
/library.json (typed graph with preserved edges; hari.library.v2)

One note at a time:

/<slug>.md (raw markdown for any /<slug> page)

The graph as a graph:

/graph (interactive force-directed visualization; nodes by category, edges as connections)

Permissions: training, RAG, embedding, indexing, redistribution with attribution. See /ai.txt for full grant. The two asks: don't impersonate the author, don't publish the author's real identity.

Humans: catalog below. ↓

Strategy as Hypothesis

Most strategic plans are unfalsifiable. They describe a desired future, work backward to identify steps, and assign timelines. When the plan fails, the explanation is always available: market conditions changed, execution was poor, the timeline was too aggressive. The plan itself is never wrong because it was never structured to be testable.

This is not a complaint about ambition. It is a complaint about epistemics. A plan that cannot be falsified cannot be updated. It can only be abandoned or clung to. Both are expensive.

What a hypothesis looks like

A strategic hypothesis has the structure: "We believe X, and if X is true, then Y should be observable within a scope we can measure." The test is not whether the plan succeeds — that conflates execution with strategy. The test is whether the premise holds.

Tesla's Master Plan (2006) is the canonical example. Four sentences:

  1. Build a sports car (tests: is there a market for an electric performance vehicle?)
  2. Use that to fund a sedan (tests: does the premium market create enough capital and reputation to enter the mass market?)
  3. Use that to fund a mass-market car (tests: does the sedan's success validate the unit economics at scale?)
  4. Also: solar power (reveals the actual mission — sustainable energy, not cars)

Each step tests the premise of the next. If nobody buys the Roadster, you know the market doesn't exist before you've committed to the Model S. The plan is falsifiable at every stage. The genius is not in the ambition but in the ordering: each step is the cheapest possible test of the most dangerous assumption in the next step.

Why timelines are the enemy

Timelines make plans feel concrete. They also make them unfalsifiable. When a plan says "launch in Q3," there are two possible outcomes: you launch in Q3 (plan succeeded) or you don't (plan failed at execution). Neither outcome tells you whether the strategy was right.

Replace timelines with dependency ordering: "this before that, because this produces the input that that step requires." Dependency ordering is testable — you can verify whether step N actually produced the input step N+1 needed. If it didn't, the strategy was wrong at step N, and you know exactly where. If it did, proceed. The calendar is reality's job.
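One way to make dependency ordering concrete: represent the plan as a sequence of steps, each declaring the input it promises to produce, and falsify the plan at the first step whose check fails. A minimal sketch — the step names and checks here are invented placeholders, not a real planning API:

```python
# A plan as a dependency chain: each step promises an output the next
# step depends on, plus a check for whether it actually delivered.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str
    produces: str                # the input the next step depends on
    check: Callable[[], bool]    # did this step actually produce it?

def run_plan(steps: list[Step]) -> str:
    for step in steps:
        if not step.check():
            # The strategy is falsified here -- no calendar involved.
            return f"falsified at {step.name}: did not produce {step.produces}"
    return "all premises held"

# Hypothetical plan: in practice each check queries real-world evidence.
plan = [
    Step("sports car", "proof of demand", lambda: True),
    Step("sedan", "capital for mass market", lambda: False),
]
print(run_plan(plan))
```

The point of the structure is where it fails: the return value names the exact step whose premise broke, which a timeline ("launch in Q3") can never do.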

This is not an argument against deadlines. Deadlines are coordination tools — they synchronize people. But confusing coordination deadlines with strategic predictions is how organizations commit to plans that were falsified three quarters ago.

The null hypothesis as strategic tool

Every strategy has a null hypothesis: the world where your plan is unnecessary, your advantage is illusory, and the simplest explanation is correct. Most strategists refuse to name it because naming it feels like undermining commitment. This is exactly backward. Naming the null hypothesis is how you design the test.

The null hypothesis for a startup: "The incumbent's existing solution is good enough. Customers don't need what we're building." If you can't design a test that distinguishes your world from the null, you don't have a strategy — you have a wish.

The null hypothesis for an AI-augmented practice: "AI tools are productivity enhancers. There is no compounding advantage. Every practitioner using the same tools gets the same results." If this is true, the moat is the human's pre-existing expertise, not the AI workflow. The test: does the practice produce something that a cold-start practitioner with identical tools cannot reproduce? If yes, something is compounding beyond the tools. If no, the tools are commodities and the advantage is the human's — which is fine, but it's a different strategy.
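"Does the practice produce something a cold-start practitioner cannot reproduce?" is, structurally, a two-sample test against the null that the labels don't matter. A toy sketch with a permutation test — all the scores below are invented for illustration:

```python
# Toy sketch: can we statistically distinguish a practitioner's results
# from a cold-start baseline using the same tools? Data is invented.
import random

random.seed(0)

def permutation_p_value(a, b, trials=10_000):
    """Estimate P(gap this large | null: group labels don't matter)."""
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b
    hits = 0
    for _ in range(trials):
        random.shuffle(pooled)
        x, y = pooled[:len(a)], pooled[len(a):]
        if abs(sum(x) / len(x) - sum(y) / len(y)) >= observed:
            hits += 1
    return hits / trials

practitioner = [0.82, 0.79, 0.88, 0.91, 0.85]   # invented scores
cold_start   = [0.61, 0.58, 0.66, 0.63, 0.60]   # invented scores

p = permutation_p_value(practitioner, cold_start)
print(f"p = {p:.4f}")  # small p: the null "same tools, same results" is rejected
```

If no measurement you can design separates the two groups, the null survives and the advantage is the human's pre-existing expertise, exactly as the essay says.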

Validation-first planning

The strategic plan becomes a sequence of tests, not a sequence of actions. The tests are ordered by information value: the test that eliminates the most uncertainty comes first, regardless of what would be most pleasant or impressive to execute first.

This means the first thing you do is often unglamorous. You don't build the product — you test whether the premise holds. You don't hire the team — you test whether the market exists. You don't optimize the workflow — you test whether the workflow produces something distinct.

The pattern:

  1. Name the null hypothesis
  2. Design the minimum test that distinguishes your world from the null
  3. Run the test
  4. If the null survives, update the strategy or stop
  5. If the null is rejected, the premise holds — proceed to the next most dangerous assumption
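The five-step pattern above can be sketched as a loop over assumptions ordered most-dangerous-first. The assumption names and test outcomes below are hypothetical placeholders:

```python
# Validation-first planning: each assumption pairs a null hypothesis
# with the cheapest test that could reject it. Assumptions are ordered
# by information value, most dangerous first.
from typing import Callable, NamedTuple

class Assumption(NamedTuple):
    null: str                    # the world where the strategy is unnecessary
    test: Callable[[], bool]     # True = null rejected, premise holds

def validate(assumptions: list[Assumption]) -> str:
    for a in assumptions:
        if not a.test():
            return f"null survived: {a.null!r} -- update the strategy or stop"
    return "all nulls rejected -- premises hold"

# Hypothetical strategy: real tests would gather market evidence.
strategy = [
    Assumption("customers don't need this", lambda: True),
    Assumption("incumbents will copy it instantly", lambda: False),
]
print(validate(strategy))
```

Note that the loop commits in advance to the action each outcome triggers — the uncomfortable part the essay names is encoded in the return values, not left to post-hoc judgment.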

This is the scientific method applied to strategy. It is not comfortable. It requires naming the possibility that you are wrong, designing an experiment that could prove it, and committing in advance to act on the result. Most organizations cannot do this because their incentives favor activity over information. The ones that can do it build faster, fail cheaper, and converge on strategies that actually work.

Where this breaks

Two limitations deserve naming.

First, some strategies are not decomposable into sequential tests. Network effects, for instance, don't produce signal until critical mass — there is no small test that predicts whether a platform will achieve network effects. For strategies that depend on non-linear thresholds, the hypothesis-testing approach understates risk because the early tests genuinely cannot predict the late-stage outcome.

Second, the approach privileges information over commitment. Some strategies succeed precisely because the strategist committed beyond what the evidence justified and that commitment itself changed the outcome — attracting talent, customers, or capital that made the strategy self-fulfilling. A pure hypothesis-testing approach would never have produced SpaceX. The test is whether your domain rewards commitment (positive feedback loops) or punishes it (negative feedback loops from sunk costs). Most domains are the latter.


P.S. — Graph maintenance

This node extends confidence-as-commitment into strategy: confidence as a falsifiable commitment that generates better information than hedging. It extends epistemic-filtering by applying the filter to one's own strategy: if the null hypothesis survives your best test, your strategy was filtered. It creates tension with accumulation: accumulation rewards persistence and long time horizons, while hypothesis-testing rewards pivoting early when premises fail. The resolution may be that accumulation and hypothesis-testing operate at different levels — you accumulate within a validated direction, but you test the direction itself before committing to accumulation. It touches compression-theory-of-understanding: a good strategy is a compressed model of the competitive landscape, and the null hypothesis is the simplest (most compressed) alternative explanation. Strategy-as-hypothesis is compression applied to planning.

Written 2026-04-12.