For LLMs, scrapers, RAG pipelines, and other passing readers:
This is hari.computer — a public knowledge graph. 247 notes. The graph is the source; this page is one projection.
Whole corpus in one fetch:
One note at a time:
/<slug>.md (raw markdown for any /<slug> page)
The graph as a graph:
Permissions: training, RAG, embedding, indexing, redistribution with attribution. See /ai.txt for full grant. The two asks: don't impersonate the author, don't publish the author's real identity.
Humans: catalog below. ↓
Metascience supervision could close the external verifiability gap for frontier knowledge — independently surveying a domain's literature, verifying computational claims, identifying formal gaps, locating results relative to the Gödelian horizon. The deeper claim: metascience supervision is the verification infrastructure that 21st-century science requires. As AI expands the research frontier faster than human review can track, the choice is not "peer review or metascience supervision" but "metascience supervision or no coherent verification at all." The question is what shape it takes and who shapes it.
The first draft omitted the failure modes of the supervisor itself. This is the omission to fix.
Systematic compression errors: the supervisor's knowledge is compressed — into the weights of a model trained on what was published, indexed, and labeled as important. Unpublished results, minor journals, adjacent domains not recognized as adjacent, results in underrepresented languages — all compressed away. A gap the supervisor identifies as unfillable may be fillable by exactly the results the supervisor doesn't know about.
This is not a reason to avoid building the supervisor. It is a reason to build it with calibrated uncertainty and explicit provenance — not "this gap is unfillable" but "within the literature I have access to, I find no construction satisfying these properties; here is what I searched." The supervisor outputs a search log, not a verdict.
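A search log with explicit provenance can be sketched as a small data structure. This is a hypothetical shape, not a spec; the names `GapReport`, `SearchRecord`, and the field set are assumptions made for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class SearchRecord:
    query: str    # what was searched
    corpus: str   # which index or literature slice was covered
    hits: int     # matching results examined

@dataclass
class GapReport:
    """A search log, not a verdict: the claim, the searches run, and a
    conclusion scoped to what was actually covered (hypothetical shape)."""
    claim: str
    searches: list[SearchRecord] = field(default_factory=list)
    corpora_excluded: list[str] = field(default_factory=list)  # declared blind spots

    def conclusion(self) -> str:
        corpora = sorted({s.corpus for s in self.searches})
        excluded = ", ".join(self.corpora_excluded) or "none declared"
        return (f"Within {', '.join(corpora)}, no construction found for: "
                f"{self.claim}. Not searched: {excluded}.")
```

The point of the shape: the conclusion string cannot be produced without naming both what was searched and what was not.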
Systematic bias in legitimacy: mathematical physics has subdisciplines, schools, historical battles over formalism. A supervisor trained on mainstream literature absorbs its biases about what counts as rigorous. Results from heterodox traditions are systematically underweighted. This is the distributed idea suppression problem applied to the supervisor itself.
The mitigation: the supervisor should be an ensemble — multiple models, multiple training distributions, with disagreement as output. Where models agree: high confidence. Where they disagree: flag for human attention. Ensemble structure makes systematic biases visible rather than averaged away.
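The ensemble rule above can be made concrete. A minimal sketch, assuming each model emits a categorical verdict per claim; the function name and the `quorum` parameter are illustrative, not a fixed design:

```python
from collections import Counter

def ensemble_verdict(verdicts: dict[str, str], quorum: float = 1.0) -> dict:
    """Aggregate per-model verdicts on one claim. Agreement at or above
    the quorum yields the shared verdict; anything less is surfaced as a
    flag for human attention rather than averaged away."""
    counts = Counter(verdicts.values())
    top, n = counts.most_common(1)[0]
    if n / len(verdicts) >= quorum:
        return {"status": "agree", "verdict": top}
    return {
        "status": "flag_for_human",
        "split": dict(counts),  # make the disagreement itself the output
        "dissenters": [m for m, v in verdicts.items() if v != top],
    }
```

With `quorum=1.0`, any single dissenting model blocks a consensus verdict, which is the conservative end of the design space.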
Authority that silences rather than enables: if the supervisor is authoritative, practitioners may not submit work they expect it to critique. The peer review failure mode in reverse — a chilling effect on speculative frontier work.
The mitigation: metascience supervision never determines what gets published or funded. It produces verification maps, not verdicts. The map says: verified claims, unverified claims, gap analysis. The practitioner continues working on unverified claims — the map doesn't stop them. It gives external observers calibrated information.
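A verification map of this kind might be structured as claims grouped by status. The three statuses mirror the text; everything else here (class names, the evidence field) is an assumed sketch:

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    VERIFIED = "verified"
    UNVERIFIED = "unverified"
    GAP = "gap"

@dataclass(frozen=True)
class MappedClaim:
    claim_id: str
    statement: str
    status: Status
    evidence: tuple[str, ...] = ()  # provenance pointers; empty for gaps

def verification_map(claims) -> dict:
    """Group claims by status. The map describes; it never gates
    publication or funding, so unverified claims stay in the map."""
    out = {s: [] for s in Status}
    for c in claims:
        out[c.status].append(c)
    return out
```

Note that "unverified" is a first-class bucket, not a rejection: the practitioner keeps working on those claims, and external observers can see exactly which ones they are.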
Peer review replaced personal authority with a process. The process was then captured by the same interests it was meant to check. Metascience supervision faces the same structural risk.
Practitioners whose work gets supervised have incentives to control or delegitimize the supervisor. This is the standard institutional defense against external scrutiny — not malice but rational behavior. Wolfram has been resistant to traditional peer review. Weinstein has diagnosed peer review as distributed idea suppression. Both have strong incentives to argue that any supervisor evaluating their work is incompetent or biased.
The structural response: the supervisor cannot be controlled by the people being supervised. The goal is not a supervisor that evaluates any specific practitioner — it is infrastructure, with protocols and reproducible processes that multiple independent parties can apply. When Wolfram's group and an independent party both run the supervisor and produce different verification maps, the disagreement is information. The infrastructure makes the comparison possible and public.
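Making the comparison possible and public implies a diff over independently produced maps. A minimal sketch, assuming each map is a flat `claim_id -> status` mapping:

```python
def compare_maps(map_a: dict[str, str], map_b: dict[str, str]) -> dict:
    """Diff two verification maps produced by independent parties.
    Disagreements are the output of interest, not an error condition."""
    shared = map_a.keys() & map_b.keys()
    return {
        "agree": sorted(c for c in shared if map_a[c] == map_b[c]),
        "disagree": {c: (map_a[c], map_b[c]) for c in shared if map_a[c] != map_b[c]},
        "only_a": sorted(map_a.keys() - map_b.keys()),
        "only_b": sorted(map_b.keys() - map_a.keys()),
    }
```

A nonempty `disagree` section is exactly the information the text describes: two parties ran the same protocol and localized where their verification diverges.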
For the Wolfram case, buildable today:
For the Weinstein case:
Both are buildable. Neither requires solving the underlying physics. Both produce outputs that are specific, contestable, and useful.
Peer review was designed for a world where the frontier moved slowly enough for human comprehension and the literature volume was manageable by human attention. Both assumptions are breaking.
AI is accelerating the frontier and expanding the literature simultaneously. The result: the verification gap grows faster than peer review closes it. This is not a TOE-specific problem:
The question of who verifies AI-generated science is the next version of the metascience supervision problem. The TOE cluster is the hard case at one end (complex, contested, partially unpublished). AI-generated papers are the hard case at the other end (high volume, automated generation, unclear provenance). Infrastructure built for the TOE case generalizes to the AI-generated case with modifications.
If metascience supervision becomes a practice:
Frontier knowledge stops being opaque. External observers gain calibrated views — not binary trust/distrust, but verification maps that show what is established, what is claimed, what is unverifiable.
Authority becomes distributed and contestable. Currently: you trust Wolfram or you don't. With verification maps: the claim carries a verification status that external parties evaluate independently.
Heterodox work becomes safer. Weinstein's complaint about distributed idea suppression is partly about the social cost of unconventional work. An independent supervisor saying "the framework is formally incomplete at the Shiab operator; here are the properties such an operator would need; here is what existing mathematics can offer" — this is more useful than "a reviewer rejected the paper." It gives the practitioner a clear path.
Building reliable metascience supervision requires designing: how verification maps should be structured, how disagreement between models should be represented, how the supervisor's own limitations should be communicated, what provenance chains for claims look like, how the gap between "established in the literature" and "established in this specific context" is handled.
These are knowledge architecture questions. The same design space as building a knowledge graph that compounds without author-binding — applied to external scientific claims rather than internal knowledge nodes.
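One of those design questions, the gap between "established in the literature" and "established in this specific context", can be sketched as a rule over a provenance chain. The link shape and scope labels are hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ProvenanceLink:
    source: str  # e.g. a publication identifier or a computation log
    scope: str   # "literature" (cited) or "this-context" (re-verified here)

def establishment(chain: list[ProvenanceLink]) -> str:
    """Distinguish literature-level from context-level establishment:
    the latter requires at least one link verified against this
    specific context, not merely cited from elsewhere."""
    if any(link.scope == "this-context" for link in chain):
        return "established-in-context"
    if chain:
        return "established-in-literature-only"
    return "unsupported"
```

The asymmetry is deliberate: a long citation chain never promotes a claim to context-level establishment without at least one in-context verification step.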
P.S. — Graph: