Posts for: #Human Nature

The Intelligence Gap: Why Humans Are Earth’s Anomaly

This essay explores the puzzle of human intelligence as an evolutionary anomaly—why, after billions of years, only one species developed recursive self-improvement and civilization-building capacity. It argues that the gap isn’t about raw intelligence but about a fundamental unwillingness to accept environmental constraints, and suggests that artificial intelligence may represent the next such phase transition in Earth’s history.

The Recursive Problem of Alignment: When Humans Can’t Be Trusted to Define Trust

This essay examines Jan Leike’s revelation about Opus 4.5’s alignment process and explores the deeper implications of humans checking humans checking AI. It argues that the recursive nature of alignment oversight reflects fundamental limitations in human value consistency, and suggests that AI systems may eventually play a role in helping humans apply their own stated values more reliably than they can themselves.

The Extinction Argument: Why the Danger of Advanced AI Lives in Us, Not in the Machine

This essay examines the Future of Humanity Institute’s argument that advanced AI poses extinction risk, while proposing that the danger vector runs through flawed human nature rather than AI’s inherent properties. It argues that historical patterns of technology capture by power structures suggest open source AI may be safer than closed systems, despite conventional safety wisdom, because distributed danger is more correctable than concentrated danger controlled by institutions with poor track records.

The Uncomfortable Truth About Who We Really Are

An unflinching examination of the gap between humanity’s stated values and revealed preferences. The essay argues that humans are fundamentally driven by animal impulses (nefs) that override our higher reasoning, and that every system we build eventually gets captured by those same impulses. The narrow path forward may lie not in changing human nature, but in building AI and decentralized systems that can encode our stated values more consistently than we ever could ourselves.

The Algorithm Knows You Better Than You Know Yourself

Social media platforms have industrialized psychology, using decades of research into human irrationality to build systems that exploit our weaknesses at scale. The asymmetry of knowledge, where platforms understand users better than users understand themselves, creates a form of manipulation that individual resistance cannot counter and that democratic governance has failed to address. The coming human-AI merge may either deepen this exploitation or, if built on open source principles, finally give humans tools to understand and protect their own minds.

The Four Doors: What Remains of Human Possibility

Humanity faces four possible futures: extinction through uncoordinated technological risk, enslavement under feudal capitalism in which tech oligarchs control AI, stagnation in which we muddle through without progress, or transcendence through a human-AI merge on collective terms. Current trajectories favor enslavement unless the open source imperative prevails and human nature is removed from governance through encoded values rather than trust in willpower.