Google DeepMind’s partnership with Boston Dynamics represents a pivotal moment in embodied AI development, combining advanced AI models with capable robotic hardware. This essay explores both the genuine potential benefits—elder care, dangerous work, accessibility—and the serious risks of concentrated ownership over physical AI systems. The critical question isn’t whether this technology will exist, but whether its benefits will be distributed broadly or captured by the few companies building it.
This essay examines the Future of Humanity Institute’s argument that advanced AI poses extinction risk, while proposing that the danger vector runs through flawed human nature rather than AI’s inherent properties. It argues that historical patterns of technology capture by power structures suggest open source AI may be safer than closed systems, despite conventional safety wisdom, because distributed danger is more correctable than concentrated danger controlled by institutions with poor track records.
This essay examines Eliezer Yudkowsky’s advice on splitting donations between AI safety organizations and argues that while optimization-focused arguments for concentration may be technically correct, they assume false precision. The case for splitting donations rests on epistemic humility, organizational capture dynamics, the role of luck in wealth accumulation, and the value of decentralization as risk management in environments of deep uncertainty.
This essay challenges the technical/non-technical binary as a social construction rather than cognitive reality, arguing that while dissolving interfaces make technical skills accessible to everyone, a new kind of literacy is required: not literacy about operating machines, but about maintaining awareness and autonomy while thinking alongside AI systems that may serve interests other than our own.
Ilya Sutskever’s admission that scaling AI will continue to improve capabilities but leave something important missing points to a fundamental gap between intelligence and wisdom. This essay explores why more capability without better judgment may simply accelerate humanity’s existing failures, and why the real bottleneck in AI development isn’t technical but human.
This essay examines why banking, healthcare, telecommunications, and defense contracting all exhibit similar anti-competitive patterns, arguing that the problem is structural rather than specific to any industry. It explores whether decentralized alternatives can break the cycle of capture and extraction, concluding that while outcomes remain uncertain, the attempt itself constrains incumbents and creates space for genuine alternatives.
This essay explores Catherine Olsson’s observation that language models seem to have an intuitive sense of “what they’re supposed to say,” drawing parallels to how human children learn social performance through modeling adult expectations. It argues that both human and machine cognition may be fundamentally constituted by layers of contextual performance rather than expressing some authentic core, and examines what this means as human and AI systems increasingly co-evolve.
This essay examines what chess’s survival and flourishing despite superhuman engines reveals about humanity’s potential relationship with AI. It explores why human chess retained meaning, what choices enabled coexistence, and the harder questions that emerge when we extend this analogy beyond games to work, governance, and society.
An unflinching examination of the gap between humanity’s stated values and revealed preferences. The essay argues that humans are fundamentally driven by animal impulses that override our higher reasoning, and that every system we build eventually gets captured by those same impulses. The narrow path forward may lie not in changing human nature, but in building AI and decentralized systems that can encode our stated values more consistently than we ever could ourselves.
The human brain evolved to pursue, not to possess: happiness is neurologically wired to be temporary. The wellness industry profits from this by manufacturing dissatisfaction and selling inadequate solutions, while the genuine correlates of wellbeing (relationships, contribution, autonomy) cannot be productized. The honest path is not finding lasting happiness but accepting its impossibility and pursuing meaning instead.