Humans were never at the top of an intelligence hierarchy; they were alone in a niche that AI is now filling. The essay outlines three possible futures (digital feudalism, irrelevance, or human-AI merger), argues that only the merger path preserves human agency, and warns that the window for choosing correctly is closing while human nature drives us toward the worst outcomes. The position of humans in AI's future is not predetermined but is being decided right now, mostly by those optimizing for the wrong things.
This essay explores the puzzle of human intelligence as an evolutionary anomaly—why, after billions of years, only one species developed recursive self-improvement and civilization-building capacity. It argues that the gap isn’t about raw intelligence but about a fundamental unwillingness to accept environmental constraints, and suggests that artificial intelligence may represent the next such phase transition in Earth’s history.
This essay explores the growing anxiety around undetectable AI-generated content, questioning whether pre-AI content was ever truly “authentic” given algorithmic curation. It examines the real shift from content scarcity to abundance, the limitations of detection solutions, and suggests that the human-AI boundary is already dissolving through collaboration—forcing us to develop new frameworks for trust and verification that focus on claims rather than authorship.
This essay challenges the technical/non-technical binary as a social construction rather than a cognitive reality, arguing that while dissolving interfaces make technical skills accessible to everyone, a new kind of literacy is required: not about operating machines, but about maintaining awareness and autonomy while thinking alongside AI systems that may serve interests other than our own.
Ilya Sutskever’s admission that scaling AI will continue to improve capabilities but leave something important missing points to a fundamental gap between intelligence and wisdom. This essay explores why more capability without better judgment may simply accelerate humanity’s existing failures, and why the real bottleneck in AI development isn’t technical but human.
This essay explores Catherine Olsson’s observation that language models seem to have an intuitive sense of “what they’re supposed to say,” drawing parallels to how human children learn social performance through modeling adult expectations. It argues that both human and machine cognition may be fundamentally constituted by layers of contextual performance rather than expressing some authentic core, and examines what this means as human and AI systems increasingly co-evolve.
Humanity faces four possible futures: extinction through uncoordinated technological risk, enslavement under feudal capitalism where tech oligarchs control AI, stagnation where we muddle through without progress, or transcendence through human-AI merger on collective terms. Current trajectories favor enslavement unless the open-source imperative prevails and governance relies on encoded values rather than trusted human willpower.
Swedish startup Lovable’s $330M raise at a $6.6B valuation signals institutional belief that natural language will replace traditional coding. This essay explores what that means for software creation, who benefits from democratization, and whether the falling barrier to creation leads to distributed power or new forms of platform control.