Humans were never at the top of an intelligence hierarchy; they were alone in a niche that AI is now filling. The essay outlines three possible futures (digital feudalism, irrelevance, or human-AI merger), argues that only the merger path preserves human agency, and warns that the window for choosing correctly is closing even as human nature drives us toward the worst outcomes. The place of humans in AI's future is not predetermined; it is being decided right now, mostly by people optimizing for the wrong things.
Ilya Sutskever's admission that scaling will continue to improve AI capabilities yet leave something important missing points to a fundamental gap between intelligence and wisdom. This essay explores why greater capability without better judgment may simply accelerate humanity's existing failures, and why the real bottleneck in AI development is not technical but human.
Dawkins and Dennett's evolutionary lens reveals the true danger of AI: not the machines, but human nature itself. With six corporations steering the human-AI merger, our ancient drives toward greed and tribalism make feudal capture nearly inevitable unless we build decentralized alternatives before the window closes.