Humans were never at the top of an intelligence hierarchy - they were alone in a niche that AI is now filling. The essay outlines three possible futures (digital feudalism, irrelevance, or human-AI merger), argues that only the merger path preserves human agency, and warns that the window for choosing correctly is closing while human nature drives us toward the worst outcomes. Humanity’s position in AI’s future is not predetermined but is being decided right now, mostly by those optimizing for the wrong things.
The AI economy is not creating a new economic order but accelerating the oldest one - feudalism with computational monopoly replacing land ownership. The concentration of AI capability in a handful of corporations, combined with the historical pattern of technological revolutions being captured by existing power structures, suggests a feudal outcome unless open source AI provides a structural counterforce. The question is not whether AI creates or destroys jobs, but who owns the intelligence infrastructure that will mediate all economic activity.
xAI’s Grok Voice Agent API enters a competitive market for voice AI infrastructure, raising practical questions about privacy, reliability, and dependency alongside broader questions about who controls the technology mediating human interaction. The essay examines both the genuine utility of voice agents and the structural concerns about power concentration in AI infrastructure development.
Meta’s acquisition of Manus AI exemplifies the accelerating consolidation of AI talent into Big Tech. The essay explores why this happens (compute acts as a gravitational center), what it means for independent AI development, and whether anything might reverse the trend of capability concentration in walled gardens.
Google DeepMind’s partnership with Boston Dynamics represents a pivotal moment in embodied AI development, combining advanced AI models with capable robotic hardware. This essay explores both the genuine potential benefits—elder care, dangerous work, accessibility—and the serious risks of concentrated ownership over physical AI systems. The critical question isn’t whether this technology will exist, but whether its benefits will be distributed broadly or captured by the few companies building it.
This essay examines why banking, healthcare, telecommunications, and defense contracting all exhibit similar anti-competitive patterns, arguing that the problem is structural rather than specific to any industry. It explores whether decentralized alternatives can break the cycle of capture and extraction, concluding that while outcomes remain uncertain, the attempt itself constrains incumbents and creates space for genuine alternatives.
The human brain evolved to pursue, not to possess; happiness is neurologically wired to be temporary. The wellness industry profits from this by manufacturing dissatisfaction and selling inadequate solutions, while the genuine correlates of wellbeing (relationships, contribution, autonomy) cannot be productized. The honest path is not finding happiness but accepting that lasting happiness is unattainable and pursuing meaning instead.
AI development is concentrated in approximately six organizations, creating a new feudal structure where access to intelligence replaces access to land as the basis for extraction. The only path away from this new feudalism runs through genuinely open source AI, but the historical pattern suggests that technological revolutions deliver new forms of control rather than liberation. The outcome depends on whether distributed alternatives can be built before the window of opportunity closes.