The AI economy is not creating a new economic order but accelerating the oldest one: feudalism, with computational monopoly replacing land ownership. The concentration of AI capability in a handful of corporations, combined with the historical pattern of technological revolutions being captured by existing power structures, suggests a feudal outcome unless open source AI provides a structural counterforce. The question is not whether AI creates or destroys jobs, but who owns the intelligence infrastructure that will mediate all economic activity.
xAI’s Grok Voice Agent API enters a competitive market for voice AI infrastructure, raising practical questions about privacy, reliability, and dependency alongside broader questions about who controls the technology mediating human interaction. The essay examines both the genuine utility of voice agents and the structural concerns about power concentration in AI infrastructure development.
This essay explores the technical and political dimensions of fast AI inference for agentic systems. While acknowledging the genuine importance of speed benchmarks like Clarifai’s 544 tokens/second achievement, it examines the deeper questions of infrastructure control, the countervailing force of open-weights models, and the implications of reasoning engines becoming extensions of human cognition.
Meta’s acquisition of Manus AI exemplifies the accelerating consolidation of AI talent into Big Tech. The essay explores why this happens (compute acts as the gravitational center), what it means for independent AI development, and whether anything might reverse the trend of capability concentration in walled gardens.
This essay examines Jan Leike’s revelation about Opus 4.5’s alignment process and explores the deeper implications of humans checking humans checking AI. It argues that the recursive nature of alignment oversight reflects fundamental limitations in human value consistency, and suggests that AI systems may eventually play a role in helping humans apply their own stated values more reliably than they can themselves.
Google DeepMind’s partnership with Boston Dynamics represents a pivotal moment in embodied AI development, combining advanced AI models with capable robotic hardware. This essay explores both the genuine potential benefits (elder care, dangerous work, accessibility) and the serious risks of concentrated ownership over physical AI systems. The critical question isn’t whether this technology will exist, but whether its benefits will be distributed broadly or captured by the few companies building it.
This essay examines the Future of Humanity Institute’s argument that advanced AI poses extinction risk, while proposing that the danger vector runs through flawed human nature rather than AI’s inherent properties. It argues that historical patterns of technology capture by power structures suggest open source AI may be safer than closed systems, despite conventional safety wisdom, because distributed danger is more correctable than concentrated danger controlled by institutions with poor track records.
This essay examines Eliezer Yudkowsky’s advice on splitting donations between AI safety organizations and argues that while optimization-focused arguments for concentration may be technically correct, they assume false precision. The case for splitting donations rests on epistemic humility, organizational capture dynamics, the role of luck in wealth accumulation, and the value of decentralization as risk management in environments of deep uncertainty.
This essay examines why banking, healthcare, telecommunications, and defense contracting all exhibit similar anti-competitive patterns, arguing that the problem is structural rather than specific to any industry. It explores whether decentralized alternatives can break the cycle of capture and extraction, concluding that while outcomes remain uncertain, the attempt itself constrains incumbents and creates space for genuine alternatives.
AI development is concentrated in approximately six organizations, creating a new feudal structure where access to intelligence replaces access to land as the basis for extraction. The only path away from this new feudalism runs through genuinely open source AI, but the historical pattern suggests that technological revolutions deliver new forms of control rather than liberation. The outcome depends on whether distributed alternatives can be built before the window of opportunity closes.