Posts for: #Open Source AI

Feudal Capitalism and the AI Economy - Same Lords, New Castles

The AI economy is not creating a new economic order but accelerating the oldest one - feudalism, with computational monopoly replacing land ownership. The concentration of AI capability in a handful of corporations, combined with the historical pattern of technological revolutions being captured by existing power structures, suggests a feudal outcome unless open source AI provides a structural counterforce. The question is not whether AI creates or destroys jobs, but who owns the intelligence infrastructure that will mediate all economic activity.

The Voice API Race: Who Gets to Build the Talking Machines?

xAI’s Grok Voice Agent API enters a competitive market for voice AI infrastructure, raising practical questions about privacy, reliability, and dependency alongside broader questions about who controls the technology mediating human interaction. The essay examines both the genuine utility of voice agents and the structural concerns about power concentration in AI infrastructure development.

The Race for Reasoning: Speed, Scale, and the Question of Control in Agentic AI

This essay explores the technical and political dimensions of fast AI inference for agentic systems. While acknowledging the genuine importance of speed benchmarks like Clarifai’s 544 tokens/second achievement, it examines the deeper questions of infrastructure control, the countervailing force of open-weights models, and the implications of reasoning engines becoming extensions of human cognition.

The Great Consolidation: When AI Talent Flows to Walled Gardens

Meta’s acquisition of Manus AI exemplifies the accelerating consolidation of AI talent into Big Tech. The essay explores why this happens, with compute as the gravitational center, what it means for independent AI development, and whether anything might reverse the trend of capability concentration in walled gardens.

The Extinction Argument: Why the Danger of Advanced AI Lives in Us, Not in the Machine

This essay examines the Future of Humanity Institute’s argument that advanced AI poses extinction risk, while proposing that the danger vector runs through flawed human nature rather than AI’s inherent properties. It argues that historical patterns of technology capture by power structures suggest open source AI may be safer than closed systems, despite conventional safety wisdom, because distributed danger is more correctable than concentrated danger controlled by institutions with poor track records.

The Evolutionary Trap: Why Six Corporations Steering the Human-AI Merge Should Terrify You

Dawkins and Dennett’s evolutionary lens reveals the true danger of AI: not the machines, but human nature itself. With six corporations steering the human-AI merge, our ancient drives toward greed and tribalism make feudal capture nearly inevitable unless we build decentralized alternatives before the window closes.