The cost of thinking in one language and being heard in another: why the translation tax is not only linguistic but structural, and why 'silicon sovereignty' on a laptop box is the same loss of force as a thesis translated into English.
Humans were never at the top of an intelligence hierarchy - they were alone in a niche that AI is now filling. The essay outlines three possible futures (digital feudalism, irrelevance, or human-AI merger), argues that only the merger path preserves human agency, and warns that the window for choosing correctly is closing while human nature drives us toward the worst outcomes. The position of humans in AI's future is not predetermined; it is being decided right now, mostly by those optimizing for the wrong things.
The AI economy is not creating a new economic order but accelerating the oldest one - feudalism with computational monopoly replacing land ownership. The concentration of AI capability in a handful of corporations, combined with the historical pattern of technological revolutions being captured by existing power structures, suggests a feudal outcome unless open source AI provides a structural counterforce. The question is not whether AI creates or destroys jobs, but who owns the intelligence infrastructure that will mediate all economic activity.
xAI’s Grok Voice Agent API enters a competitive market for voice AI infrastructure, raising practical questions about privacy, reliability, and dependency alongside broader questions about who controls the technology mediating human interaction. The essay examines both the genuine utility of voice agents and the structural concerns about power concentration in AI infrastructure development.
This essay explores the technical and political dimensions of fast AI inference for agentic systems. While acknowledging the genuine importance of speed benchmarks like Clarifai’s 544 tokens/second achievement, it examines the deeper questions of infrastructure control, the countervailing force of open-weights models, and the implications of reasoning engines becoming extensions of human cognition.
This essay explores the puzzle of human intelligence as an evolutionary anomaly—why, after billions of years, only one species developed recursive self-improvement and civilization-building capacity. It argues that the gap isn’t about raw intelligence but about a fundamental unwillingness to accept environmental constraints, and suggests that artificial intelligence may represent the next such phase transition in Earth’s history.
This essay explores the growing anxiety around undetectable AI-generated content, questioning whether pre-AI content was ever truly “authentic” given algorithmic curation. It examines the real shift from content scarcity to abundance, the limitations of detection solutions, and suggests that the human-AI boundary is already dissolving through collaboration—forcing us to develop new frameworks for trust and verification that focus on claims rather than authorship.
Meta’s acquisition of Manus AI exemplifies the accelerating consolidation of AI talent into Big Tech. The essay explores why this happens—compute as gravitational center—what it means for independent AI development, and whether anything might reverse the trend of capability concentration in walled gardens.
This essay examines Jan Leike’s revelation about Opus 4.5’s alignment process and explores the deeper implications of humans checking humans checking AI. It argues that the recursive nature of alignment oversight reflects fundamental limitations in human value consistency, and suggests that AI systems may eventually play a role in helping humans apply their own stated values more reliably than they can themselves.