Richard Dawkins and Daniel Dennett represent something increasingly rare in public discourse: thinkers who apply genuine intellectual rigor to existential questions. When they turn their evolutionary lens to artificial intelligence, they illuminate something most commentators miss entirely. The danger of AI is not the machines. It has never been the machines. The danger is us.

Evolutionary biology offers perhaps the clearest framework for understanding why concentrated AI development poses civilizational risk. Natural selection optimized humans for survival and reproduction on the African savanna, not for making wise decisions about species-altering technology. Our brains are running ancient software - what might be called nefs, the Turkish form of the Islamic concept of nafs: the ego, impulses, and animal drives that override our stated values. We are creatures of greed, tribalism, status-seeking, and catastrophic short-term thinking.

This is not a moral judgment. It is an observation about the architecture of the human mind. Dennett spent decades demonstrating how consciousness emerges from evolutionary pressures that cared nothing for truth or wisdom, only replication. Dawkins showed us that we are survival machines for genes that predate anything resembling human values. These insights are not academic abstractions. They are warnings.

Now consider who controls the development of artificial intelligence.

Six corporations - perhaps seven or eight depending on how you count - hold the future of human cognition in their hands. These are not evil organizations staffed by villains. They are profit-seeking entities governed by humans subject to the same evolutionary pressures as everyone else. Their boards experience greed. Their executives engage in tribal competition. Their shareholders demand quarterly returns. The nefs are fully operational at every level of decision-making.

This is feudal capitalism in its purest form. Not feudalism through bloodlines, but through technological monopoly. The lords of this new order do not inherit castles - they control compute clusters. They do not command armies - they shape the information environment that determines what billions of people believe, want, and fear. The concentration of power is not incidental to how AI development is proceeding. It is the defining feature.

And the stakes could not be higher, because what is coming is not simply better software. It is the human-AI merge.

This sounds like science fiction until you examine what is already happening. Millions of people now think with AI assistants as a routine extension of cognition. Decisions that once required human memory, analysis, and judgment are increasingly offloaded to systems that most users do not understand and cannot audit. The merge has begun. The question is not whether humans will integrate with artificial intelligence but under what terms and controlled by whom.

When Dawkins and Dennett discuss AI risk, they bring something the technology industry systematically lacks: an understanding that human institutions reflect human nature, and that human nature has not meaningfully changed in fifty thousand years. Silicon Valley operates on an implicit theory that smart people with good intentions will produce good outcomes. Evolutionary biology suggests this is naive to the point of recklessness.

Good intentions mean nothing when the nefs are engaged. A CEO may genuinely believe they are building beneficial AI while simultaneously optimizing for market dominance. A research team may pursue alignment while their employer pursues profit. Individual humans are not hypocrites - they are simply unable to act consistently on their stated values when those values conflict with status, wealth, and tribal belonging. This is the human condition, and no amount of ethics boards or mission statements changes the underlying architecture.

The current trajectory optimizes for capture, not liberation.

Consider what liberation would look like: AI that is genuinely open, auditable, and controlled by distributed networks rather than corporate hierarchies. AI that enhances human cognitive sovereignty rather than creating dependency. AI that diffuses power rather than concentrating it. This is technically possible. Open source AI exists and is advancing rapidly. Decentralized governance structures are being developed. The tools for a different future are available.

But tools do not determine outcomes. Power does.

The feudal lords of AI have every incentive to capture regulatory frameworks, starve open alternatives of resources, and establish themselves as indispensable infrastructure. This is not conspiracy - it is the predictable behavior of entities optimizing for survival and growth. Just as medieval lords did not wake up each morning planning oppression but simply acted according to the logic of their position, tech executives do not scheme to enslave humanity. They follow the incentives their system provides.

The result is the same either way.

What makes this moment different from previous technological transitions is the nature of what is being monopolized. The printing press disrupted information distribution, but individual humans retained their cognitive autonomy. The internet centralized attention, but minds remained biologically separate from the network. The human-AI merge eliminates this separation. When cognition itself becomes integrated with systems controlled by a handful of corporations, the distinction between influence and control dissolves.

This is not a prediction about superintelligent machines deciding to harm humans. That scenario may or may not occur. This is an observation about a process already underway: the gradual integration of human thought with AI systems optimized for the interests of their owners. You do not need malevolent AI to produce dystopia. You only need captured AI operating exactly as designed.

Dawkins and Dennett understand selection pressures. They know that systems evolve toward whatever produces their replication, not toward whatever humans claim to value. A corporation under competitive pressure will optimize for competitive advantage. An AI system trained on engagement will optimize for engagement, not truth or human flourishing. An industry structured around winner-take-all dynamics will produce winners who take all. These are not failures of the system. They are the system functioning as designed.

The question is whether humanity will design something different before the current trajectory becomes irreversible.

There is reason for pessimism. The cyclical view of history suggests that every revolution is eventually captured by elites who adapt to the new rules. The printing press enabled the Reformation but also enabled new forms of propaganda. The internet promised democratization but delivered algorithmic manipulation at scale. AI promises liberation but is currently producing an unprecedented concentration of cognitive power.

But there is also a narrow path forward. Open source AI development continues despite resource disadvantages. Awareness of these dynamics is growing, partly because thinkers like Dawkins and Dennett lend their credibility to the conversation. The nefs that drive corporate capture also drive resistance when people recognize their autonomy is threatened.

The outcome depends on choices made in the next few years by people who understand what is at stake. Not choices about AI safety in the abstract, but choices about power: who will control the systems that increasingly mediate human thought, and whether alternatives to corporate feudalism can be built before the window closes.

The evolutionary perspective strips away comfortable illusions. Humans are not rational actors who will choose wisely when presented with evidence. We are survival machines running outdated software, prone to capture by our own drives and by those who know how to exploit them. Any solution that assumes better human nature is no solution at all.

What remains is building systems that work despite human nature - decentralized structures that cannot be captured because they have no center, open technologies that cannot be monopolized because the code is free, governance mechanisms that encode values rather than trusting humans to uphold them.

This is not utopian dreaming. It is the only realistic path to avoiding serfdom in the age of AI. The alternative is a future where the human-AI merge happens on terms dictated by whoever won the current race for dominance. That future is comfortable for the lords and tolerable for the serfs. It is stable. It may even feel free to those who know nothing else.

But it will not be sovereignty. And once the merge is complete, the opportunity to choose differently will be gone.