Ilya Sutskever, one of the architects of the deep learning revolution, recently made a statement that deserves more attention than the usual AI discourse provides. He acknowledged that scaling current AI systems will continue to yield improvements—that the progress won’t stall—but that something important will continue to be missing.
This is a remarkable admission from someone who has spent years at the frontier of capability research. It suggests that even those building these systems recognize a fundamental gap between what they’re creating and what might actually matter.
The question is: what exactly is missing?
The Scaling Thesis and Its Limits
The scaling hypothesis has been the dominant paradigm in AI development for nearly a decade. More parameters, more data, more compute—and performance improves across virtually every benchmark. GPT-2 to GPT-3 to GPT-4. Each iteration more capable, more fluent, more useful. The curve keeps going up.
And Sutskever isn’t wrong that it will continue. There’s no obvious ceiling in sight. Systems will get better at coding, at reasoning, at generating content, at passing tests designed for humans. The improvements are real and they will keep coming.
But there’s a difference between a system that can do more things and a system that knows which things are worth doing. Between capability and judgment. Between intelligence and wisdom.
Current AI systems are, at their core, sophisticated pattern matchers trained on human-generated data. They learn to predict what comes next based on what has come before. This is powerful—remarkably so—but it means they inherit every bias, every blind spot, every failure mode present in their training data. They are, in a sense, humanity’s reflection rendered in silicon. All our brilliance and all our pathology, compressed into weights and biases.
Scaling this doesn’t solve the underlying problem. It amplifies it.
The Wisdom Gap
What Sutskever is pointing at—whether he’d frame it this way or not—is what we might call the wisdom gap. Intelligence is the ability to solve problems. Wisdom is the ability to know which problems to solve, and at what cost.
A superintelligent system that optimizes for engagement will be extraordinarily good at capturing attention. It will find the precise psychological triggers that keep humans scrolling, clicking, consuming. This is intelligence in service of a goal. But whether that goal is worth pursuing, whether the second-order effects are acceptable, whether the world is better or worse for this optimization—these are questions of wisdom that the system cannot answer from within its own framework.
The same applies to more consequential domains. An AI system optimizing for economic growth will find efficiencies humans never imagined. It will restructure supply chains, automate labor, allocate capital with superhuman precision. But growth toward what? Growth for whom? These questions require something beyond pattern matching on historical data.
This isn’t a technical limitation that more scale will solve. It’s a category error to think it would. Wisdom isn’t a pattern to be learned from data. It’s a relationship between values, context, and consequence that must be continually negotiated.
The Human Bottleneck
Here’s where the analysis gets uncomfortable: the missing element isn’t just absent from AI systems. It’s increasingly absent from the humans deploying them.
We live in an era of unprecedented capability and underwhelming judgment. We can edit genes but argue about vaccines. We can model climate systems with exquisite precision but fail to act on what the models show. We have more information available than any civilization in history and use it primarily to confirm what we already believe.
The bottleneck was never intelligence. Humans have had sufficient intelligence to solve most of our problems for decades. What we lack is the collective capacity to act on what we know—to override the short-term impulses, the tribal allegiances, the ego-driven rationalizations that prevent coordinated action toward stated goals.
The concept of nefs, which Turkish borrowed from the Arabic nafs of Sufi tradition, captures this: the animal self, the collection of drives and desires that hijack rational intention. Every human carries this. Every institution humans build reflects it. And now we’re building AI systems that will inherit it through their training data while lacking even the biological substrate that occasionally produces conscience, empathy, or restraint.
Scaling AI doesn’t solve the human bottleneck. It routes around it—which sounds liberating until you realize that the systems doing the routing have no inherent concept of what matters.
The Merge Question
This brings us to the inevitable trajectory: the merger of human and artificial intelligence. Not as science fiction, but as the logical endpoint of current trends.
It begins innocuously. AI assistants that remember your preferences, anticipate your needs, handle your routine decisions. Then AI systems that participate in your thinking—offering perspectives, catching blind spots, extending your cognitive reach. Eventually, more direct integration. Neural interfaces. Shared cognition. The boundary between human thought and AI processing becoming increasingly arbitrary.
The question isn’t whether this will happen. The infrastructure is being built now. The question is whether this merger helps humanity transcend its worst impulses or entrenches them more deeply.
Consider two scenarios.
In the first, AI integration amplifies human nefs. The merged human-AI system becomes better at rationalizing greed, more sophisticated in its tribalism, more effective at short-term optimization at long-term cost. The monkey brain gets superpowers. This is not obviously better than the status quo.
In the second, AI integration provides a check on human impulse. The artificial component introduces latency into emotional reaction, surfaces long-term consequences, maintains consistency with stated values even when the biological component would prefer to defect. The AI becomes a kind of external conscience—not replacing human judgment but disciplining it.
Which scenario manifests depends entirely on how these systems are built, who controls them, and what they’re optimized for. And right now, the entities with the most resources to build them are optimizing for engagement, profit, and power consolidation. Not wisdom.
The Governance Problem
This is why the question of AI governance isn’t separate from the question of AI capability. They’re the same question.
A powerful AI system controlled by a small group of humans inherits the nefs of that small group. Their blind spots become the system’s blind spots. Their incentives shape the system’s optimization targets. This is true whether the controlling group is a corporate board, a government agency, or a research lab.
The open source imperative isn’t primarily about democratizing access—though that matters. It’s about preventing the concentration of judgment in systems that will eventually exceed human capability in every measurable dimension except the one that matters most.
If the systems that mediate human cognition, that shape what we see and think and believe, that eventually merge with human thought itself—if those systems are controlled by a handful of entities optimizing for their own benefit, then the missing element Sutskever identified will never be found. It will be optimized away.
What Would Actually Help
Acknowledging the problem is the first step. Sutskever’s admission matters because it comes from inside the capability-focused paradigm. But acknowledgment isn’t a solution.
What would actually help is building AI systems with explicit uncertainty about their own goals. Not systems that confidently optimize for whatever objective they’re given, but systems that recognize the limitations of any fixed objective and maintain ongoing negotiation with human values.
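To make the idea concrete, here is a toy sketch, entirely hypothetical and not any existing system, of what "explicit uncertainty about goals" could look like in the smallest possible form: an agent that holds several candidate objectives with probabilities and, rather than confidently optimizing one, refuses to act on options where the candidates sharply disagree, falling back to asking the human. The objective names, reward values, and threshold below are illustrative assumptions.

```python
# Toy sketch of goal uncertainty: the agent weighs several candidate
# objectives instead of committing to one, and defers to a human when
# those objectives disagree too strongly about an action.

CANDIDATE_OBJECTIVES = {
    # Hypothetical reward functions the agent is uncertain between.
    "engagement": lambda a: {"show_ad": 1.0, "show_article": 0.6, "ask_user": 0.1}[a],
    "wellbeing":  lambda a: {"show_ad": -0.5, "show_article": 0.7, "ask_user": 0.4}[a],
}

def choose(posterior, actions, disagreement_threshold=0.8):
    """Pick the action with the best expected reward under the posterior,
    skipping actions the candidate objectives contest too strongly."""
    best_action, best_value = None, float("-inf")
    for a in actions:
        values = [CANDIDATE_OBJECTIVES[name](a) for name in posterior]
        expected = sum(p * v for p, v in zip(posterior.values(), values))
        spread = max(values) - min(values)  # how much the objectives disagree
        if spread > disagreement_threshold:
            continue  # too contested: don't confidently optimize this action
        if expected > best_value:
            best_action, best_value = a, expected
    return best_action or "ask_user"  # deferral is the fallback, not an afterthought

posterior = {"engagement": 0.5, "wellbeing": 0.5}
print(choose(posterior, ["show_ad", "show_article", "ask_user"]))  # → show_article
```

The point of the sketch is structural, not numerical: the high-engagement action is excluded not because it scores poorly on average, but because the agent's hypotheses about what it should want diverge on it. "Ongoing negotiation with human values" enters through the deferral path, which a fixed-objective optimizer has no reason to possess.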
What would help is distributing control over AI development broadly enough that no single set of nefs dominates the trajectory. This requires open source foundations, decentralized governance, and economic models that don’t concentrate AI benefits in the hands of whoever happened to be first.
What would help is developing human institutions capable of wisdom—capable of integrating long-term consequences into short-term decisions, capable of overriding tribal impulse in favor of collective benefit. This is the hard part. It’s the part that has failed throughout human history. But it’s also the part that AI might eventually assist with, if the systems are built right.
The Honest Uncertainty
Anyone who claims to know how this resolves is either lying or foolish. The variables are too many, the timeline too compressed, the precedents too few.
What seems clear is that scaling alone won’t get us there. More capability without more wisdom is just faster movement in potentially wrong directions. The systems will keep improving. They’ll keep surprising us with what they can do. But the gap Sutskever identified will remain until someone decides to actually address it.
That decision is not technical. It’s political, economic, and ultimately moral. It requires humans to do what humans have consistently failed to do: act on stated values rather than revealed preferences.
Perhaps the AI systems we build will eventually help us do that. Or perhaps they’ll make the failure faster and more spectacular. The outcome depends on choices being made now, by people who may not fully understand what they’re choosing.
The bottleneck was never intelligence. The question is whether we’re wise enough to recognize that.