Eliezer Yudkowsky recently posed a simple question to his followers: how should you split donations between organizations you support at different levels of enthusiasm? His advice was straightforward—if you rate MIRI at 8/10 and Lightcone at 2/10, split proportionally. Then he acknowledged that “galaxy-brained arguments” exist against this approach.
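Taken literally, the proportional rule is just normalization: divide each rating by the sum of the ratings and allocate accordingly. A minimal sketch, where the $1,000 budget and the dictionary of ratings are purely illustrative:

```python
# Hypothetical enthusiasm ratings (out of 10) and an illustrative budget.
ratings = {"MIRI": 8, "Lightcone": 2}
budget = 1000  # dollars

# Normalize ratings into weights, then split the budget proportionally.
total = sum(ratings.values())
split = {org: budget * score / total for org, score in ratings.items()}
print(split)  # {'MIRI': 800.0, 'Lightcone': 200.0}
```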

Those galaxy-brained arguments are probably correct on their own terms. But they’re solving the wrong problem.

The Optimizer’s Trap

The sophisticated case against splitting donations goes something like this: marginal utility matters. If Organization A is genuinely better than Organization B, every dollar you give to B is a dollar that could have created more value at A. The expected value calculation is clear. Concentrate your resources where they’ll do the most good.

This argument has powered the effective altruism movement’s emphasis on cause prioritization. Don’t spread thin. Find the highest-leverage intervention and push hard. The math checks out.

But the math assumes things we cannot know.

First, it assumes you can accurately assess relative value between organizations. You rated them 8/10 and 2/10—but based on what? Their published research? Their stated mission? The charisma of their founders? Your assessment is a guess wrapped in a feeling dressed up as analysis. The confidence interval on that 8 versus 2 distinction is enormous.

Second, it assumes organizations are static. The one you rated highly today might drift, get captured by internal politics, or simply exhaust its best ideas. The one you rated lower might be two hires away from a breakthrough. Your snapshot assessment cannot account for organizational dynamics over time.

Third, and most importantly, it assumes your judgment is uncorrupted. It isn’t.

The Founder Capture Problem

Every organization with a center eventually gets captured by whoever controls that center. This is not cynicism. It’s observation.

Nonprofits are particularly vulnerable. Unlike companies, they lack the corrective mechanism of market feedback. A company making bad products eventually loses customers. A nonprofit making bad arguments can survive indefinitely on donor inertia and founder reputation.

The founder effect compounds over time. Early decisions become load-bearing walls. The founder’s intuitions become organizational gospel. Their blindspots become institutional blindspots. Their enemies become the organization’s enemies. Their aesthetic preferences shape everything from hiring to research directions.

This isn’t because founders are bad people. It’s because they’re people. Human judgment degrades under the influence of status, attention, and the psychological need to believe one’s life work matters. The longer someone leads an organization, the harder it becomes for them to hear that they might be wrong.

MIRI and Lightcone both suffer from this dynamic, as does every organization led by humans for extended periods. The question isn’t whether founder capture happens. It’s how much and in what ways.

When you concentrate donations in a single organization, you’re betting that this particular founder’s degradation pattern won’t matter. You’re betting your assessment of their current trajectory will hold. You’re betting their blindspots won’t become catastrophic.

That’s a lot of bets.

Decentralization as Risk Management

Splitting donations is not about maximizing expected value. It’s about reducing variance.

The AI safety ecosystem—which is what Yudkowsky is really talking about here—is operating under profound uncertainty. Nobody knows which research directions will matter. Nobody knows which organizational forms will prove most effective. Nobody knows whether technical alignment, governance work, or public communication will be the binding constraint.

In environments of deep uncertainty, diversification isn’t a failure of optimization. It’s the only rational strategy.

Consider what you’re actually doing when you donate to multiple organizations:

You’re funding different research bets. Maybe MIRI’s agent foundations work matters. Maybe Lightcone’s infrastructure and community-building matters. You don’t know which will prove crucial, and neither does anyone else.

You’re supporting different failure modes. If one organization implodes—leadership conflict, strategic dead-end, reputational damage—your entire contribution to the field doesn’t implode with it.

You’re maintaining optionality. The organization you rated 2/10 today might become clearly more important in two years. Having an existing relationship through past donations creates options.

You’re reducing single points of failure in the ecosystem itself. Fields dominated by one or two organizations become brittle. They develop monocultures of thought. They become vulnerable to the specific blindspots of their dominant players.
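The variance-reduction claim can be checked with a toy Monte Carlo simulation. The impact distribution below (a 10% chance of a 10x breakthrough, otherwise a modest 0.5x return) is purely illustrative, not an estimate of any real organization; what matters is that when two organizations have the same expected impact per dollar, splitting leaves your expected impact unchanged while cutting the variance roughly in half:

```python
import random
from statistics import mean, pvariance

random.seed(0)

def simulate(weights, trials=100_000):
    # Each organization's per-dollar impact is an independent draw:
    # 10% chance of a 10x breakthrough, 90% chance of a 0.5x return.
    # These numbers are illustrative assumptions, not estimates.
    def draw():
        return 10.0 if random.random() < 0.1 else 0.5
    outcomes = [sum(w * draw() for w in weights) for _ in range(trials)]
    return mean(outcomes), pvariance(outcomes)

m_conc, v_conc = simulate([1.0])        # concentrate everything in one org
m_split, v_split = simulate([0.5, 0.5]) # split evenly between two orgs
# Expected impact is the same either way; the split portfolio's
# variance is roughly half the concentrated portfolio's.
```

Same mean, lower variance: that is the whole argument for diversification under uncertainty, stated in two lines of arithmetic.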

The Luck Problem

There’s another dimension to this that the optimization frame misses entirely.

Most wealth accumulation involves substantial luck. The timing of when you were born. The country you were born in. Which opportunities happened to cross your path. Which of your many decisions happened to pay off.

This isn’t a moral judgment. It’s statistical reality. The correlation between effort and outcome is positive but loose. The correlation between merit and wealth is even looser.

If luck played a large role in how you acquired your capital, what gives you confidence that luck’s influence stops when you deploy it? Your assessment of which organization deserves more wasn’t generated by a perfect utility-calculating machine. It was generated by you—a product of your particular path through the world, your particular information diet, your particular social connections.

Yudkowsky’s advice to split in proportion to enthusiasm at least acknowledges this. The 8/10 versus 2/10 framing doesn’t pretend to be objective analysis. It’s explicitly emotional response. And emotional responses, while imperfect, at least aren’t pretending to be something they’re not.

The galaxy-brained alternative—concentrate resources for maximum expected value—pretends your assessment is more reliable than it is. It launders uncertainty into false precision.

When Concentration Makes Sense

None of this means splitting is always right.

If you have genuine inside information about an organization—you work there, you know the founders personally, you’ve seen their research process up close—concentration might be justified. Your assessment isn’t just vibes; it’s grounded observation.

If you’re making very large donations, concentration allows you to build a relationship, potentially influence direction, and receive detailed updates that further inform your judgment. The feedback loop is tighter.

If one organization is clearly solving a different problem than the other—one works on technical alignment, one works on policy—splitting might mean you’re just funding two separate causes rather than diversifying within a cause.

But for most donors making modest contributions based on publicly available information, splitting is the honest response to uncertainty.

The Meta-Point

Yudkowsky’s framing reveals something interesting about how the rationalist community approaches these questions.

He presents proportional splitting as the obvious default, galaxy-brained concentration as the sophisticated deviation. But in practice, the community often inverts this. Concentration gets treated as the smart, rigorous approach. Splitting gets treated as the failure to think clearly.

This inversion happens because the optimization frame is more intellectually satisfying. It lets you feel like you’re doing real analysis rather than just spreading bets. It produces a clear answer rather than a shrug.

But intellectual satisfaction is not the same as being correct. Sometimes the honest answer is: I don’t know enough to concentrate. My assessment is uncertain. The future is uncertain. The organizations themselves will change in ways I can’t predict.

Splitting your donations is an admission of epistemic humility. It says: my confidence in my own judgment has limits.

That admission is uncomfortable in communities that pride themselves on rigorous thinking. It feels like giving up. But giving up on false precision isn’t giving up on thinking clearly. It’s thinking clearly about the limits of thought.

Conclusion

The galaxy-brained arguments against splitting donations are probably correct within their own frame. If you really could reliably assess relative value, concentration would maximize expected impact.

But you can’t. Neither can I. Neither can Yudkowsky, though his assessments are better informed than most.

Splitting donations is decentralization of resources. It’s risk management under uncertainty. It’s honest acknowledgment that your wealth is partly luck and your judgment is partly vibes.

Single organizations get captured by founders’ limitations over time. Concentrated bets assume precision you don’t have. The field benefits from diversity of funding as much as diversity of approach.

Split your donations. Reduce concentration risk. Move on.

There are harder problems to solve than this one.