This essay examines Eliezer Yudkowsky’s advice on splitting donations among AI safety organizations and argues that, while optimization-focused arguments for concentration may be technically correct, they assume a false precision. The case for splitting donations rests instead on epistemic humility, organizational capture dynamics, the role of luck in wealth accumulation, and the value of decentralization as risk management under deep uncertainty.