This essay examines the Future of Humanity Institute’s argument that advanced AI poses extinction risk, arguing instead that the danger runs through flawed human nature rather than AI’s inherent properties. Drawing on historical patterns of technology capture by power structures, it contends that open source AI may be safer than closed systems, despite conventional safety wisdom, because distributed danger is more correctable than danger concentrated in institutions with poor track records.
This essay examines Eliezer Yudkowsky’s advice against splitting donations between AI safety organizations and argues that while optimization-focused arguments for concentration may be technically correct, they rest on false precision. The case for splitting donations instead rests on epistemic humility, organizational capture dynamics, the role of luck in wealth accumulation, and the value of decentralization as risk management under deep uncertainty.