I think this is kind of the perennial problem for any nominally altruistic group. I’m not sure how you’d insure against it or how you’d even know if it was happening, but I definitely agree EA should at least acknowledge the potential problem that having large sums of money and prestige flow through the organisation creates, more so than it currently does.
Personally I think Orwell was wrong that the Soviets’ main problem was Napoleon’s greed (don’t ask me what the real problem was); the semi-recently opened archives give pretty clear evidence that at least the members of the Politburo were believers in their cause. So maybe corruption isn’t actually very common in real-world altruistic organisations.
To me, it seems to be evidence that you can be a believer in a cause, but still become corrupt because you use that very belief to justify self-serving logic about how what you’re doing really advances the cause.
If anything, that makes it even more relevant to EA, because I think the risk of EAs becoming nakedly self-interested is low; the more likely failure mode is using EA to fool yourself and rationalize self-serving behavior.