I don’t consider this rambling. I didn’t grok it the first time I read your comment, but it seems plenty insightful now. Thanks for helping out!
maybe with xrisk the amount of good to be done is so huge that we don’t hit a limit for a while
It seems to me the bottleneck here isn’t the output of good to be achieved in the future; it could instead be the input of donation targets in the present. For example, every organization seeking to reduce existential risk that we can think of could hit a point at which further donation isn’t a good giving opportunity.
This scenario isn’t too implausible. The Future of Life Institute could grant the $10 million donation it received from Elon Musk to MIRI, FHI, and all the other low-hanging fruit for existential risk reduction. If those organizations hit more similar windfalls, or retain their current body of donors, they might not be able to allocate further funds effectively; that is, they may run into room-for-more-funding problems for multiple years. Suddenly, effective altruism would need to seek brand new opportunities for reducing existential risk, which could be difficult.
I think you’re imagining a scenario where every organization either:
is not seriously addressing existential risk, or
has run out of room for more funding
One reason this could happen would be organizational: organizations lose their sense of direction or initiative, perhaps by becoming bloated on money or dragged away from their core purpose by pushy donors. This doesn’t feel stable, as you can always start new organizations, but there may be a lag of a few years between noticing that existing orgs have become rubbish and getting new ones to do useful stuff.
Another reason this could happen would be more strategic: that humanity actually can’t think of anything it can do that will reduce existential risk. Perhaps there’s a fear that meddling will make things worse? Orgs like FHI certainly put resources into strategizing, so this setup wouldn’t be a result of a lack of creative thinking. It might be something more fundamental: ensuring the stability of a system as complex as today’s technological world may just be a Really Hard Problem.
Even if we don’t hit a complete wall, we might hit diminishing returns. If there turns out to be some moral or practical reason why xrisk is on a par with poverty and animals (in terms of importance), then EA would essentially be running out of stuff to do.
Which we eventually want—but not while the world is full of danger and suffering.