I think you’re imagining a scenario where every organization either:
is not seriously addressing existential risk, or
has run out of room for more funding
One reason this could happen would be organizational: organizations lose their sense of direction or initiative, perhaps by becoming bloated on money or dragged away from their core purpose by pushy donors. This doesn’t feel stable, as you can always start new organizations, but there may be a lag of a few years between noticing that existing orgs have become rubbish and getting new ones to do useful stuff.
Another reason this could happen would be more strategic: that humanity genuinely can't think of anything it can do that would reduce existential risk. Perhaps there's a fear that meddling will make things worse? Orgs like FHI certainly put resources into strategizing, so this setup wouldn't be the result of a lack of creative thinking. It might be something more fundamental: ensuring the stability of a system as complex as today's technological world may just be a Really Hard Problem.
Even if we don't hit a complete wall, we might hit diminishing returns. If there turns out to be some moral or practical reason why x-risk is on a par with poverty and animals (in terms of importance), then EA would essentially be running out of stuff to do.
Which we eventually want—but not while the world is full of danger and suffering.