We could devote 100% of currently available resources to existential risk reduction, live in austerity, and never be finished ensuring our own survival. However, if we increase the value of futures where we survive, we will develop more and more resources that can then be put toward existential risk reduction. People will be not only happier but also more capable and skilled when we create a world where people can thrive rather than just survive. The highest-quality futures are the most robust.
Edit: I misinterpreted the prompt initially (I think you did too); “value of futures where we survive” is meant specifically as “long-run futures, past transformative AI”, not all futures including the short term. So digital minds, suffering risk, etc. Pretty confusing!
This argument seems pretty representative here, so I’ll just note that it is only sensible under two assumptions:
1. Transformative AI isn’t coming soon (say, not within ~20 years); and
2. A substantial amount of short-term value lies in indirect preparation for TAI.

If we assume the latter, this excludes many interventions whose returns are primarily immediate, with any long-term returns accruing past the time window. So malaria nets? No. Most animal welfare interventions? No. YIMBYism in Silicon Valley? Maybe yes. High-skilled immigration? Maybe yes. Political campaigns? Yes.
Of course, we could just say either that we actually aren’t all that confident about TAI, or that we are but that immediate welfare concerns simply outweigh marginal preparation or risk reduction.
So either reject one of the assumptions above, or go all in, on principle, toward portfolio diversification. Both options give me some pause.