Executive summary: The option value argument for reducing extinction risk is weak, since it fails in the dystopian futures where it’s most needed. Future agents in such worlds likely won’t have the motivation or coordination to shut down civilization.
Key points:
The option value argument says we should reduce extinction risk so future agents can decide whether to continue civilization. But this requires that agents in dystopian futures have the altruism and coordination to shut civilization down, which is unlikely.
Bad futures with indifferent or malicious agents won’t make the right decisions about ending civilization. So option value doesn’t help in the cases where it’s most needed.
Most expected disvalue comes from uncontrolled futures passing points of no return, not from deliberate choices made after moral reflection. So option value doesn’t preserve much value for downside-focused views.
Reducing extinction risk doesn’t entail reducing s-risks, which could still occur in dystopian futures that survive. So it’s not the most robust approach under moral uncertainty.
Some option value remains, but it is not strong enough alone to make extinction risk an overwhelming priority compared to improving future quality.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.