Isn’t the move here something like, “If doom soon, then all pre-doom value nets to zero”?
Which, tbh, I’m not sure is wrong. If I expect doom tomorrow, all of today’s effort should go toward reducing it; one night’s sleep free of mosquito bites doesn’t matter. Stretching this outward in time doesn’t change the calculus much for a while, maybe a lifetime or a few. And a huge chunk of x-risk is concentrated in this century.
The x-risk models actually support the opposite conclusion, though. They generally focus on the balance of two quantities, v and r, where v is the value of a time period and r is the risk of extinction in that period.
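To make the setup concrete (my sketch of the standard toy model, not a formula from this thread): with constant per-period value v and constant per-period extinction risk r, expected total value is a geometric sum, and the survival factor (1 - r)^t plays exactly the role of a discount factor:

$$\mathbb{E}[V] \;=\; \sum_{t=0}^{\infty} v\,(1-r)^{t} \;=\; \frac{v}{r}$$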
If r is sufficiently high, it operates as a de facto discount rate on the future, which means the most effective way to increase total good is to increase present v rather than reduce r. By analogy: if a patient is overwhelmingly likely to succumb to terminal cancer, the way to increase their wellbeing may be to give them morphine and palliative care rather than prescribe risky treatments that may or may not work (and whose benefits might only be temporary).
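As a numeric illustration (mine, not the commenter’s; the horizon, values, and intervention sizes below are made-up), here is the same tradeoff in code: a one-off boost to present value versus a one-off cut to this period’s risk, evaluated under high and low background risk.

```python
# Sketch of the v/r model above (illustrative numbers, not anyone's real estimates).
# Expected total value = sum over periods of v times P(still alive at that period).

def expected_value(v: float, risks: list[float]) -> float:
    """Per-period value v; survival probability decays by (1 - r_t) each period."""
    ev, alive = 0.0, 1.0
    for r in risks:
        ev += alive * v      # value realized this period if we are still here
        alive *= 1.0 - r     # survive to the next period with probability 1 - r
    return ev

T = 2_000                    # long horizon; the tail adds ~nothing once (1 - r)^t is tiny
v, dv, dr = 1.0, 0.1, 0.01   # baseline value, one-off value boost, one-off risk cut

for r in (0.20, 0.01):       # high vs low per-period extinction risk
    base = expected_value(v, [r] * T)
    gain_from_v = dv                                                  # boost period-0 value
    gain_from_r = expected_value(v, [r - dr] + [r] * (T - 1)) - base  # cut period-0 risk
    print(f"r={r:.2f}: EV={base:6.1f}  boost v: +{gain_from_v:.2f}  cut r: +{gain_from_r:.2f}")
```

At r = 0.20 the future is worth little in expectation (EV ≈ 5), so the same absolute risk cut gains only ≈ 0.05 and the present-value boost wins; at r = 0.01 that risk cut gains ≈ 1.0 and dominates. That’s the ‘de facto discount rate’ point in numbers.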
Now one could argue against this by saying ‘do not go gentle into that good night’: even in the face of destruction we should still do our best. I have sympathy with that view, but it isn’t grounded in EA’s general ‘follow the EV’ framework, and it would have consequences beyond supporting longtermism.