I don’t think that high x-risk implies, all else equal, that we should focus more on x-risk: high x-risk also means that the expected value of the future is lower. What we should care about is high tractability of x-risk reduction, which sometimes, but not necessarily, corresponds to a high probability of x-risk.
Good point. I think that if x-risk is very low, it is less urgent/important to work on (so the conditional works in that direction, I reckon). But I agree that the inverse (if x-risk is very high, it is very urgent/important to work on) isn’t always true, though I think it usually is: bigger risks are generally more tractable to work on.
I think high x-risk makes working on x-risk more valuable only if you believe that you can have a durable effect on the level of x-risk. Here’s MacAskill talking about the hinge-of-history hypothesis (which is closely related to the ‘time of perils’ hypothesis):
Or perhaps extinction risk is high, but will stay high indefinitely, in which case in expectation we do not have a very long future ahead of us, and the grounds for thinking that extinction risk reduction is of enormous value fall away.
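To make the quoted worry concrete (my own back-of-the-envelope illustration, not MacAskill’s): suppose extinction risk is a constant r per century, forever. The century in which extinction occurs is then geometrically distributed, so the expected length of the future is

$$\mathbb{E}[T] \;=\; \sum_{t=1}^{\infty} t \, r (1-r)^{t-1} \;=\; \frac{1}{r}\ \text{centuries}.$$

With r = 10% per century, the expected future is only about 10 centuries, and even eliminating this century’s risk entirely adds only about one century in expectation. But if risk is high now and drops to near zero afterwards (the ‘time of perils’ picture), surviving this century secures an astronomically long expected future, and reducing present risk becomes correspondingly valuable.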