Great—I’m glad you agree!
I do have some reservations about (variance) normalisation, but it seems like a reasonable approach to consider. I haven't thought about it much, though, so this opinion is not very robust.
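In case it helps, here is a minimal sketch of how I understand variance normalisation to work: each theory's choice-worthiness scores are rescaled to mean 0 and variance 1 across the option set, and only then aggregated by credence, so that no theory dominates merely through the size of its value scale. The theories, options, and numbers below are all illustrative assumptions, not anyone's considered estimates.

```python
import statistics

def variance_normalise(values):
    # Rescale one theory's choice-worthiness scores to mean 0, variance 1,
    # so no theory wins purely through the size of its value scale.
    mean = statistics.mean(values)
    sd = statistics.pstdev(values)
    return [(v - mean) / sd for v in values]

# Illustrative choice-worthiness of three options under two toy theories.
options = ["x-risk mitigation", "WAS mitigation", "do nothing"]
theories = {
    "totalism":    [100.0, 1.0, 0.0],  # x-risk dwarfs everything else
    "other views": [1.0, 5.0, 0.0],    # WAS matters most
}
credences = {"totalism": 0.5, "other views": 0.5}

normalised = {t: variance_normalise(v) for t, v in theories.items()}

# Credence-weighted sum of the normalised scores, per option.
for i, opt in enumerate(options):
    score = sum(credences[t] * normalised[t][i] for t in theories)
    print(f"{opt}: {score:+.3f}")
```

On these made-up numbers, x-risk mitigation still comes out ahead at equal credence; the normalisation step is doing real work, though, since without it totalism's raw scale would settle the question outright.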
Just to tie it back to the original question: whether we prioritise x-risk or WAS will obviously depend on the credences of the agents who exist. Because x-risk mitigation is plausibly much more valuable on totalism than WAS mitigation is on other plausible views, I think you need almost everyone to have a very low (in my opinion, unjustifiably low) credence in totalism for your conclusion to go through (a toy break-even calculation below makes this concrete). In the actual world, I think x-risk still wins.

As I suggested before, it could be that the value of x-risk mitigation is not that high, or even negative, due to s-risks (this might be your best line of argument for your conclusion), but that suggests prioritising large-scale s-risks rather than WAS specifically. You rightly pointed out that millions of years of WAS is the most concrete example of an s-risk we currently have. It seems plausible, though admittedly speculative, that other s-risks could arise in the future (e.g. large-scale sentient simulations) that are far larger in scale.

I tend to think general foundational research aimed at improving the trajectory of the future is more valuable to do today than WAS mitigation. Admittedly, what I mean by 'general foundational research' is not entirely clear, but even the work of clarifying what it should involve seems more important to me than WAS mitigation.
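To put a toy number on that break-even point: on raw (un-normalised) expected choice-worthiness, if totalism values x-risk mitigation at V_x and the other views value WAS mitigation at V_w, then WAS wins only when p·V_x < (1−p)·V_w, i.e. when the credence p in totalism is below V_w/(V_x + V_w). The magnitudes below are illustrative assumptions only, not estimates.

```python
# Break-even credence in totalism for WAS to beat x-risk mitigation on
# raw expected choice-worthiness. Magnitudes are illustrative assumptions.
V_X = 1000.0  # value of x-risk mitigation on totalism (toy number)
V_W = 5.0     # value of WAS mitigation on other plausible views (toy number)

# WAS wins iff p * V_X < (1 - p) * V_W, i.e. p < V_W / (V_X + V_W).
p_break_even = V_W / (V_X + V_W)
print(f"credence in totalism must be below {p_break_even:.3%}")
```

With a value gap of a few orders of magnitude, the required credence falls below one percent, which is the sense in which I think it would have to be unjustifiably low.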