It sounds like you’re assuming a common scale between the theories (maximizing expected choice-worthiness).
A common scale isn’t necessary for my conclusion (I think you’re substituting a stronger claim?), and I didn’t invoke one. As I wrote in my comment, on negative utilitarianism, s-risks that are many orders of magnitude smaller than worse ones, without correspondingly huge differences in probability, get ignored in favor of the latter. On variance normalization, bargaining solutions, or a variety of other methods that don’t amount to the dictatorship of one theory, the weight for an NU view is not going to spend its decision-influence on the former rather than the latter when both are non-vanishing possibilities.
I would think something more like your hellish example + billions of times more happy people would be more illustrative. Some EAs working on s-risks do hold lexical views.
Sure (which will make the s-risk definition even more inapt for those people), and those scenarios will be approximately ignored relative to scenarios more like 1/100 or 1/1000 being tortured on a lexical view, so there will still be the same problem of s-risk not tracking what’s action-guiding or a big deal in the history of suffering.
Ah, in the quote I took, I thought you were comparing s-risks to x-risks where the good is lost when giving non-negligible credence to non-negative views, but you’re comparing s-risks to far worse s-risks (x-risk-scale s-risks). I misread; my mistake.