Fair enough on the definitions. I had this talk in mind, but Max Daniel made a similar point about the definition in parentheses. I’m not sure people have cases like astronomical numbers of (not extremely severe) headaches in mind, but I suppose without any kind of lexicality, there might not be any good way to distinguish. I would think something more like your hellish example + billions of times more happy people would be more illustrative. Some EAs working on s-risks do hold lexical views.
EDIT: below was based on a misreading.
> With even a tiny weight on views valuing good parts of future civilization the former could be an extremely good world, while the latter would be a disaster by any reasonable mixture of views. Even with a fanatical restriction to only consider suffering and not any other moral concerns, the badness of the former should be almost completely ignored relative to the latter if there is non-negligible credence assigned to both.
This seems to me to require pretty specific assumptions about how to deal with moral uncertainty. It sounds like you’re assuming a common scale between the theories (maximizing expected choice-worthiness), but that too could lead to fanaticism if you give any credence to lexicality. While I think there’s an intuitive case for a common scale when comparing certain theories (e.g. suffering should be valued roughly the same regardless of the theory), it also seems like the most restrictive approach to moral uncertainty among those discussed in the literature, and I’m not aware of any other approach that would lead to your conclusion. If you gave equal weight to negative utilitarianism and classical utilitarianism, for example, and used any other approach to moral uncertainty, it’s plausible to me that s-risks would come out ahead of x-risks (although there’s some overlap in causes, so you might work on both).
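To make the fanaticism worry concrete, here’s a toy sketch of maximizing expected choice-worthiness on a common scale. The credences, option names, and choice-worthiness numbers are all invented purely for illustration, not drawn from anyone’s actual views:

```python
# Toy sketch of maximizing expected choice-worthiness (MEC) on an
# assumed common intertheoretic scale. All numbers are hypothetical.
credences = {"classical_util": 0.5, "negative_util": 0.5}

# Choice-worthiness of each (hypothetical) intervention under each
# theory, on the contested common scale:
choiceworthiness = {
    "reduce_x_risk": {"classical_util": 100.0, "negative_util": 5.0},
    "reduce_s_risk": {"classical_util": 10.0, "negative_util": 60.0},
}

def mec(option):
    return sum(credences[t] * choiceworthiness[option][t]
               for t in credences)

for option in choiceworthiness:
    print(option, mec(option))  # with these numbers, x-risk wins

# But give the negative view a lexical (unbounded) disvalue for severe
# suffering, and any nonzero credence in it dominates the whole sum:
choiceworthiness["reduce_s_risk"]["negative_util"] = float("inf")
print(mec("reduce_s_risk"))  # inf -- the fanaticism worry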
You could even go up a level and apply a method for handling moral uncertainty to your uncertainty over which approach to moral uncertainty to use on first-order normative theories, and as long as you don’t put most of your credence in a common-scale approach, I don’t think your conclusion would follow.
> It sounds like you’re assuming a common scale between the theories (maximizing expected choice-worthiness).
A common scale isn’t necessary for my conclusion (I think you’re substituting it for a stronger claim?) and I didn’t invoke it. As I wrote in my comment, on negative utilitarianism, s-risks that are many orders of magnitude smaller than worse ones get ignored in favor of the latter, absent correspondingly huge differences in probability. On variance normalization, or bargaining solutions, or a variety of other methods that don’t amount to the dictatorship of one theory, the weight given to an NU view is not going to spend its decision-influence on the former rather than the latter when both are non-vanishing possibilities.
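As a rough illustration of that last point, here’s a variance-normalization sketch; the option set and every number in it are made up purely to show the mechanics:

```python
# Rough sketch of variance normalization across theories; all numbers
# and option names are invented for illustration.
import statistics

credences = {"NU": 0.5, "CU": 0.5}

# Raw choice-worthiness of three options under each theory (made up):
raw = {
    "NU": {"status_quo": 0.0, "prevent_small_s_risk": 1.0,
           "prevent_huge_s_risk": 1_000_000.0},
    "CU": {"status_quo": 0.0, "prevent_small_s_risk": 1.0,
           "prevent_huge_s_risk": 100.0},
}

def normalize(values):
    """Rescale one theory's choice-worthiness to mean 0, variance 1."""
    mean = statistics.mean(values.values())
    sd = statistics.pstdev(values.values())
    return {k: (v - mean) / sd for k, v in values.items()}

normalized = {t: normalize(raw[t]) for t in raw}
scores = {o: sum(credences[t] * normalized[t][o] for t in credences)
          for o in raw["NU"]}
print(scores)
# After normalization, NU's bounded decision-influence goes almost
# entirely to the huge s-risk; the small one is nearly ignored rather
# than dominating via sheer magnitude on NU's raw scale.
```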
> I would think something more like your hellish example + billions of times more happy people would be more illustrative. Some EAs working on s-risks do hold lexical views.
Sure (which will make the s-risk definition even more inapt for those people), and those scenarios will be approximately ignored relative to scenarios more like 1/100 or 1/1000 of the population being tortured on a lexical view, so there will still be the same problem of s-risk not tracking what’s action-guiding or a big deal in the history of suffering.
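A toy calculation of that comparison (population size, probabilities, and tortured fractions are all invented):

```python
# Toy comparison on a lexical view, where only torture-level suffering
# counts and happy lives add nothing. All numbers are hypothetical.
population = 10**12  # assumed future population size

scenarios = {
    # name: (probability, fraction of the population tortured)
    "hell_plus_billions_more_happy": (0.01, 1e-9),
    "one_in_a_thousand_tortured": (0.01, 1e-3),
}

for name, (prob, fraction) in scenarios.items():
    expected_torture = prob * fraction * population
    print(name, expected_torture)
# With comparable probabilities, the second scenario involves ~10^6
# times more expected torture, so a lexical view spends essentially
# all of its decision-influence there.
```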
Ah, in the quote I took, I thought you were comparing s-risks to x-risks where the good is lost when giving non-negligible credence to non-negative views, but you’re comparing s-risks to far worse s-risks (x-risk-scale s-risks). I misread; my mistake.