Rather than capping shrimp welfare ranges at 0.1 (relative to humans), I would instead exclude the undiluted experience model, cap welfare ranges (for both chickens and shrimp) at around 1, or report welfare range model-conditional estimates. I think the equality-based model is not implausible, and many potential donors would probably give it substantial weight.
If you’re taking means, you run into the two envelopes problem, and the undiluted experience model makes it particularly bad (in general, the higher the variance in relative welfare ranges across models, the worse the problem gets). Typically, I think it’s only reasonable to treat the human welfare range as constant across models (normalizing by it and taking expected values relative to it) on the basis of agent-relative reasons, because we could otherwise hold the chicken, shrimp, or fruit fly welfare range constant instead (or use some other scale, or even multiple scales with moral uncertainty between them and an approach other than maximizing expected choiceworthiness, like a moral parliament), and you could end up with very different results under each choice.
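To make the problem concrete, here’s a minimal sketch with made-up numbers, assuming just two welfare range models given equal credence (the model names and values are purely illustrative):

```python
# Two envelopes sketch: shrimp welfare ranges under two hypothetical
# models, with equal credence in each (all numbers invented).
models = {
    "neuron_count": {"human": 1.0, "shrimp": 0.001},
    "undiluted_experience": {"human": 1.0, "shrimp": 2.0},
}

# Fix the human welfare range at 1 in every model and take the mean:
ev_shrimp_on_human_scale = sum(
    m["shrimp"] / m["human"] for m in models.values()
) / len(models)

# Fix the shrimp welfare range at 1 in every model instead:
ev_human_on_shrimp_scale = sum(
    m["human"] / m["shrimp"] for m in models.values()
) / len(models)

print(ev_shrimp_on_human_scale)  # 1.0005: shrimp roughly at parity with humans
print(ev_human_on_shrimp_scale)  # 500.25: humans worth ~500x shrimp
```

Both calculations use the same models and the same credences, but the choice of which species’ welfare range to hold fixed flips the conclusion.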
In general, I’d want to report the cost-effectiveness of each intervention separately, conditional on each welfare range model. This doesn’t commit us to any specific way of identifying units between the models (intertheoretic comparisons), to any specific approach to moral uncertainty, or even to specific probabilities for each model, and it also doubles as a sensitivity analysis with respect to the welfare range model.
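As a sketch of what that reporting could look like (the model names and cost-effectiveness numbers below are hypothetical placeholders, not anyone’s actual estimates):

```python
# Hypothetical cost-effectiveness conditional on each welfare range model,
# reported side by side rather than averaged into a single expected value.
conditional_results = {
    "neuron_count": {"SWP": 0.5, "THL": 40.0},
    "undiluted_experience": {"SWP": 900.0, "THL": 60.0},
    "equality": {"SWP": 150.0, "THL": 50.0},
}

for model, by_charity in conditional_results.items():
    line = ", ".join(f"{charity}: {value:g}/$" for charity, value in by_charity.items())
    print(f"{model} -> {line}")
```

Readers can then apply their own credences, or their preferred approach to moral uncertainty, to the per-model results.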
I’m also working on an article on this topic, but for some prior writing, see:

https://reducing-suffering.org/two-envelopes-problem-for-brain-size-and-moral-uncertainty/

Section 1.1 in https://www.openphilanthropy.org/research/update-on-cause-prioritization-at-open-philanthropy/
Thanks for your comment! I agree that there are plenty of other options that would be useful besides using the raw welfare ranges or capping shrimp’s welfare range at 0.1x humans’. Here are the results with both shrimp’s and chickens’ welfare ranges capped at 1x humans’:
Weighted hours of disabling-equivalent pain averted per dollar donated:

| Charity | 5th pct | 25th pct | 50th pct | 75th pct | 95th pct | Mean |
|---|---|---|---|---|---|---|
| SWP | 2.234e-04 | 4.837e+00 | 6.347e+01 | 4.592e+02 | 4.776e+03 | 1155.78 |
| THL | 2.301e+01 | 1.023e+03 | 2.935e+03 | 6.537e+03 | 1.595e+04 | 4811.45 |
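For readers who want to replicate the capping step, here’s a minimal sketch. The distributions and parameters below are arbitrary placeholders rather than the actual simulation; only the capping step (np.minimum against 1.0) is the point:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Placeholder draws standing in for welfare ranges sampled across models
# (lognormals chosen arbitrarily for illustration).
shrimp_wr = rng.lognormal(mean=-3.0, sigma=2.0, size=n)
chicken_wr = rng.lognormal(mean=-1.0, sigma=1.5, size=n)

# Cap both species' welfare ranges at 1x humans':
shrimp_wr = np.minimum(shrimp_wr, 1.0)
chicken_wr = np.minimum(chicken_wr, 1.0)

# Placeholder unweighted hours of disabling-equivalent pain averted per dollar.
shrimp_hours = rng.lognormal(mean=8.0, sigma=1.5, size=n)
weighted = shrimp_wr * shrimp_hours

print(np.percentile(weighted, [5, 25, 50, 75, 95]))
print(weighted.mean())
```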
Breaking out the cost-effectiveness results conditional on each welfare range model (or conditional on including/excluding the undiluted experience model) would be fantastic, but is probably outside the scope of what I have time to do.
I don’t really understand your middle paragraph. Can you elaborate on what you mean by “agent-relative reasons”? I do understand the issue whereby which welfare range is taken to be constant can drive the outcome of an EV calculation. But I *think* that only ends up changing the results if your unit for welfare range is based on one of the animals in the comparison? I think I’d get identical conclusions if I took fruit flies’ welfare range to be the constant instead of humans’; it would only change if I used chickens’ or shrimp’s as the constant. And I’m not trying to take EVs across multiple moral systems; I’m holding the moral system constant and taking EVs across different estimates of chickens’ and shrimp’s capacities for realizing welfare, which seems like it avoids some further pitfalls.
Thanks for doing and sharing these calculations!

If you used fruit flies (or other very small-brained animals) as having a constant welfare range and included the neuron count model as a possibility, then the neuron count model would skew the expected values towards favouring chickens. This post illustrates the effect, although its model assumptions are different (conscious subsystems is an empirical/descriptive hypothesis about the number of moral patients conditional on each theory of consciousness and welfare, not a model or theory of consciousness and welfare per moral patient): https://forum.effectivealtruism.org/posts/vbhoFsyQmrntru6Kw/do-brains-contain-many-conscious-subsystems-if-so-should-we
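A toy version of that skew (all welfare ranges invented; two models with equal weight):

```python
# Hypothetical welfare ranges (humans = 1) under two models.
models = {
    "neuron_count": {"human": 1.0, "chicken": 2e-3, "fruit_fly": 2e-6},
    "equality": {"human": 1.0, "chicken": 1.0, "fruit_fly": 1.0},
}

def expected_chicken_welfare_range(constant_species):
    # Rescale each model so constant_species has welfare range 1, then
    # average chicken's rescaled welfare range with equal model weights.
    values = [m["chicken"] / m[constant_species] for m in models.values()]
    return sum(values) / len(values)

print(expected_chicken_welfare_range("human"))      # (0.002 + 1) / 2 = 0.501
print(expected_chicken_welfare_range("fruit_fly"))  # (1000 + 1) / 2 = 500.5
```

With humans held constant, the equality model dominates the expectation; with fruit flies held constant, the neuron count model dominates, and chickens come out enormously valuable relative to fruit flies.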
By agent-relative reasons, I just mean that each agent could have reasons to fix their own welfare range without similar reasons to fix others’ welfare ranges. For example, maybe you do ethics by projecting and extrapolating from how you value your own experiences, which you assume to be fixed in value.
I think different theories of welfare (including different theories of hedonistic welfare), captured by the different models for welfare ranges, are effectively different normative/moral theories. So, if you do take expected values across them, you are taking expected values across different normative/moral stances.
Thanks! I appreciate the response