Thanks for your comment! I agree that there are plenty of options that would be useful other than using the raw welfare ranges or using the welfare ranges with a cap on shrimp at 0.1x humans’ level. Here are the results with a cap on both shrimp’s and chickens’ welfare ranges at 1x humans’:
Summary statistics: weighted hours of disabling-equivalent pain averted per dollar donated

SWP: 5th, 25th, 50th, 75th, 95th percentiles: [2.234e-04, 4.837e+00, 6.347e+01, 4.592e+02, 4.776e+03]; mean: 1155.78
THL: 5th, 25th, 50th, 75th, 95th percentiles: [2.301e+01, 1.023e+03, 2.935e+03, 6.537e+03, 1.595e+04]; mean: 4811.45
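In case it's helpful, the cap amounts to clipping each Monte Carlo sample of the welfare ranges before the weighting step. Here's a minimal sketch of that operation; the distributions below are made-up placeholders, not the actual samples from my model:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Placeholder distributions standing in for the model's Monte Carlo samples:
# welfare ranges relative to humans (humans = 1) and hours of
# disabling-equivalent pain averted per dollar for each charity.
shrimp_range = rng.lognormal(mean=-3.0, sigma=2.0, size=n)
chicken_range = rng.lognormal(mean=-1.0, sigma=1.5, size=n)
swp_hours = rng.lognormal(mean=7.0, sigma=2.0, size=n)
thl_hours = rng.lognormal(mean=8.0, sigma=1.5, size=n)

# Cap both species' welfare ranges at 1x humans' before weighting.
shrimp_capped = np.minimum(shrimp_range, 1.0)
chicken_capped = np.minimum(chicken_range, 1.0)

# Weight the hours averted by the (capped) welfare ranges and summarize.
for name, weighted in [("SWP", swp_hours * shrimp_capped),
                       ("THL", thl_hours * chicken_capped)]:
    pcts = np.percentile(weighted, [5, 25, 50, 75, 95])
    print(f"{name}: percentiles {pcts}; mean {weighted.mean():.2f}")
```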
Breaking out the cost-effectiveness results conditional on each welfare range model (or conditional on including/excluding the undiluted experience model) would be fantastic, but is probably outside the scope of what I have time to do.
I don’t really understand your middle paragraph. Can you elaborate on what you mean by “agent-relative reasons?” I do understand the issue whereby which welfare range is taken to be constant can drive the outcome of an EV calculation. But I *think* that only ends up changing the results if your unit for welfare range is based on one of the animals in the comparison? I think I’d get identical conclusions if I took fruit flies’ welfare range to be the constant instead of humans’; it would only change if I used chickens’ or shrimp’s as the constant. And I’m not trying to take EVs across multiple moral systems; I’m holding the moral system constant and taking EVs across different estimates of chickens’ and shrimp’s capacities for realizing welfare, which seems like it avoids some further pitfalls.
If you used fruit flies (or other very small-brained animals) as having a constant welfare range and included the neuron count model as a possibility, then the neuron count model would skew the expected values towards favouring chickens. This post illustrates the point, although the model assumptions are different (conscious subsystems is an empirical/descriptive hypothesis about the number of moral patients conditional on each theory of consciousness and welfare, not a model or theory of consciousness and welfare per moral patient):
https://forum.effectivealtruism.org/posts/vbhoFsyQmrntru6Kw/do-brains-contain-many-conscious-subsystems-if-so-should-we
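To make the skew concrete, here's a toy calculation (all numbers invented) with two equally weighted welfare-range models, one of them neuron-count-like, comparing the expected chicken:shrimp ratio under different choices of which animal's range is held constant:

```python
# Two equally weighted toy models of welfare ranges (humans = 1).
# The second is neuron-count-like: ranges shrink steeply with brain size.
models = [
    {"human": 1.0, "chicken": 0.5, "shrimp": 0.3, "fly": 0.1},
    {"human": 1.0, "chicken": 1e-2, "shrimp": 1e-4, "fly": 1e-6},
]

def ev(animal, constant):
    """Expected welfare range of `animal` with `constant`'s range fixed at 1."""
    return sum(m[animal] / m[constant] for m in models) / len(models)

for constant in ("human", "fly"):
    ratio = ev("chicken", constant) / ev("shrimp", constant)
    print(f"constant = {constant}: E[chicken] / E[shrimp] = {ratio:.1f}")

# constant = human: 0.255 / 0.15005 ≈ 1.7
# constant = fly:   5002.5 / 51.5   ≈ 97
# Fixing the fly's range lets the neuron-count-like model dominate the
# expectations, skewing the comparison heavily towards chickens.
```

If the fly:human ratio were the same in every model under consideration, the two normalizations would agree up to a constant factor, which I take to be the case your comment has in mind; including the neuron count model breaks that.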
By agent-relative reasons, I just mean each agent could have some reasons to fix their own welfare range without similar reasons to fix others’ welfare ranges. For example, maybe you do ethics by projecting and extrapolating how you value your own experiences, which you assume to be fixed in value.
I think different theories of welfare (including different theories of hedonistic welfare), captured by the different models for welfare ranges, are effectively different normative/moral theories. So, if you do take expected values across them, you are taking expected values across different normative/moral stances.
Thanks for doing and sharing these calculations!
Thanks! I appreciate the response