If you treated fruit flies (or other very small-brained animals) as having a constant welfare range and included the neuron count model as a possibility, then the neuron count model would skew the expected values towards favouring chickens. This post illustrates the effect, although the model assumptions are different (conscious subsystems is an empirical/descriptive hypothesis about the number of moral patients conditional on each theory of consciousness and welfare, not a model or theory of consciousness and welfare per moral patient):
https://forum.effectivealtruism.org/posts/vbhoFsyQmrntru6Kw/do-brains-contain-many-conscious-subsystems-if-so-should-we
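To see why the skew happens, here is a toy sketch with entirely hypothetical numbers: if the fruit fly's welfare range is fixed at 1, then even a small credence in a neuron-count model lets the chicken's much larger neuron count dominate the expected welfare range.

```python
# Toy illustration (all numbers hypothetical) of how a neuron-count model
# skews expected welfare ranges when the fruit fly's range is fixed at 1.

# Rough order-of-magnitude neuron counts (assumed for illustration).
FLY_NEURONS = 1e5
CHICKEN_NEURONS = 2e8

# Two candidate models with illustrative credences:
#  - "equal": the chicken shares the fly's welfare range of 1
#  - "neuron_count": welfare range scales linearly with neuron count
models = {
    "equal": {"p": 0.9, "chicken_range": 1.0},
    "neuron_count": {"p": 0.1, "chicken_range": CHICKEN_NEURONS / FLY_NEURONS},
}

# Expected welfare range for the chicken across the two models.
expected_chicken_range = sum(m["p"] * m["chicken_range"] for m in models.values())
print(expected_chicken_range)  # 0.9 * 1 + 0.1 * 2000 = 200.9
```

Even with 90% credence in the equal-ranges model, the expected value lands near 200 times the fly's range, so the neuron-count model drives the result.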
By agent-relative reasons, I just mean each agent could have some reasons to fix their own welfare range without similar reasons to fix others' welfare ranges. For example, maybe you do ethics by projecting and extrapolating from how you value your own experiences, which you assume to be fixed in value.
I think different theories of welfare (including different theories of hedonistic welfare), captured by the different models for welfare ranges, are effectively different normative/moral theories. So, if you do take expected values across them, you are taking expected values across different normative/moral stances.
Thanks for doing and sharing these calculations!
Thanks! I appreciate the response