A very important factor here is that you give chickens an expected moral weight more than twice that of humans.
This is not just an artifact of your particular model; it is intrinsic to Muehlhauser’s estimate: a 10% chance of being above 10 (and a 100% chance of being positive) necessarily implies a mean above 1. Together with his 80% chance that chickens are moral patients, this gives a conversion factor of at least 0.8.
...which seems extremely high. I don’t know anybody who’d agree with this.
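The “mean above 1” step can be sanity-checked numerically: for any non-negative X with P(X ≥ 10) = 0.1, Markov’s inequality gives E[X] ≥ 10 × 0.1 = 1. A minimal sketch (the distribution shapes are made up for illustration):

```python
import random

random.seed(0)

def mean_with_10pct_above_10(low_sampler):
    # 90% of the mass comes from low_sampler (values in [0, 10)),
    # 10% of the mass sits exactly at 10 -- the most mean-reducing case.
    samples = [low_sampler() for _ in range(90_000)] + [10.0] * 10_000
    return sum(samples) / len(samples)

# Worst case: the bottom 90% of the distribution is pinned at 0.
worst_case_mean = mean_with_10pct_above_10(lambda: 0.0)   # exactly 1.0
# A less extreme shape still lands well above 1.
typical_mean = mean_with_10pct_above_10(lambda: random.uniform(0.0, 10.0))
```

Even in the worst case the mean cannot drop below 1, so any distribution honoring the 10%-above-10 guess yields a mean moral weight of at least 1.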
Good flag! :) Fwiw, it looks like rerunning the analysis with the relative bounds on chicken moral worth being a ten-billionth to a thousandth of a human, rather than a twenty-thousandth to 10 humans, still outputs a mean cost-effectiveness ratio of CCCW to MIF of ~1.3.
So though it is a pretty significant factor, choosing different values there seems unlikely, by itself, to directionally change the output.
I also don’t think that an expected moral weight of more than twice a human’s is that intrinsic to Muehlhauser’s numbers. It seems more like an artifact of fitting a log-normal distribution to that confidence interval.
But I also think this is all somewhat beside the point of what could really be at play:
It’s unclear that one can compare the “near-termist, human-centric” worldview to the “near-termist, animal-centric” worldview by just putting them on the same metric and crunching the EV. And further, I don’t think subscribers to the “near-termist, human-centric” worldview will be swayed much (potentially at all) by that kind of analysis.
So idk, and I am always confused by this, but when I thought about this more a few years ago, I personally concluded the decision framework might be more along the lines of: how much credence do I give to “near-term human-centric”, and according to it, how good is MIF? And how much credence do I give to “near-term animal-centric”, and according to it, how good is CCCW? That is closer to how one gets at how one ought to allocate funds across them.
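The bucketed framework described above can be sketched in code (all credences, budgets, and scores here are made-up placeholders, not anyone’s actual numbers):

```python
# Approach 2 from the paragraph above: allocate the budget in proportion to
# worldview credence, letting each worldview pick its own best option.
credences = {"near-term human-centric": 0.7, "near-term animal-centric": 0.3}
best_option = {"near-term human-centric": "MIF", "near-term animal-centric": "CCCW"}

budget = 100_000
allocation = {best_option[w]: budget * c for w, c in credences.items()}
# -> {"MIF": 70000.0, "CCCW": 30000.0}

# Approach 1, for contrast: a single cross-worldview EV comparison funges
# everything into whichever option scores higher on the shared metric.
ev = {"MIF": 1.0, "CCCW": 1.3}        # e.g. the ~1.3 ratio mentioned earlier
winner = max(ev, key=ev.get)          # all funds go to "CCCW"
```

The contrast is the point: approach 1 sends the entire budget to one side on a small EV edge, while approach 2 splits it according to how seriously one takes each worldview.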
> I also don’t think that an expected moral weight of more than twice a human’s is that intrinsic to Muehlhauser’s numbers. It seems more like an artifact of fitting a log-normal distribution to that confidence interval.
Note I used a loguniform distribution, not a lognormal (which would result in a mean of 1.50 k). In addition, normal, uniform, and logistic distributions would lead to a mean of 4.00.
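To see how much the distribution choice matters, here is a rough sketch (not a reproduction of the actual analysis; the interval of 5e-5 to 10 is an assumption for illustration) comparing the means of a log-uniform and a lognormal fitted to the same two quantiles:

```python
import math

a, b = 5e-5, 10.0                     # assumed 10th/90th percentiles
log_a, log_b = math.log(a), math.log(b)

# Log-uniform on (a, b): mean = (b - a) / (ln b - ln a).
loguniform_mean = (b - a) / (log_b - log_a)          # ~0.8

# Lognormal with the same 10th/90th percentiles:
z90 = 1.2816                          # 90th percentile of a standard normal
mu = (log_a + log_b) / 2
sigma = (log_b - log_a) / (2 * z90)
lognormal_mean = math.exp(mu + sigma ** 2 / 2)       # ~1.9e3
```

Under these assumptions the lognormal’s heavy right tail pushes the mean into the thousands (the same ballpark as the 1.50 k figure above) while the log-uniform stays below 1: the huge mean comes from the tail, not from the interval itself.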
> It’s unclear that one can compare the “near-termist, human-centric” worldview to the “near-termist, animal-centric” worldview by just putting them on the same metric and crunching the EV. And further, I don’t think subscribers to the “near-termist, human-centric” worldview will be swayed much (potentially at all) by that kind of analysis.
Assuming total hedonic utilitarianism (classical utilitarianism) carries most of the weight amongst the various possible moral theories, I would say one can compare the experiences of humans with those of non-human animals.
I discussed concerns about calculating expected moral weights at length here.
> So idk, and I am always confused by this, but when I thought about this more a few years ago, I personally concluded the decision framework might be more along the lines of: how much credence do I give to “near-term human-centric”, and according to it, how good is MIF? And how much credence do I give to “near-term animal-centric”, and according to it, how good is CCCW? That is closer to how one gets at how one ought to allocate funds across them.
Here, the moral weight is implicitly assumed to represent the weighted mean of the moral weight distributions of the various theories of consciousness. These are, in turn, supposed to produce (summable) moral weight distributions in QALY/cQALY.
I think the ability to give credences to different views implies that they are somehow comparable with respect to an idealised truth, since a credence is essentially the probability of a view being true. I think of the moral weight as representing the knowledge about both the “near-term human-centric” and “near-term animal-centric” views. One cannot reasonably be confident that the latter has a very low credence, and therefore the 90th percentile of the moral weight distribution will tend to be close to 1, which implies a mean moral weight close to 1.
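The last step (a 90th percentile near 1 forcing the mean toward 1) can be illustrated with a log-uniform example; the lower bound of 1e-5 is an arbitrary assumption:

```python
import math

a = 1e-5                               # assumed lower bound (hypothetical)
# Choose the upper bound b so the 90th percentile is exactly 1:
# ln(q90) = ln(a) + 0.9 * (ln(b) - ln(a)) = 0  =>  ln(b) = -(0.1 / 0.9) * ln(a)
log_b = -(0.1 / 0.9) * math.log(a)
b = math.exp(log_b)

q90 = math.exp(math.log(a) + 0.9 * (log_b - math.log(a)))   # = 1 by construction
mean = (b - a) / (log_b - math.log(a))                       # ~0.3
```

More generally, Markov’s inequality alone gives E[X] ≥ 1 × P(X ≥ 1) = 0.1 whenever the 90th percentile is 1, so the mean sits within roughly an order of magnitude of 1 regardless of the distribution’s shape.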
Yeah, I agree, dividing it by e.g. 1000 would only make a 10,000 ratio into 10.
> I also don’t think that an expected moral weight of more than twice a human’s is that intrinsic to Muehlhauser’s numbers. It seems more like an artifact of fitting a log-normal distribution to that confidence interval.
The particular value is a result of the log-uniform distribution, but any distribution conforming to Muehlhauser’s confidence interval will give a mean in this neighborhood (i.e. at most ~3 times smaller).
This is a good point, thanks. Note that I have fitted 6 types of distributions to Muehlhauser’s guesses for various species here, and concluded that:
> The mean moral weight is close to 1 for all the considered species, ranging from 0.5 to 5 excluding the lognormal and Pareto distributions (for which it is even higher, but seemingly inaccurate).
I think a value close to 1 is not unreasonable. As described here, CE’s Weighted Animal Welfare Index total welfare score and probability of feeling pain imply the badness of the conditions of laying hens in CC, in -QALY/cyear, is about 10% of what my assumptions imply. One order of magnitude is not much considering the large uncertainty involved: the 95th percentile of the moral weight distribution I used is about 1 million times as large as the 5th percentile.
Moreover, I do not think we know enough about consciousness to confidently say that the moral weight cannot be larger than 1. As a result, for the reasons you mentioned, the mean moral weight will tend to be close to 1.