Fwiw, it looks like rerunning the analysis with the relative bounds on chicken moral worth being a ten-billionth to a thousandth of a human, rather than a twenty-thousandth to 10 humans, still outputs a mean cost-effectiveness ratio of CCCW to MIF of ~1.3.
So though it is a pretty significant factor, choosing different values there seems unlikely by itself to directionally change the output.
I also don't think that an expected moral weight of more than twice that of a human is that intrinsic to Muehlhauser's numbers. It seems more like an artifact of fitting a log-normal distribution to that confidence interval.
But I also think this is all somewhat beside the deeper issue that could really be at play:
It's unclear that one can compare the "near-termist, human-centric" worldview to the "near-termist, animal-centric" worldview just by putting them on the same metric and then crunching the EV. And further, I don't think subscribers to the "near-termist, human-centric" worldview will be swayed much (potentially at all) by analysis like that.
So idk, and I am always confused by this, but when I thought about this more a few years ago, I came to think the decision framework might be more along the lines of: how much credence do I give to "near-term human-centric", and according to it, how good is MIF? And how much credence do I give to "near-term animal-centric", and according to it, how good is CCCW? That is more how one gets at how one ought to allocate funds across them.
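To make that concrete, here is a toy sketch of how such a credence-weighted allocation could look in code. This is purely my illustration: the credences, the "top charity" picks, and the proportional-split rule are all invented, not anything from the actual analysis.

```python
# Toy sketch of the credence-weighted framework above. All credences and
# the allocation rule itself are invented for illustration.
credence = {
    "near-term human-centric": 0.7,
    "near-term animal-centric": 0.3,
}

# Each worldview rates charities on its *own* metric, so no cross-worldview
# moral-weight conversion is needed.
top_charity = {
    "near-term human-centric": "MIF",
    "near-term animal-centric": "CCCW",
}

# One simple rule: split the budget in proportion to credence and give each
# worldview's share to the charity it rates highest.
budget = 100_000
allocation = {top_charity[view]: credence[view] * budget for view in credence}
print(allocation)  # {'MIF': 70000.0, 'CCCW': 30000.0}
```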
> I also don't think that an expected moral weight of more than twice that of a human is that intrinsic to Muehlhauser's numbers. It seems more like an artifact of fitting a log-normal distribution to that confidence interval.
Note that I used a loguniform distribution, not a lognormal (which would result in a mean of 1.50 k). In addition, normal, uniform, and logistic distributions would all lead to a mean of 4.00.
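For what it's worth, here is a minimal sketch (not the code behind the analysis) of how much the choice of fit matters. I am assuming the twenty-thousandth-to-10 interval quoted above, interpreted as a 90% interval; the bounds and percentile interpretation in the underlying model may differ, so these outputs will not necessarily match the exact figures quoted here.

```python
# Sketch: mean of a loguniform vs a lognormal fit to the same 90% interval.
# The bounds below are an assumption taken from the interval quoted upthread.
import math

a, b = 1 / 20_000, 10  # assumed 5th/95th percentiles of chicken/human moral weight

# Loguniform on [a, b]: density proportional to 1/x, so the mean is
# (b - a) / ln(b / a).
loguniform_mean = (b - a) / math.log(b / a)

# Lognormal with a and b as 5th/95th percentiles (z = 1.645):
# mean = exp(mu + sigma^2 / 2), which the heavy right tail pushes far
# above the loguniform mean.
mu = (math.log(a) + math.log(b)) / 2
sigma = (math.log(b) - math.log(a)) / (2 * 1.645)
lognormal_mean = math.exp(mu + sigma**2 / 2)

print(f"loguniform mean: {loguniform_mean:.3g}")
print(f"lognormal mean:  {lognormal_mean:.3g}")
```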
> It's unclear that one can compare the "near-termist, human-centric" worldview to the "near-termist, animal-centric" worldview just by putting them on the same metric and then crunching the EV. And further, I don't think subscribers to the "near-termist, human-centric" worldview will be swayed much (potentially at all) by analysis like that.
Assuming total hedonic utilitarianism (classical utilitarianism) carries most of the weight amongst the various possible moral theories, I would say one can compare the experiences of humans with those of non-human animals.
I discussed concerns about calculating expected moral weights at length here.
> So idk, and I am always confused by this, but when I thought about this more a few years ago, I came to think the decision framework might be more along the lines of: how much credence do I give to "near-term human-centric", and according to it, how good is MIF? And how much credence do I give to "near-term animal-centric", and according to it, how good is CCCW? That is more how one gets at how one ought to allocate funds across them.
Here, the moral weight is implicitly assumed to represent the weighted mean of the moral weight distributions of the various theories of consciousness. These are, in turn, supposed to produce (summable) moral weight distributions in QALY/cQALY.
I think the ability to give credences to different views implies that they are somehow comparable with respect to an idealised truth, since a credence is essentially the probability of a view being true. I think of the moral weight as representing the knowledge about both the "near-term human-centric" and "near-term animal-centric" views. One cannot reasonably be confident that the latter has a very low credence, and therefore the 90th percentile of the moral weight distribution will tend to be close to 1, which implies a mean moral weight close to 1.
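As a toy illustration of that last step (my own numbers, chosen purely to show the shape of the argument): if the moral weight is a credence-weighted mixture of a human-centric view (chicken weight near 0) and an animal-centric view (weight near 1), the mean tracks the credence on the latter, so it stays within an order of magnitude of 1 unless that credence is confidently tiny.

```python
# Toy two-view mixture: mean moral weight as a function of the credence
# given to the animal-centric view. All numbers are illustrative only.
WEIGHT_IF_ANIMAL_CENTRIC = 1.0   # assumed chicken/human weight under that view
WEIGHT_IF_HUMAN_CENTRIC = 1e-4   # assumed near-zero weight under that view

for credence in (0.5, 0.2, 0.1):
    mean_weight = (credence * WEIGHT_IF_ANIMAL_CENTRIC
                   + (1 - credence) * WEIGHT_IF_HUMAN_CENTRIC)
    print(f"credence {credence:.1f} -> mean weight {mean_weight:.3f}")
```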
Yeah, I agree, dividing it by e.g. 1000 would only make a 10,000 ratio into 10.
> I also don't think that an expected moral weight of more than twice that of a human is that intrinsic to Muehlhauser's numbers. It seems more like an artifact of fitting a log-normal distribution to that confidence interval.
The particular value is a result of the loguniform distribution, but any distribution conforming to Muehlhauser's confidence interval will give a mean in this neighborhood (i.e. at most ~3 times smaller).
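A quick way to see this (again my own sketch, with the same assumed interval as above): any symmetric family (normal, uniform, logistic) fit so that the interval endpoints sit at symmetric percentiles has its mean at the midpoint (a + b) / 2, while the loguniform mean is (b - a) / ln(b / a), so the gap is a small single-digit factor rather than orders of magnitude. The exact factor depends on the bounds actually used in the model.

```python
# Sketch: means of several distributions fit to the same 90% interval [a, b].
# Bounds are assumed, as above; the model's actual bounds may differ.
import math

a, b = 1 / 20_000, 10  # assumed 5th/95th percentiles

# Normal, uniform, and logistic fits with a and b as symmetric percentiles
# are all symmetric about the midpoint, so their mean is (a + b) / 2.
symmetric_mean = (a + b) / 2

# Loguniform mean, for comparison.
loguniform_mean = (b - a) / math.log(b / a)

print(f"symmetric-family mean: {symmetric_mean:.3g}")
print(f"loguniform mean:       {loguniform_mean:.3g}")
print(f"ratio: {symmetric_mean / loguniform_mean:.1f}x")
```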
Good flag! :)