I also don't think that an expected moral weight of more than twice that of a human is intrinsic to Muehlhauser's numbers. It seems more like an artifact of fitting a log-normal distribution to that confidence interval.
Note I used a loguniform distribution, not a lognormal (which would result in a mean of 1.50 k). In addition, normal, uniform, and logistic distributions would all lead to a mean of 4.00.
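To illustrate how much the choice of distribution matters, here is a rough sketch (not the original calculation) of the means implied by different distributions fitted to the same 90% confidence interval. The bounds below are placeholders rather than the actual interval, so the outputs will not match the figures above.

```python
import numpy as np
from scipy import stats

# Hypothetical 90% confidence interval for the moral weight of chickens
# relative to humans; placeholder bounds, not the interval used above.
lower, upper = 1e-4, 10.0

# Loguniform between the two bounds.
loguniform_mean = stats.loguniform(lower, upper).mean()

# Lognormal with the same bounds as the 5th and 95th percentiles.
z = stats.norm.ppf(0.95)  # ~1.645
mu = (np.log(lower) + np.log(upper)) / 2
sigma = (np.log(upper) - np.log(lower)) / (2 * z)
lognormal_mean = stats.lognorm(s=sigma, scale=np.exp(mu)).mean()

# Symmetric distributions (normal, uniform, logistic) fitted to the same
# interval all have their mean at its midpoint.
symmetric_mean = (lower + upper) / 2

print(f"loguniform mean: {loguniform_mean:.3f}")
print(f"lognormal mean:  {lognormal_mean:.3f}")
print(f"symmetric mean:  {symmetric_mean:.3f}")
```

The point is only that a heavy right tail (lognormal) pulls the mean up by orders of magnitude relative to a loguniform over the same interval, while symmetric fits land at the midpoint.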
It's unclear that one can compare the "near-termist, human-centric" worldview with the "near-termist, animal-centric" worldview just by putting them on the same metric and then crunching the EV. Further, I don't think subscribers to the "near-termist, human-centric" worldview will be swayed much (potentially at all) by analysis like that.
Assuming total hedonic utilitarianism (classical utilitarianism) carries most of the weight amongst the various possible moral theories, I would say one can compare the experiences of humans with those of non-human animals.
I discussed concerns about calculating expected moral weights at length here.
So idk, and I am always confused by this, but when I thought about this more a few years ago, I personally concluded the decision framework might be more along the lines of: how much credence do I give to "near-term human-centric", and according to it how good is MIF? How much credence do I give to "near-term animal-centric", and according to it how good is CCCW? That is more how one gets at how one ought to allocate funds across them.
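As a toy sketch of the contrast (all numbers are made up, and the intervention names are just labels for MIF and CCCW as discussed above):

```python
# Credences in each worldview (hypothetical).
credences = {"near-term human-centric": 0.6, "near-term animal-centric": 0.4}

# Cost-effectiveness of the best intervention *within* each worldview,
# in that worldview's own units (hypothetical numbers).
best_option = {
    "near-term human-centric": ("MIF", 1.0),
    "near-term animal-centric": ("CCCW", 5.0),
}

# Credence-based approach: give each worldview a share of the funds in
# proportion to its credence, and let it fund its own best option.
allocation = {best_option[w][0]: credences[w] for w in credences}
print("credence-proportional allocation:", allocation)

# Single-metric alternative (the approach questioned above): convert the
# animal-centric estimate into human-centric units via a moral weight and
# maximise expected value across both options.
moral_weight = 2.0  # hypothetical conversion factor
ev = {
    "MIF": best_option["near-term human-centric"][1],
    "CCCW": best_option["near-term animal-centric"][1] * moral_weight,
}
print("single-metric EV winner:", max(ev, key=ev.get))
```

The first approach splits funding across worldviews; the second puts everything behind whichever option wins after the conversion.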
Here, the moral weight is implicitly assumed to represent the weighted mean of the moral weight distributions of the various theories of consciousness. These are, in turn, supposed to produce (summable) moral weight distributions in QALY/cQALY.
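A minimal sketch of that aggregation, assuming hypothetical credences and per-theory moral weight distributions (none of these numbers are from the analysis):

```python
from scipy import stats

# Hypothetical credences given to a few theories of consciousness.
credences = [0.5, 0.3, 0.2]

# Hypothetical moral weight distribution (in QALY/cQALY) implied by each theory.
theories = [
    stats.loguniform(1e-4, 1),
    stats.loguniform(1e-2, 2),
    stats.loguniform(1e-1, 10),
]

# The overall moral weight distribution is the credence-weighted mixture of
# the per-theory distributions, so its mean is the credence-weighted mean
# of the per-theory means.
overall_mean = sum(c * t.mean() for c, t in zip(credences, theories))
print(f"credence-weighted mean moral weight: {overall_mean:.3f}")
```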
I think the ability to give credences to different views implies that they are somehow comparable with respect to an idealised truth, since a credence is essentially the probability of a view being true. I think of the moral weight as representing the knowledge about both the "near-term human-centric" and "near-term animal-centric" views. I think one cannot reasonably be confident that the latter has a very low credence, and therefore the 90th percentile of the moral weight distribution will tend to be close to 1, which implies a mean moral weight close to 1.
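A quick numerical check of that last step, under the assumption of a loguniform moral weight distribution with its 90th percentile fixed at 1 (the lower bounds below are placeholders):

```python
import numpy as np
from scipy import stats

# Fix the 90th percentile at 1 and let the lower bound shrink; solve for
# the upper bound from 0.1 * ln(lower) + 0.9 * ln(upper) = ln(1) = 0.
for lower in [1e-3, 1e-6, 1e-9, 1e-12]:
    upper = np.exp(-np.log(lower) / 9)
    dist = stats.loguniform(lower, upper)
    print(f"lower={lower:.0e}  90th percentile={dist.ppf(0.9):.2f}  mean={dist.mean():.2f}")
```

Even as the lower bound spans many orders of magnitude, the mean stays within an order of magnitude of 1.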