Thanks for the comment, and for introducing me to that post some weeks ago!
1- Note that I only made the Guesstimate model for illustration purposes, and to allow people to choose their own inputs. I am only using the Google Colab program and this Sheets to obtain the post results.
2- I do not find it odd that the moral weight of poultry birds relative to humans is as likely to be smaller than 2*10^-5 (5th percentile) as to be larger than 20 (95th percentile).
3- I tend to think distributions are more informative because they allow us to calculate the expected (mean) results. It would be possible to compute the mean moral weight from point estimates, but I think it makes more sense to assume a continuous probability density function.
4- I do not understand why calculating the expected value of moral weights is problematic. Brian mentions that:
We could naively try to compute expected utility and say that the expected value of creating two elephants is 50% * f1(two elephants) + 50% * f2(two elephants) = 50% * 1/2 + 50% * 2 = 1.25, which is greater than the expected value of 1 for creating the human. However, this doesn't work the way it did in the case of a single utility function, because utility functions can be rescaled arbitrarily, and there's no "right" way to compare different utility functions. For example, the utility function 1000 * f1 is equivalent to the utility function f1, since both utility functions imply the same behavior for a utilitarian. However, if we use 1000 * f1 instead of f1, our naive expected-value calculation now favors the human.
I do not see why "utility functions can be rescaled arbitrarily". For the above case, I would say replacing f1 by 1000 f1 is not reasonable, because it is equivalent to increasing the weight of f1 from 50% (= 1/(1 + 1)) to 99.9% (= 1000/(1000 + 1)).
Why is increasing the weight of f1 this much unreasonable?
In my view, the weights of f1 and f2 depend on how much we trust f1 and f2, and therefore they are not arbitrary:
If we had absolutely no idea about which function to trust more, giving the same weight to each of the functions (i.e. 50%) would seem intuitive.
In order to increase the weight of f1 from 50% to 99.9%, we would need new information updating us towards trusting f1 much more than f2.
If we had started with 1000 f1 rather than f1 in the first place, then switching it to f1 would seem to give f1 (or 1000 f1, or whatever) too little weight relative to f2, right?
Right, for f3 = 1000 f1, we would need some kind of information to change the weight of f3 from 50% (= 1/(1 + 1)) to 0.1% (= 0.001/(0.001 + 1)).
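For concreteness, here is a minimal sketch of the weight arithmetic used in this exchange; it adds nothing beyond the 1/(1 + 1), 1000/(1000 + 1) and 0.001/(0.001 + 1) calculations above:

```python
# Minimal sketch of the weight arithmetic above: if f1 is rescaled by a factor k
# while f2 is left unchanged, the implicit weight of f1 becomes k/(k + 1).
def implicit_weight_of_f1(k: float) -> float:
    """Weight of k*f1 relative to f2, assuming weights proportional to the scale factors."""
    return k / (k + 1)

print(implicit_weight_of_f1(1))      # 0.5    -> 50%
print(implicit_weight_of_f1(1000))   # ~0.999 -> 99.9%
print(implicit_weight_of_f1(0.001))  # ~0.001 -> 0.1%
```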
Note that I do not think the starting functions are arbitrary. For the analysis of this post, for example, each function would represent a distribution for the moral weight of poultry birds relative to humans in QALY/pQALY, under a given theory.
In addition, to determine an overall moral weight given 2 distributions for the moral weight, MWA and MWB, I would weight them by the reciprocal of their variances (based on this analysis by Dario Amodei):
MW = (MWA/V(MWA) + MWB/V(MWB))/(1/V(MWA) + 1/V(MWB)).
Having this in mind, the higher the uncertainty of MWA relative to that of MWB, the smaller the weight of MWA.
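A rough sketch of this inverse-variance pooling, assuming NumPy and two made-up lognormal distributions standing in for MWA and MWB (not the distributions used in the post):

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up samples standing in for two moral weight distributions (illustrative only).
mwa = rng.lognormal(mean=-1.0, sigma=1.0, size=100_000)
mwb = rng.lognormal(mean=0.0, sigma=2.0, size=100_000)

# Inverse-variance weights, as in MW = (MWA/V(MWA) + MWB/V(MWB))/(1/V(MWA) + 1/V(MWB)).
wa, wb = 1 / mwa.var(), 1 / mwb.var()
mw = (wa * mwa + wb * mwb) / (wa + wb)  # weighted mean of the two distributions, sample by sample

print(mw.mean())  # mean of the pooled moral weight distribution
```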
Ya, we can have different intuitions about how large the chicken's relative moral weight can be. 20 is probably about the maximum I would allow, so the tail would go to more extreme values than I would personally use, and that skews the EV.
My bottom ~5% for chickens would probably be for chickens not being conscious, so 0 moral weight.
I agree. That being said, I think most informed intuitions will imply the mean negative utility of poultry living time is at least comparable to that of human life.
For example, your intuitions about the maximum moral weight might not significantly change the mean moral weight (as long as your intuitions about the intermediate quantiles are not too different from what I considered). Giving 5% weight to null moral weight, and 95% weight to the moral weight following a loguniform distribution whose minimum and maximum are the 5th and 95th percentiles I estimated above, the mean moral weight is 1.
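A sketch of the mixture just described, using the 5th and 95th percentiles above as the loguniform bounds (assuming NumPy; the resulting mean depends entirely on these inputs):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# 5% weight on a moral weight of 0, 95% weight on a loguniform distribution
# whose minimum and maximum are the 5th and 95th percentiles above.
low, high = 2e-5, 20
conscious = rng.random(n) < 0.95
loguniform = np.exp(rng.uniform(np.log(low), np.log(high), size=n))
moral_weight = np.where(conscious, loguniform, 0.0)

print(moral_weight.mean())  # mean moral weight of the 5%/95% mixture
```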
I am also curious about your reasons for setting the maximum moral weight to 20. My distribution implies a maximum of 46, which is not much larger than 20, bearing in mind that my 95th percentile is 1 M times as large as my 5th percentile.
Ah, sorry, my misunderstanding about your use of Guesstimate.
One reason that taking expectations over moral weights can be misleading is that the cases where chickens matter much less than humans may be because the human's absolute moral weight is much higher than it is in the cases where chickens have a closer-to-average moral weight relative to humans. Taking expected values of the ratio with human moral weight in the denominator treats human moral weight as fixed, and not allowed to be larger than otherwise in absolute terms. So you could be underweighting cases where humans matter much more, because those have little influence on the expected value of the ratio. I think Brian's article illustrates this, but you could also see the Felicifia thread discussion he references for more.
Another reason is that this involves genuine uncertainty over different theories of consciousness, each of which may define its own hedonic/value scale, and these scales may not be intertheoretically comparable (I expect them not to be). So it's really moral/normative uncertainty, and you need to justify intertheoretic comparisons and the units you normalize by to take expected values over moral uncertainty this way. I haven't found any such attempts persuasive, and I think they can lead to unresolvable disagreements if two groups of beings (humans and intelligent conscious aliens or conscious AI, say) fix their own moral weight and treat the other's as uncertain and relative to their own.
Regarding your 2nd reason:
A priori, I would expect any theory of consciousness to produce a mean moral weight of poultry birds relative to humans in pQALY/QALY.
Subsequently (and arguably "naively", according to Brian, Luke and probably you), I would give weights to each of the theories of consciousness, and then determine the overall moral weight distribution from the weighted mean of the moral weight distributions of the various theories of consciousness (see here; this part of the sentence was added after this reply, replacing "weighted expected moral weight").
If I understand you correctly, you do not expect the above to be possible. I would very much welcome any sources explaining why that might be the case!
A priori, I would expect any theory of consciousness to produce a mean moral weight of poultry birds relative to humans in pQALY/QALY.
I think this is probably right, as long as the theory is sufficiently quantitatively precise.
Subsequently (and "naively", according to Brian, Luke and probably you), I would give weights to each of the theories of consciousness, and then determine the weighted expected moral weight.
This treats human moral weight like it's fixed and the same across theories. That needs justification to me, since I don't see why there would be any fact of the matter for such intertheoretic comparisons, and since there are alternative choices to fix that would make different recommendations in practice, e.g. chicken moral weight, alien moral weight, sentient AI moral weight, human toe stubs, chicken torture, an ant eating sugar, and so on. I think what you're proposing is the maximizing expected choice-worthiness/choiceworthiness approach to moral uncertainty, so you could look for discussions and critiques of that. Or, just more general treatments of moral uncertainty.
Which of these do you think is problematic (I have clarified above what I would do; see 2nd bullet)?
Giving weights to each of the theories of consciousness (e.g. as I described here).
Determining the overall moral weight distribution from the weighted mean of the moral weight distributions of the various theories of consciousness.
I might not have been clear about it (if that was the case, sorry!), but:
I actually agree I cannot use expected moral weights to determine the expected negative utility of poultry living time as a fraction of the utility of human life.
Although I presented statistics for the Moral weight of poultry and Quality of the living conditions of poultry, I did not use them to obtain the results for the Negative utility of poultry living time as a fraction of the utility of human life.
I think what you're proposing is the maximizing expected choice-worthiness/choiceworthiness approach to moral uncertainty, so you could look for discussions and critiques of that. Or, just more general treatments of moral uncertainty.
Thanks, I will have a look!
Which of these do you think is problematic (I have clarified above what I would do; see 2nd bullet)?
Giving weights to each of the theories of consciousness (e.g. as I described here).
Determining the overall moral weight distribution from the weighted mean of the moral weight distributions of the various theories of consciousness.
I think the first probably makes some unverifiable and unjustified assumptions. Why normalize by the variance in particular?
It seems similar to variance voting, although variance voting normalizes by the standard deviation instead of the variance, to ensure each has variance 1 (Var(aX) = a^2 Var(X), so Var(X/√Var(X)) = Var(X)/Var(X) = 1). It is one approach to moral uncertainty, but there are others, like the parliamentary approach. Why normalize by the variance or standard deviation and not some other measure, for example?
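For illustration, a minimal sketch of the normalization step in variance voting, with made-up samples standing in for the values under two theories (assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up value samples under two theories (illustrative only).
values_t1 = rng.normal(loc=1.0, scale=5.0, size=100_000)
values_t2 = rng.normal(loc=0.2, scale=0.1, size=100_000)

# Variance voting normalizes each theory's values by its standard deviation,
# so each normalized distribution has variance 1 (since Var(aX) = a^2 Var(X)).
norm_t1 = values_t1 / values_t1.std()
norm_t2 = values_t2 / values_t2.std()

print(norm_t1.var(), norm_t2.var())  # both ~1 after normalization
```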
You are taking expected values over products of values, one of which is the moral weight, though, right?
Here is how I would think about it, with variables with units.
Mh = measured units of human welfare from the intervention, in QALYs (total, not per year or per capita, for simplicity)
Mc = measured units of chicken welfare from the intervention, in pQALYs (total, not per year or per capita, for simplicity)
Vh,t = value per measured unit of human welfare on the theory of consciousness t, in units val_t/QALY
Vc,t = value per measured unit of chicken welfare on the theory of consciousness t, in units val_t/pQALY
Vh,t/Vc,t, basically the relative moral weight of humans wrt chickens, in units pQALY/QALY
Vc,t/Vh,t, basically the relative moral weight of chickens wrt humans, in units QALY/pQALY
You're trying to calculate, where T is a random variable for the theory of consciousness,
E[Vc,TMc+Vh,TMh]=E[Vc,TMc]+E[Vh,TMh]
but first, the above means taking expectations over values with units val_t for different t, like adding values in Fahrenheit and values in Celsius (or grams), so you need to condition on a theory of consciousness T=t first. So, let's look at, for each theory t,
E[Vc,TMc|T=t]+E[Vh,TMh|T=t]=E[Vc,tMc]+E[Vh,tMh]
Then, I think you're effectively assuming Vh,t=1 and that it's unitless, and so you infer the following:
Vc,t = Vc,t/Vh,t
Vc,tMc = (Vc,t/Vh,t)Mc
But even on a fixed theory of consciousness, there could still be empirical uncertainty about Vh,t, so you shouldn't assume Vh,t is fixed.
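A minimal sketch of this point, with hypothetical lognormal values standing in for Vh,t and Vc,t (not the post's distributions), showing that once the denominator is uncertain rather than fixed, both expected ratios can exceed 1 (assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Hypothetical uncertain values per unit of welfare for humans and chickens
# (illustrative lognormals only; not the post's distributions).
v_h = np.exp(rng.normal(0.0, 2.0, size=n))  # V_h,t uncertain, not fixed at 1
v_c = np.exp(rng.normal(0.0, 2.0, size=n))  # V_c,t uncertain

ph = v_c / v_h  # moral weight of chickens relative to humans
hp = v_h / v_c  # moral weight of humans relative to chickens

# With uncertainty in the denominator, both expected ratios exceed 1 here,
# so which side is treated as the fixed reference changes the conclusion.
print(ph.mean(), hp.mean())
```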
Thanks for the reply!
I mainly wanted to understand whether you thought the simple fact of attributing weights and then calculating a weighted mean might be intrinsically problematic. Weighting the various moral weight distributions by the reciprocal of their variances is just my preferred solution. That being said:
It is coherent with a Bayesian approach (see here).
It mitigates Pascal's Mugging (search for "Pascal's Mugging refers" in this GiveWell article). This would not be the case if one used the standard deviation instead of the variance. For a distribution k X (see the sketch after this list):
The mean E(k X) is k E(X).
The variance V(k X) is k^2 V(X).
Therefore the ratio between the mean and the variance is inversely proportional to k.
The standard deviation V(k X)^0.5 is k V(X)^0.5.
Therefore the ratio between the mean and the standard deviation does not depend on k.
It facilitates the calculation of the weights (as they are solely a function of the distributions).
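A sketch of the scaling argument in the Pascal's Mugging bullet above, with an arbitrary positive distribution standing in for X (assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.lognormal(0.0, 1.0, size=1_000_000)  # arbitrary positive distribution standing in for X

for k in (1, 10, 1000):
    kx = k * x
    # mean/variance falls like 1/k, so inverse-variance weighting penalizes inflated claims;
    # mean/standard deviation is unchanged by k, so weighting by 1/std would not.
    print(k, kx.mean() / kx.var(), kx.mean() / kx.std())
```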
You are taking expected values over products of values, one of which is the moral weight, though, right?
I am calculating the mean of R = "negative utility of poultry living time as a fraction of the utility of human life" from the mean of R_PH, which is defined here.
Vh,t/Vc,t, basically the relative moral weight of chickens wrt humans, in units pQALY/QALY
I think you meant "humans wrt chickens" (not "chickens wrt humans"), as "h" is in the numerator.
Vc,t/Vh,t, basically the relative moral weight of humans wrt chickens, in units QALY/pQALY
I think you meant "chickens wrt humans" (not "humans wrt chickens"), as "c" is in the numerator.
But even on a fixed theory of consciousness, there could still be empirical uncertainty about Vh,t, so you shouldn't assume Vh,t is fixed.
Let me try to match my variables to yours, based on what I defined here:
R_PH (= R_HP), which is what I am trying to calculate, is akin to (Vc,tMc)/(Vh,tMh), not Vc,tMc+Vh,tMh.
Mc is akin to T*Q, where:
T = "poultry living time per capita (pyear/person/year)".
Q = "quality of the living conditions of poultry (-pQALY/pyear)".
Mh is akin to H = "utility of human life (QALY/person/year)".
PH = "moral weight of poultry birds relative to humans (QALY/pQALY)" is Vc,t/Vh,t.
I did not set Vh,t to 1, because my PH represents Vc,t/Vh,t, not Vc,t.
Note that if you divide a random variable with units by its variance, the result will not be unitless (it'll have the reciprocal units of the random variable), and so you would need to make sure the units match before adding. In this case, with the notation I introduced, you'd have different theory-specific units you're trying to sum across, and this wouldn't work. Dividing by the standard deviation or the range or some other statistics with the same units as the random variable would work.
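Spelling out the units point, with [X] denoting the units of the random variable X (my notation, not the comment's):

```latex
\left[\frac{X}{\operatorname{Var}(X)}\right] = \frac{[X]}{[X]^2} = [X]^{-1},
\qquad
\left[\frac{X}{\operatorname{SD}(X)}\right] = \frac{[X]}{[X]} = 1 \ \text{(unitless)}.
```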
I think you meant "humans wrt chickens" (not "chickens wrt humans"), as "h" is in the numerator.
(...)
I think you meant "chickens wrt humans" (not "humans wrt chickens"), as "c" is in the numerator.
Woops, yes, good catch.
R_PH (= R_HP), which is what I am trying to calculate, is akin to (Vc,tMc)/(Vh,tMh), not Vc,tMc+Vh,tMh.
I think this is the problem, then. You should not take and use the expected value of the ratio (Vc,tMc)/(Vh,tMh), for basically the reasons I gave previously that you should not in general (except when you condition on enough things or make certain explicit and justified assumptions) take expected values of relative moral weights. Indeed, these are moral weights, just aggregates. When you're interested in the impacts of an intervention on different individuals, you would sum the impacts over each individual, and then take the expected value (or sum expected individual impacts), i.e. E[Vc,tMc+Vh,tMh]. E[(Vc,tMc)/(Vh,tMh)] isn't generally useful for this unless you make further assumptions that are unjustified and plausibly wrong, e.g. that (Vc,tMc)/(Vh,tMh) and Vh,tMh are independent.
(You could estimate E[Vc,tMc]/E[Vh,tMh] instead, though, and that could be useful, if you also have an estimate of E[Vh,tMh].)
Note that if you divide a random variable with units by its variance, the result will not be unitless (it'll have the reciprocal units of the random variable), and so you would need to make sure the units match before adding.
I agree, but I do not expect this to be a problem:
A priori, I would expect any theory of consciousness to produce a mean moral weight of poultry birds relative to humans in pQALY/QALY [or QALY/pQALY].
Moreover, if this is not the case, it seems to me that weighting the various moral weight distributions by the reciprocal of their standard deviations (or any other metric, with or without units) would also not be possible:
As you point out, the terms in the numerator would both be unitless, and therefore adding them would not be a problem.
However, the terms in the denominator would have different units. For example, for 2 moral weight distributions MWA and MWB with units A and B, the terms in the denominator would have units A^-1 and B^-1.
Dividing by the standard deviation or the range or some other statistics with the same units as the random variable would work.
As explained above, I do not see how it would be possible to combine the results of different theories if these cannot be expressed in the same units.
E[(Vc,tMc)/(Vh,tMh)] isn't generally useful for this unless you make further assumptions that are unjustified and plausibly wrong, e.g. that (Vc,tMc)/(Vh,tMh) and Vh,tMh are independent.
In order to calculate something akin to (Vc,tMc)+(Vh,tMh) instead of (Vc,tMc)/(Vh,tMh), I would compute S_PH = T*PH*Q + H instead of R_PH = T*PH*Q/H (see definitions here), assuming:
All the distributions I defined in Methodology are independent.
All theories of consciousness produce a distribution for the moral weight of poultry birds relative to humans in QALY/âpQALY.
PH represents the weighted mean of all these distributions.
Under these assumptions (I have added the 1st to Methodology, and the 2nd and 3rd to Moral weight of poultry), E(R_PH) is a good proxy for E(S_PH) (which is what we care about, as you pointed out; see the sketch after this list):
S_PH = (R_PH + 1) H.
I defined H as a constant.
Consequently, the greater E(R_PH), the greater E(S_PH).
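A sketch of this identity with made-up positive samples (not the post's inputs, and ignoring sign conventions for Q; only the algebra matters here, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Made-up positive samples, purely illustrative (not the post's inputs).
T = rng.lognormal(0.0, 0.5, size=n)    # stands in for poultry living time per capita
Q = rng.lognormal(0.0, 0.5, size=n)    # stands in for the quality term
PH = rng.lognormal(-2.0, 2.0, size=n)  # stands in for the relative moral weight
H = 0.8                                # utility of human life, treated as a constant

R_PH = T * PH * Q / H
S_PH = T * PH * Q + H

# With H constant, S_PH = (R_PH + 1) * H, so E(S_PH) = (E(R_PH) + 1) * H,
# and E(S_PH) is increasing in E(R_PH).
print(np.allclose(S_PH, (R_PH + 1) * H))   # True
print(S_PH.mean(), (R_PH.mean() + 1) * H)  # agree up to floating point
```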
Normalizing PH (or HP) by its variance on each theory could introduce more arbitrarily asymmetric treatment between animals, overweight theories where the variance is lowest for reasons unrelated to the probability you assign to them (e.g. on some theories, capacity for welfare may be close to constant), and is pretty ad hoc. I would recommend looking into more general treatments of moral uncertainty instead, and just an approach like variance voting or moral parliament, applied to your whole expected value over outcomes, not PH (or HP).
As I discussed in other comments and the other links discussing the two envelopes problem, H should not be defined as constant (or independent from or uncorrelated with PH) without good argument, and on any given theory of consciousness, it seems pretty unlikely to me, since we still have substantial empirical uncertainty about human (and chicken) brains on any theory of consciousness. You can estimate the things you want this way, but the assumptions are too strong, so you shouldn't trust the estimates, and this is partly why you get the average chicken having greater capacity for welfare than the average human in expectation. Sometimes PH is lower on some empirical possibilities not because P is lower on those possibilities, but because H is greater on them, but you've assumed this can't be the case, so you may be severely underweighting human capacity for welfare.
If you instead assumed P were constant (although this would be even more suspicious), you'd get pretty different results.
I would recommend looking into more general treatments of moral uncertainty instead, and just an approach like variance voting or moral parliament, applied to your whole expected value over outcomes, not PH (or HP).
I will do, thanks!
You can estimate the things you want this way, but the assumptions are too strong, so you shouldn't trust the estimates, and this is partly why you get the average chicken having greater capacity for welfare than the average human in expectation.
Note that it is possible to obtain a mean moral weight much smaller than 1 with exactly the same method, but different parameters. For example, changing the 90th percentile of moral weight of poultry birds if these are moral patients from 10 to 0.1 results in a mean moral weight of 0.02 (instead of 2). I have added to this section one speculative explanation for why estimates for the moral weight tend to be smaller.
If you instead assumed P were constant (although this would be even more suspicious), you'd get pretty different results.
I have not defined P, but I agree I could, in theory, have estimated R_PH (and S_PH) based on P = "utility of poultry living time (-pQALY/person/year)". However, as you seem to note, this would be even more prone to error ("more suspicious"). The two methods are mathematically equivalent under my assumptions, and therefore it makes much more sense to me as a human to use QALY (instead of pQALY) as the reference unit.
Michael, once again, thank you so much for all these comments!
Regarding your 1st reason, you seem to be referring to a distinction between the following distributions:
PH = "moral weight of poultry birds relative to humans (QALY/pQALY)" (i.e. poultry birds in the numerator, and humans in the denominator).
HP = "moral weight of humans relative to poultry birds (pQALY/QALY)" (i.e. humans in the numerator, and poultry birds in the denominator).
However, I think both distributions contain the same information, as HP = PH^-1. E(PH) is not equal to E(HP)^-1 (as I noted here), but R = "negative utility of poultry living time as a fraction of the utility of human life" is the same regardless of which of the above metrics is used. For T = "poultry living time per capita (pyear/person/year)", Q = "quality of the living conditions of poultry (-pQALY/pyear)", and H = "utility of human life (QALY/person/year)", the 2 ways of computing R are:
Using PH, i.e. with QALY/person/year in the numerator and denominator of R:
R_PH = (T*PH*Q)/H.
Using HP, i.e. with pQALY/person/year in the numerator and denominator of R:
R_HP = (T*Q)/(HP*H).
Since HP = PH^-1, R_PH = R_HP.
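A sketch with made-up inputs (assuming NumPy) showing that the two parameterizations agree sample by sample even though E(PH) differs from 1/E(HP):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Made-up input samples (illustrative only, not the post's inputs).
T = rng.lognormal(0.0, 0.5, size=n)
Q = rng.lognormal(0.0, 0.5, size=n)
H = rng.lognormal(0.0, 0.5, size=n)
PH = rng.lognormal(-2.0, 2.0, size=n)  # moral weight of poultry relative to humans
HP = 1 / PH                            # moral weight of humans relative to poultry

R_PH = T * PH * Q / H
R_HP = T * Q / (HP * H)

print(np.allclose(R_PH, R_HP))   # True: the two parameterizations agree sample by sample
print(PH.mean(), 1 / HP.mean())  # but E(PH) differs from 1/E(HP)
```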
(I have skimmed the Felicifia thread, which has loads of interesting discussions! Nevertheless, for the reasons I have been providing here, I still do not understand why calculating expected moral weights is problematic.)
If you used E[HP] as a multiplicative factor to convert human welfare impacts into chicken welfare-equivalent impacts and measure everything in chicken welfare-equivalent terms, your analysis would give different results. In particular, E[HP]>1, which would tell you humans matter more individually (per year) than chickens, but you have E[PH]>1, which tells you chickens matter more than humans. The tradeoffs in this post would favor humans more.
I agree that the following 2 metrics are different:
R_PH_mod = (T*E(PH)*Q)/H.
R_HP_mod = (T*Q)/(E(HP)*H).
However, as far as I understand, it would not make sense to use E(PH) or E(HP) instead of PH or HP. I am interested in determining E(R_PH) = E(R_HP), and therefore the expected value should only be calculated after all the operations.
In general, to determine a distribution X, which is a function of X1, X2, …, and Xn, via a Monte Carlo simulation, I believe:
E(X) = E(X(X1, X2, …, Xn)).
For me, it would not make sense to replace an input distribution by its mean (as you seem to be suggesting), e.g. because E(A/B) is not equal to E(A)/E(B).
For me, it would not make sense to replace an input distribution by its mean (as you seem to be suggesting), e.g. because E(A*B) is not equal to E(A)*E(B).
I agree in general, but I think you're modelling A=PH as independent from T, Q and H, so you can get the expected value of the product as equal to the product of expected values. However, I also don't think you should model PH as independent from the rest.
I gave a poor example (I have now rectified it above), but my general point is valid:
The expected value of X should not be calculated by replacing the input distributions by their means.
For example, for X = 1/X1, E(1/X1) is not equal to 1/E(X1).
As a result, one should not use (and I have not used) expected moral weights.
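A minimal sketch of this example with an arbitrary positive distribution for X1 (assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.lognormal(0.0, 1.0, size=1_000_000)  # arbitrary positive, non-degenerate distribution

# Propagating the full distribution through X = 1/X1 and only then taking the mean
# gives a different answer than plugging in the mean of X1.
print((1 / x1).mean())  # E(1/X1)
print(1 / x1.mean())    # 1/E(X1)
```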
I agree that the input distributions of my analysis might not be independent. However, that seems a potential concern for any Monte Carlo simulation, not just ones involving moral weight distributions.