Hi Bob & team,
Really great work. Regardless of my specific disagreements, I do think calculating moral weights for animals is literally some of the highest value work the EA community can do, because without such weights we can't compare animal welfare causes to human-related global health/longtermism causes, and hence cannot identify and direct resources towards the most important problems. And I say this as someone who has always donated to human causes over animal ones, and who is not, in fact, vegan.
With respect to the post and the related discussion:
(1) Fundamentally, the quantitative proxy model seems conceptually sound to me.
(2) I do disagree with the idea that your results are robust to different theories of welfare. For example, I myself reject hedonism and accept a broader view of welfare (given that we care about a broad range of things beyond happiness, e.g. life/freedom/achievement/love/whatever). If (a) such broad welfarist views are correct, (b) you place a sufficiently high weight on the other elements of welfare (e.g. life per se, even if neutrally valenced), and (c) you don't believe animals can enjoy said elements of welfare (e.g. if most animals aren't cognitively sophisticated enough to have preferences over continued existence), then an additional healthy year of human life would plausibly be worth a lot more than an equivalent animal year, even after accounting for similar degrees of suffering and the relevant moral weights as calculated.
(3) I would like to say, for the record, that a lot of the criticism you're getting (and I don't exempt myself here) is probably subject to a lot of motivated reasoning. I am personally uncertain as to the degree to which I should discount my own conclusions for this reason.
(4) My main concern, as someone who does human-related cause prioritization research, is the meat eater argument and whether helping to save human lives is net negative from an overall POV, given the adverse consequences for animal suffering. I am moderately optimistic that this is not so, and that saving human lives is net positive (as we want/need it to be). Having very roughly run the numbers myself using RP's unadjusted moral weights (i.e. not taking into account point 2 above) and inputting other relevant data (e.g. on per capita meat consumption), my approximate sense is that in saving lives we're basically buying 1 full week of healthy human life for around 6 days of chicken suffering, or a little over 2 days of equivalent human suffering, which is worth it.
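For concreteness, here is a minimal sketch of the structure of that kind of back-of-envelope comparison; every number below is a hypothetical placeholder rather than a figure from my actual calculations or from RP's estimates:

```python
# Sketch of a meat-eater back-of-envelope comparison; all inputs are placeholders.

healthy_days_per_life_year = 365 * 0.8        # placeholder health adjustment per extra life-year
chicken_suffering_days_per_life_year = 300    # placeholder: chicken-days of suffering caused per extra person-year
chicken_welfare_range = 0.33                  # placeholder moral weight of a chicken relative to a human

# Convert chicken-days of suffering into "human-equivalent" days of suffering
human_equiv_suffering_days = chicken_suffering_days_per_life_year * chicken_welfare_range

# Cost of "buying" one week of healthy human life, under these placeholders
days_bought = 7
cost_in_chicken_days = days_bought * chicken_suffering_days_per_life_year / healthy_days_per_life_year
cost_in_human_equiv_days = days_bought * human_equiv_suffering_days / healthy_days_per_life_year

print(f"{days_bought} healthy human days cost ~{cost_in_chicken_days:.1f} chicken-suffering days "
      f"(~{cost_in_human_equiv_days:.1f} human-equivalent days).")
```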
Thanks for the kind words about the project, Joel! Thanks too for these thoughtful and gracious comments.
1. I hear you re: the quantitative proxy model. I commissioned the research for that one specifically because I thought it would be valuable. However, it was just so difficult to find the relevant information. To even begin making the calculations work, we had to fill in a lot of values semi-arbitrarily. Ultimately, we decided that there just wasn't enough to go on.
2. My question about non-hedonist theories of welfare is always the same: just how much do non-hedonic goods and bads increase humans' welfare range relative to animals' welfare ranges? As you know, I think that even if hedonic goods and bads aren't all of welfare, they're a lot of it (as we argue here). But suppose you think that non-hedonic goods and bads increase humans' welfare range 100x over all other animals. In many cost-effectiveness calculations, that would still make corporate campaigns look really good (a toy illustration is sketched at the end of this comment).
3. I appreciate your saying this. I should acknowledge that I'm not above motivated reasoning either, having spent a lot of the last 12 years working on animal-related issues. In my own defense, I've often been an animal-friendly critic of pro-animal arguments, so I think I'm reasonably well-placed to do this work. Still, we all need to be aware of our biases.
4. This is a very interesting result; thanks for sharing it. I've heard of others reaching the same conclusion, though I haven't seen their models. If you're willing, I'd love to see the calculations. But no pressure at all.
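As a toy illustration of the scaling point in 2 above (the 1,000x figure is a hypothetical placeholder, not an actual RP or CEARCH estimate):

```python
# Toy scaling check: a 100x non-hedonic boost to humans divides any animal
# cost-effectiveness multiple by 100; both figures below are placeholders.

campaign_multiple_under_rp_weights = 1000   # placeholder: campaigns vs a human-charity benchmark
non_hedonic_human_boost = 100               # the 100x adjustment discussed in point 2

adjusted_multiple = campaign_multiple_under_rp_weights / non_hedonic_human_boost
print(f"After a {non_hedonic_human_boost}x adjustment, campaigns would still be "
      f"~{adjusted_multiple:.0f}x the benchmark under this placeholder.")
```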
Hi Joel,
Hedonism is compatible with caring about "life/freedom/achievement/love/whatever", because all of those describe sets of conscious experiences, and hedonism is about valuing conscious experiences. I cannot think of something I value independently of conscious experiences, but I would welcome counterexamples.
There's the standard philosophical counterexample of the experience machine, including the reformulated Joshua Greene example that addressed status quo bias. But basically, the idea is this: would you rather the world be real, or just an illusion while you're trapped as a brain in a vat (with the subjective sensory experience itself otherwise identical)? Almost certainly (and most people give this answer), you'll want the world to be real. That's because we don't just want to think that we're free/successful/in a loving relationship; we also want to actually be all those things.
In less philosophical terms, you can think about how you would not want your friends and family to actually hate you (even if you couldn't tell the difference). And that would also be why people care about having non-moral impact even after they're dead (e.g. authors hoping their posthumously published book is successful, or an athlete wanting their achievements to stand the test of time and not be bested at the next competition, or a mathematician wanting to actually prove some conjecture and not just think he did).
Thanks for the reply, Joel!
Whether I would rather the world be real or simulated depends on the specific properties of the real and simulated worlds, but my answer would certainly be guided by hedonic considerations:
My personal hedonic utility would be the same in the simulated and real worlds, so it would not be a deciding factor.
If I were the only (sentient) being in the simulated world, and there were lots of (sentient) beings in the real world, the absolute value of the total hedonic utility would be much larger for the real world.
As a result, I would prefer:
The real world if I expected the mean experience per being there to be positive (i.e. positive total hedonic utility).
The simulated world if I expected the mean experience per being in the real world to be negative (i.e. negative total hedonic utility), and I had positive experiences myself in the simulated world.
Hedonism says all that matters is conscious experiences, but that does not mean we should be indifferent between 2 worlds where our personal conscious experiences are the same. We still have to look into the experiences of other beings, unless we are perfectly egoistic, which I do not think we should be.
For me, a true counterexample to hedonism would have to present 2 worlds in which expected total (not personal) hedonistic utility (ETHU) were the same, and people still preferred one of them over the other. However, since we do not understand well how to calculate ETHU, we can only ensure 2 worlds have the same amount of it if they are exactly the same, in which case it does not make sense to prefer one over the other.
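As a rough formalisation of the preference I described above (my toy encoding, with the simplifying assumption that I am the only sentient being in the simulated world):

```python
def preferred_world(mean_welfare_real: float, own_welfare_sim: float) -> str:
    """Toy encoding of the decision rule above.

    Assumes the simulated world contains only me (so its total hedonic utility
    is just my own welfare), while the real world's total is dominated by the
    many other sentient beings in it.
    """
    if mean_welfare_real > 0:
        return "real world"       # large positive total hedonic utility there
    if mean_welfare_real < 0 and own_welfare_sim > 0:
        return "simulated world"  # avoid a large negative total; my own experiences are positive
    return "depends on further details"
```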
I agree. However, as I commented here, that is only an argument against egoistic hedonism, not altruistic hedonism (which is the one I support).
You can imagine a) everyone in their own experience machine, isolated from everyone else, so that all the other "people" inside are not conscious (but the people believe the others are conscious, and there's no risk they'll find out they aren't), or b) people genuinely interacting with each other (in the real world, or in virtual reality), making real connections with other real people. I think most people would prefer the latter for themselves, even if it makes them somewhat worse off. An impartial hedonistic view would recommend disregarding these preferences and putting everyone in the isolated experience machines anyway.
Thanks for the clarification! Some thoughts:
Not related to your point, but I would like to note it seems quite extreme to reject the application of hedonism in the context of welfare range estimates based on such a thought experiment.
It is unclear to me whether ETHU is greater in a) or b). It depends on whether it is more efficient to produce it via experience machines or genuine interactions (I suppose utility per being would be higher with experience machines, but maybe not utility per unit of resources). So I do not think people preferring b) over a) is good evidence that there is something else which matters besides ETHU.
It does not seem possible to make a hard distinction between a) and b). I am only able to perceive reality via my own conscious experience, so there is a sense in which my body is in fact an experience machine.
I believe most people preferring b) over a) is very weak evidence that b) is better than a). Our intuitions are biased towards assessing the thought experiment based on how the words used to describe it make us feel. As a 1st approximation, I think people would be thinking about whether "genuine" and "real" sound better than "machine" and "isolated", and they do, so I am not surprised most people prefer b).
Being genuinely loved, rather than just believing you are loved, could matter to your welfare even if it doesn't affect your conscious experiences. Knowing the truth, even if it makes no difference to your experiences. Actually achieving something, rather than falsely believing you achieved it.
Thanks for the examples, Michael!
I would say they work as counterexamples to egoistic hedonism, but not to altruistic hedonism (the one I support). In each pair of situations you described, my mental states (and therefore personal hedonic utility) would be the same, but the experiences of others around me would be quite different (and so would total hedonic utility):
Pretending to love should feel quite different from loving, and being fake generally leads to worse outcomes.
One is better positioned to improve the mental states of others if one knows what is true.
Actually achieving something means actually improving the mental states of others (to the extent one is altruistic), rather than only believing one did so.
For these reasons, rejecting wireheading is also compatible with hedonism. A priori, it does not seem like the best way to help others. One can specify in thought experiments that "everyone else['s hedonic utility] is taken care of", but I think it is quite hard to condition human answers on that, given that lots of our experiences go against the idea that having delusional experiences is optimal for both us and others.
Would love to see the draft calculations from point 4 as well.
Hi Ula,
FYI, this and this could also be relevant for analysing the meat eater problem. The posts are not updated with RP's moral weight estimates, but the models should still be useful (and I am happy to update them with RP's estimates if you think it would be worthwhile).
Will DM on slack!
Hi Joel,
Great to know you are considering impacts on animals! Even if the meat eater problem is not a major concern according to your calculations, has CEARCH considered that the best animal welfare interventions may be orders of magnitude more cost-effective than GiveWell's top charities? CEARCH uses a cost-effectiveness bar of 10 times the cost-effectiveness of GiveWell's top charities, but I think this is very low. I estimated corporate campaigns for broiler welfare are 1.71 k times as cost-effective as saving a life at the lowest cost among GW's top charities.
With respect to the meat eater problem, I think the conclusion depends on the country. This influences the per capita consumption of animals, how much of each animal species is consumed, and the conditions of the animals. High-income countries will tend to have greater consumption per capita and worse conditions, given the greater prevalence of factory farming. For reference:
I estimated the annual suffering of all farmed animals combined is 4.64 times the annual happiness of all humans combined, which goes against your conclusion. For simplicity, for farmed animals of every species, I set the welfare per unit time (as a fraction of the welfare range) to the value I got for broilers in a reformed scenario.
However, I estimated accounting for farmed animals only decreases the cost-effectiveness of GiveWell's top charities by 14.5 %, which is in line with your conclusion. That said, I am underestimating the reduction in cost-effectiveness because I am using current consumption, which will tend to increase with economic growth.
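To make the structure of that adjustment explicit, here is a rough sketch of how such a correction can be computed (not my actual model; all values are placeholders and do not reproduce the 14.5 % figure above):

```python
# Sketch of adjusting a life-saving intervention's cost-effectiveness for farmed-animal harms.
# All inputs are placeholders.

extra_person_years_per_dollar = 0.01     # placeholder: extra years of human life bought per $
human_welfare_per_year = 1.0             # normalisation: human-equivalent welfare per healthy year
animal_harm_per_person_year = 0.2        # placeholder: human-equivalent farmed-animal harm per extra person-year

gross_benefit = extra_person_years_per_dollar * human_welfare_per_year
net_benefit = extra_person_years_per_dollar * (human_welfare_per_year - animal_harm_per_person_year)

reduction = 1 - net_benefit / gross_benefit
print(f"Cost-effectiveness reduced by {reduction:.1%} under these placeholders.")
```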
I think considering impacts on animals may well affect CEARCH's prioritisation:
Interventions in different countries may have very different impacts on animals (as illustrated by the 2 distinct conclusions above). I guess this is more relevant for CEARCH than for GiveWell because I have the impression you have been assessing interventions whose beneficiaries are from a less homogeneous set of countries, which means the impacts on animals will vary more, and therefore cannot be neglected so lightly.
Interventions to extend life have different implications from interventions to improve quality of life. In general, interventions which improve quality of life without affecting lifespan and income much will have smaller impacts on animals (at least in the near term, i.e. neglecting how population size changes economic growth, and hence the trajectory of animal consumption). This is relevant to CEARCH because you have looked not only into interventions mostly saving lives and increasing income, but also into mental health.
I also encourage you to publish your estimates regarding the meat eater problem. I am not aware of any evaluator or grantmaker (aligned with effective altruism or not) having ever published a cost-effectiveness analysis of an intervention to improve human welfare which explicitly considered the impacts on farmed animal welfare (although I am aware of one other besides you which has an internal analysis). So CEARCH would be the 1st to do so. For the reasons above, I think it would also be great if you included impacts on animals as a standard feature of your cost-effectiveness analyses.