The amount of global spending on each cause is basically irrelevant if you think most of it is non-impactful. Imagine that John Q Warmglow donates $1 billion to global health, but he stipulates that that billion can only be spent on PlayPumps. Then global spending on GHD is up by $1 billion, but the actual marginal value of money to GHD is unchanged, because that $1 billion did not go to the best opportunities, the ones that would move down the marginal utility of money to the whole cause area. I understand you’re aware of this, which is why your Fermi estimates focus on the marginal value of money to each cause by comparing the best areas within each cause. But the level of global spending on a cause contributes very little to the marginal value of money if most of that spending is low-impact.
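A minimal sketch of this point, with entirely made-up numbers and opportunity names: if the marginal value of money to a cause is set by the best opportunity that still has room for funding, a $1 billion donation earmarked for a low-impact program raises total spending without touching that margin.

```python
# Hypothetical funding opportunities: (name, value per dollar, room for more funding in $).
# All figures are invented for illustration.
opportunities = [
    ("top charity A", 100, 300e6),
    ("top charity B", 80, 500e6),
    ("playpumps", 0.1, 1e12),  # huge room for funding, very low impact
]

def marginal_value(unrestricted_funding):
    """Fill opportunities from best to worst; return the value per dollar of the next dollar."""
    remaining = unrestricted_funding
    for name, value_per_dollar, room in sorted(opportunities, key=lambda o: -o[1]):
        if remaining < room:
            return value_per_dollar
        remaining -= room
    return 0.0

print(marginal_value(500e6))  # 80: the next dollar would go to "top charity B"
# A further $1e9 restricted to PlayPumps leaves this number unchanged,
# because it never displaces the best remaining opportunities.
```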
I don’t have a satisfying answer to what x is for me. I will say somewhere between 0.5 and 1.5, corresponding to the intuition that neither GHD nor FAW dominates the other. I would guess my cruxes with you come from two sources:
My median moral weight on chickens is much less than 0.33, ~2 OOMs less.[1] This is a difficult inferential gap to cross.
I think the quality of FAW cost-effectiveness estimates is vastly lower than GHD cost-effectiveness estimates, making the comparison apples-to-oranges. Saulius’s estimates are a good start on a hard problem, but
There are a lot of made-up numbers based on intuition (e.g. their assumption of 24% compliance with pledges in the absence of follow-up pressure is wildly out of line with my intuitions)
There are likely steeply declining returns to effort, given that campaigns will initially target the lowest-hanging fruit and eventually things will get much harder. A cost-effectiveness estimate based on early successful campaigns is therefore not representative of the value of future funding.
This is not a knock on people who are doing the best they can with limited data. I am just not comfortable taking these as unbiased estimates and I put a pretty high premium on having more certain evidence.
I see my views as consistent with expected utility maximization coupled with risk aversion, but not with expected value maximization (which, as it’s commonly defined, implies risk neutrality). The more uncertainty you have about a cause area, the more a risk-averse decision-maker will want to hedge. (Edit: I also really like this argument for having a preference for certainty.)
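As a rough illustration of that distinction (the numbers and the square-root utility function are assumptions introduced here, not anything the commenters have endorsed): an expected value maximizer is indifferent between a certain benefit and a long-shot gamble with the same mean, while an expected utility maximizer with a concave utility over welfare prefers the certain option, which is the sense in which uncertainty pushes a risk-averse decision-maker towards hedging.

```python
import math

def expected_utility(prospects, u):
    """prospects: list of (probability, welfare outcome) pairs."""
    return sum(p * u(w) for p, w in prospects)

u_linear = lambda w: w              # risk neutrality (expected value maximization)
u_concave = lambda w: math.sqrt(w)  # one arbitrary risk-averse utility over welfare

certain = [(1.0, 10.0)]              # 10 units of welfare for sure
gamble = [(0.1, 100.0), (0.9, 0.0)]  # same expected welfare, far more uncertainty

print(expected_utility(certain, u_linear), expected_utility(gamble, u_linear))    # 10.0 vs 10.0
print(expected_utility(certain, u_concave), expected_utility(gamble, u_concave))  # ~3.16 vs 1.0
```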
I understand RP is estimating welfare ranges rather than moral weights, but I think you have to do some sneaky philosophical equivalences to use them as weights in a cost-effectiveness estimate. I’m open to being wrong about that.
The amount of global spending on each cause is basically irrelevant if you think most of it is non-impactful.
(I realise this was posted a month ago but) this sounds to me like it overstates how bad global health aid is? I think all GiveWell top charities are existing organisations and programs that GiveWell merely advocates directing more funding to, so surely effective aid existed before GiveWell did. Moreover, I have a not-particularly-concrete impression that e.g. vaccine distribution is only not an EA cause because it was already fully funded (at least in the easy cases) by non-EAs, so that our top charities are very much “top remaining” and not “best ever”.
I have the impression that even if EA and Open Phil collectively decided tomorrow to move all of our global health funding to animals, there would still be a lot of effective global development aid—there would still be e.g. Gavi and the Bill and Melinda Gates Foundation (which, sure, does ineffective things, but does effective things too) and many others. Wouldn’t that still meet the need you identified in your original answer for a compromise position?
Thanks for the follow up!

I understand you’re aware of this, which is why your Fermi estimates focus on the marginal value of money to each cause by comparing the best areas within each cause.
Just to clarify, I only care about the marginal cost-effectiveness. However, I feel like some people intrinsically care about spending/neglectedness independently of how it relates to marginal cost-effectiveness.
But the level of global spending on a cause contributes very little to the marginal value of money if most of that spending is low-impact.
Note this also applies to animal welfare.
I don’t have a satisfying answer to what x is for me. I will say somewhere between 0.5 and 1.5, corresponding to the intuition that neither GHD nor FAW dominates the other. I would guess my cruxes with you come from two sources:
My median moral weight on chickens is much less than 0.33, ~2 OOMs less.[1] This is a difficult inferential gap to cross.
I think the quality of FAW cost-effectiveness estimates is vastly lower than GHD cost-effectiveness estimates, making the comparison apples-to-oranges. Saulius’s estimates are a good start on a hard problem, but
There are a lot of made-up numbers based on intuition (e.g. their assumption of 24% compliance with pledges in the absence of follow-up pressure is wildly out of line with my intuitions)
There are likely steeply declining returns to effort, given that campaigns will initially target the lowest-hanging fruit and eventually things will get much harder. A cost-effectiveness estimate based on early successful campaigns is therefore not representative of the value of future funding.
Thanks for explaining your views! Your moral weight is 1 % (= 10^-2) of mine[1], and I multiplied Saulius’ mainline estimate of 41 chicken-years per $ by 0.2[2]. So, ignoring other disagreements, your marginal cost-effectiveness would have to be 1.32 % (= 0.2/(1.51*10^3*0.01)) of the non-marginal cost-effectiveness linked to Saulius’ mainline estimate for corporate campaigns for chicken welfare to be as cost-effective as GiveWell’s top charities. Does this sound right? Open Phil did not share how they got to their adjustment factor of 1/5, and I do agree it would be great to have more rigorous estimates of the cost-effectiveness of animal welfare interventions, so I would say your intuition here is reasonable, although I guess you are downgrading Saulius’ estimate too much.
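Spelling out the arithmetic behind the 1.32 % figure (the variable names below are illustrative; the numbers are the ones given in the comment above):

```python
ratio_with_original_weight = 1.51e3  # corporate campaigns vs GiveWell's top charities under the larger moral weight
marginal_adjustment = 0.2            # Open Phil's 1/5 factor for marginal vs average funding opportunities
relative_moral_weight = 0.01         # a moral weight on chickens ~2 OOMs smaller

breakeven_fraction = marginal_adjustment / (ratio_with_original_weight * relative_moral_weight)
print(f"{breakeven_fraction:.2%}")   # 1.32%
```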
On the other hand, I find it difficult to understand how one can get to such a low moral weight. How many times as large would your moral weight become conditioning on (risk-neutral) expected total hedonistic utilitarianism?
Thanks for clarifying. Given i) 1 unit of welfare with certainty, and ii) 10 x units of welfare with 10 % chance (i.e. x units of welfare in expectation), what is the x which would make you value i) as much as ii) (for me, the answer would be 1)? Why not a higher/lower x? Are your answers to these questions compatible with your intuition that corporate campaigns for chicken welfare are 0.5 to 1.5 times as cost-effective as GiveWell’s top charities? If it is hard to answer these questions, is there a risk of your risk aversion not being supported by seemingly self-evident assumptions[3], and instead being a way of formalising/rationalising your pre-formed intuitions about cause prioritisation?
[1] I strongly endorse expected total hedonistic utilitarianism (here is your sneaky philosophical equivalence :), and I am happy to rely on Rethink Priorities’ median welfare ranges.
[2] Since Open Phil thinks “the marginal FAW [farmed animal welfare] funding opportunity is ~1/5th as cost-effective as the average from Saulius’ analysis [which is linked just above]”.
[3] I think it makes sense to be risk averse with respect to money, but risk neutral with respect to welfare, which is what is being discussed here.
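To make the trade-off in the question above concrete (the square-root utility below is purely illustrative and not anyone's stated view): under risk neutrality over welfare the break-even x is 1, whereas a concave utility demands more expected welfare from the gamble; with u(w) = w^0.5, for instance, 0.1*(10x)^0.5 = 1 gives x = 10.

```python
def breakeven_x(u, p=0.1, multiple=10.0, certain=1.0, step=0.001):
    """Smallest x (by crude search) such that a p chance of multiple*x units of welfare
    is valued at least as much as `certain` units for sure, under utility u."""
    x = 0.0
    while p * u(multiple * x) < u(certain):
        x += step
    return x

print(breakeven_x(lambda w: w))         # ~1.0: risk neutrality over welfare
print(breakeven_x(lambda w: w ** 0.5))  # ~10.0: an illustrative risk-averse utility
```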
I want to be clear that I see risk aversion as axiomatic. In my view, there is no “correct” level of risk aversion. Various attitudes to risk will involve biting various bullets (St Petersburg paradox on the one side, concluding that lives have diminishing value on the other side), but I view risk preferences as premises rather than conclusions that need to be justified.
I don’t actually think moral weights are premises. However, I think in practice our best guesses on moral weights are so uninformative that they don’t admit any better strategy than hedging, given my risk attitudes. (That’s the view expressed in the quote in my original comment.) This is not a bedrock belief. My views have shifted over time (in 2018 I would have scoffed at the idea of THL and AMF being even in the same welfare range), and will probably continue to shift.
If it is hard to answer these questions, is there a risk of your risk aversion not being supported by seemingly self-evident assumptions[3], and instead being a way of formalising/rationalising your pre-formed intuitions about cause prioritisation?
Yes, I am formalizing my intuitions about cause prioritization. In particular, I am formalizing my main cruxes with animal welfare—risk aversion and moral weights. (These aren’t even cruxes with “we should fund AW”, they are cruxes only with “AW dominates GHD”. I do think we should reallocate funding from GHD to AW on the margin.)
Is my risk aversion just a guise for my preference that GHD should get lots of money? I comfortably admit that my choice to personally work on GHD is a function of my background and skillset. I was a person from a developing country, and a development economist, before I was an EA. But risk aversion is a universal preference descriptively – it shouldn’t be a high bar to believe that I’m actually just a risk averse person.
At the end of the day, I hold the normie belief that good things are good. Children not dying of malaria is good. Chickens not living in cages is good. Philosophical gotchas and fragile calculations can supplement that belief but not replace it.
Thanks for clarifying.

My views have shifted over time (in 2018 I would have scoffed at the idea of THL and AMF being even in the same welfare range), and will probably continue to shift.
Are you saying that you are more likely than not to update towards animal welfare, or that you expect to update towards animal welfare? The former is fine. If the latter, it makes sense for you to update all the way now (one should not expect one’s future beliefs to differ from one’s current beliefs).
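The parenthetical is the martingale property of Bayesian updating: one's current credence equals the probability-weighted average of the credences one expects to hold after seeing the evidence. A toy check with made-up numbers:

```python
# Made-up prior and likelihoods for a single binary observation.
prior = 0.3
p_obs_if_true = 0.8    # P(observation | hypothesis)
p_obs_if_false = 0.4   # P(observation | not hypothesis)

p_obs = prior * p_obs_if_true + (1 - prior) * p_obs_if_false
posterior_if_obs = prior * p_obs_if_true / p_obs
posterior_if_no_obs = prior * (1 - p_obs_if_true) / (1 - p_obs)

expected_posterior = p_obs * posterior_if_obs + (1 - p_obs) * posterior_if_no_obs
print(expected_posterior)  # ~0.3: the expected future belief equals the current belief
```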
I don’t actually think moral weights are premises.
Nice to know.
Is my risk aversion just a guise for my preference that GHD should get lots of money? I comfortably admit that my choice to personally work on GHD is a function of my background and skillset.
One could work in a certain area, but support moving marginal donations from that area to animal welfare[1], as you just illustrated:
I do think we should reallocate funding from GHD to AW on the margin.
Thanks for being transparent about this! I think it would be good for more people like you, who do not think spending on animal welfare should increase a lot, to clarify what they believe is more cost-effective at the margin (as this is what matters in practice).
But risk aversion is a universal preference descriptively – it shouldn’t be a high bar to believe that I’m actually just a risk averse person.
Right, but risk aversion with respect to resources makes sense because welfare increases sublinearly with resources. I assume people are less risk averse with respect to welfare. Even if people are significantly risk averse with respect to welfare, I do not think we should elevate this to being normative. People also discount the welfare of their future selves and foreigners. People in and governments of high-income countries could argue they are already doing something pretty close to optimal with respect to supporting people in extreme poverty given their descriptive preferences. This may be right, but I would say such preferences are misguided, and that they should be much more impartial with respect to nationality.
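A small sketch of the first sentence (the logarithmic welfare function and the dollar amounts are assumptions for illustration): an agent who is risk neutral over welfare, but whose welfare is concave in resources, will still prefer a certain amount of money to a gamble with the same expected amount, i.e. they look risk averse over resources.

```python
import math

welfare = math.log  # welfare increases sublinearly with resources (illustrative choice)

certain_resources = 50_000
gamble = [(0.5, 10_000), (0.5, 90_000)]  # same expected resources, more variance

welfare_certain = welfare(certain_resources)
expected_welfare_gamble = sum(p * welfare(r) for p, r in gamble)

print(welfare_certain > expected_welfare_gamble)  # True: risk aversion over money,
# even though the agent maximizes expected welfare (risk neutrality over welfare).
```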
At the end of the day, I hold the normie belief that good things are good. Children not dying of malaria is good. Chickens not living in cages is good. Philosophical gotchas and fragile calculations can supplement that belief but not replace it.
I think the vast majority of people arguing that animal welfare should receive way more funding would agree with the above. I certainly do. I just do not think the calculations are fragile to the extent that the current portfolio can be considered anywhere close to optimal. I Fermi estimated buying organic eggs is 2.11 times as cost-effective as donating to GiveWell’s top charities[2], and I think that is far from the most cost-effective intervention in the space, whereas your guess that corporate campaigns for chicken welfare are 0.5 to 1.5 times as cost-effective as GiveWell’s top charities suggests the best interventions in the space are roughly on par with them.
[1] I support increasing donations to animal welfare, but I have not been paid for my work in the area. Personal fit plays much less of a role in deciding donations than in deciding jobs. It still plays some role because one could be better suited to assess donation opportunities in some areas.
[2] My estimate relies on Rethink Priorities’ median welfare range for chicken, but it does not make any use of Saulius’ estimates, which are one of your 2 major sources of scepticism.