Global health & development is actually philosophically defensible, and shouldn't necessarily be swamped by either x-risk reduction or animal welfare.
Nice post, Richard!

I think it would be a little bit of a surprising and suspicious convergence if the best interventions to improve human health (e.g. GiveWell's top charities) were also the best to reliably improve global capacity. Some areas which look pretty good to me on this worldview:

AI governance and coordination.
Global priorities research.
Improving decision making (especially in important institutions).
Improving individual reasoning or cognition.
Safeguarding liberal democracy.
Maybe this is a nitpick, but I wonder whether it would be better to say "global human health and development/wellbeing" instead of "global health and development/wellbeing" whenever animals are not being considered. I estimated the scale of the annual welfare of all farmed animals is 4.64 times that of all humans, and that of all wild animals 50.8 million times that of all humans.
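(For readers unfamiliar with these estimates, here is a minimal sketch of how such scale comparisons are typically structured; the populations and welfare ranges below are placeholders for illustration, not the actual methodology or inputs behind the 4.64 and 50.8 million figures.)

```python
# Placeholder populations and welfare ranges (relative to a human's),
# NOT the actual inputs behind the figures quoted above.
def welfare_scale(population, welfare_range_vs_human):
    # Scale of a group's annual welfare ~ population x welfare range per
    # individual, with a human's welfare range normalised to 1.0.
    return population * welfare_range_vs_human

humans = welfare_scale(8.1e9, 1.0)
farmed = welfare_scale(1.1e11, 0.34)  # e.g. mostly chickens and farmed fish

print(farmed / humans)  # ~4.6 with these placeholder numbers
```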
Fwiw, I think Greg's essay is one of the most overweighted in forum history (as in, not necessarily overrated, but people put way too much weight on its argument). It's a highly speculative argument with no real-world grounding, and in practice we know of many well-evidenced socially beneficial causes that do seem convergently beneficial in other areas: one of the best climate change charities seems to be among the best air pollution charities; deworming seems to be beneficial for education (even if the magnitude might have been overstated); cultivated meat could be a major factor in preventing climate change (subject to it being created by non-fossil-fuel-powered processes).

Each of these side effects has an asterisk by it, and yet I find it highly plausible that an asterisked side effect of a well-evidenced cause could actually turn out to be a much better intervention than essentially evidence-free work done on the very long term, especially when the latter is developing a history of unforeseen consequences.

This isn't to say we should casually assume well-evidenced work on the short term is better for the long term, but I think we need much better reasons for assuming it can't be than an eight-year-old speculative essay.
I don't think these examples illustrate that "bewaring of suspicious convergence" is wrong.
For the two examples I can evaluate (the climate ones), there are co-benefits, but there isn't full convergence with regard to optimality.
On air pollution, the most effective interventions for climate are not the most effective interventions for air pollution, even though decarbonization is good for both. See e.g. here, where the best intervention for air pollution (reducing sulfur in diesel) would be one with low climate benefits; and I think that if the chart were fully scope-sensitive and not made with the intention to showcase co-benefits, the distinctions would probably be larger, e.g. moving from coal to gas is a 15x improvement on air pollution but only a 2x improvement on emissions.

And the reason is that the different target metrics (carbon emissions, air pollution mortality) are correlated but do not map onto each other perfectly, so optimizing for one does not maximize the other.
Same thing with alternative proteins: a strategy focused on reducing animal suffering would likely (depending on moral weights) prioritize APs for chicken, whereas a climate-focused strategy would clearly prioritize APs for beef. (There's a separate question of whether alternative proteins are an optimal climate strategy at all, which I think is not really established.)

I think what these examples show is that we often have interventions with correlated benefits, and that it is worth asking whether one should optimize for both metrics jointly; given that there isn't really an inherent reason to separate, say, lives lost from climate from lives lost from air pollution, it could make sense to prioritize interventions which do not fully optimize either dimension. But if one decides to optimize for a single dimension, then not expecting to also accidentally optimize the other dimension (i.e. "bewaring of suspicious convergence") continues to be good advice; or, more narrowly, it is at least not discredited by these examples.
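To make the structure of this point concrete, here is a minimal sketch with purely illustrative scores (the intervention names echo the examples above, but none of the numbers come from a real analysis): two positively correlated metrics can still disagree about which option is best.

```python
# Purely illustrative scores (hypothetical units); the point is only that
# two positively correlated impact metrics can pick different winners.
interventions = {
    "reduce sulfur in diesel": {"climate": 0.5, "air_pollution": 20.0},
    "coal-to-gas switching":   {"climate": 2.0, "air_pollution": 15.0},
    "full decarbonisation":    {"climate": 10.0, "air_pollution": 8.0},
}

best_for_climate = max(interventions, key=lambda k: interventions[k]["climate"])
best_for_air     = max(interventions, key=lambda k: interventions[k]["air_pollution"])

print(best_for_climate)  # full decarbonisation
print(best_for_air)      # reduce sulfur in diesel
```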
Hey Johannes :)

To be clear, I think the original post is uncontroversially right that it's very unlikely that the best intervention for A is also the best intervention for B. My claim is that, when something is well evidenced to be optimal for A, and perhaps well evidenced to be high tier for B, you should have a relatively high prior that it's going to be high tier or even optimal for some related concern C.

Where you have actual evidence available for how effective various interventions are for C, this prior is largely irrelevant: you look at the evidence in the normal way. But when all interventions targeting C are highly speculative (as they universally are for longtermism), that prior seems to carry much more weight.
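One way to see why the prior looms larger when direct evidence is speculative: in the textbook normal-normal Bayesian update, the posterior mean is a precision-weighted average of the prior mean and the observation, so very noisy evidence barely moves you off the prior. A minimal sketch, with hypothetical numbers:

```python
# Standard normal-normal Bayesian update: the posterior mean is a
# precision-weighted average of the prior mean and the observation.
def posterior_mean(prior_mu, prior_var, obs, obs_var):
    w = (1 / prior_var) / (1 / prior_var + 1 / obs_var)
    return w * prior_mu + (1 - w) * obs

# Hypothetical numbers: the prior (from convergence with A and B) says the
# intervention is decent for C; direct evidence about C is weak vs strong.
print(posterior_mean(prior_mu=5.0, prior_var=1.0, obs=0.0, obs_var=100.0))  # ~4.95: prior dominates
print(posterior_mean(prior_mu=5.0, prior_var=1.0, obs=0.0, obs_var=0.01))   # ~0.05: evidence dominates
```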
Interesting, thanks for clarifying!

Just to make sure I fully understand: where does that intuition come from? Is it that there is a common structure to high impact? (E.g. if you think APs are good for animals, you also think they might be good for climate, because some of the goodness comes from the evidence of modular, scalable technologies getting cheap and gaining market share?)
Partly from scepticism about the highly speculative arguments for "direct" longtermist work, on which I think my prior is substantially lower than that of most of the longtermist community (though I strongly suspect selection effects, and that this scepticism would be relatively broadly shared further from the core of the movement).

Partly from something harder to pin down: that good outcomes do tend to cluster in a way that e.g. GiveWell seem to recognise but, AFAIK, have never really tried to account for (in late 2022 they were still citing that post while saying "we basically ignore these"). So if we're trying to imagine the whole picture, we need to have some kind of priors anyway.* Mine are some combination of considerations like:
there are a huge number of ways in which people tend to behave more generously when they receive generosity, and it's possible the ripple effects of this are much bigger than we realise (small ripples over a wide group of people that are invisibly small per person could still be momentous);
having healthier, more economically developed people will tend to lead to more economically developed regions (I didn't find John's arguments against randomistas driving growth persuasive; e.g. IIRC it looked at the absolute effect size of randomista-driven growth without properly accounting for the relative budgets vs other interventions. Though if he is right, I might make the following arguments about short-term growth policies vs longtermism);
having more economically developed countries seems better for global political stability than having fewer, and so reduces the risk of global catastrophes;
having more economically developed countries seems better for global resilience to catastrophe than having fewer, and so reduces the magnitude of global catastrophes;
even "minor" (i.e. non-extinction) global catastrophes can substantially reduce our long-term prospects, so reducing their risk and magnitude is a potentially big deal;
tighter feedback loops and better data mean we can learn more about incidental optimisations than we can with longtermist work, including ones we didn't know we wanted to optimise for at the time; we build up a corpus of real-world data that can be referred to whenever we think of a new consideration;
tighter feedback loops also mean I expect the people working on it to be more effective at what they do, and less susceptible to (being selected by, or themselves being subject to) systemic biases/groupthink/motivated reasoning etc.;
the combination of a greater evidence base and tighter feedback loops has countless other ineffable reinforcing-general-good benefits, like a greater probability of shutting down when having zero or negative effect; better signalling; greater reasoning transparency; easier measurement of Shapley values rather than just counterfactuals; faster and better process refinement; etc.
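On the Shapley point, a toy sketch (hypothetical coalition values for two funders) of why Shapley attribution can be preferable to naive counterfactual credit:

```python
from itertools import permutations

# Hypothetical impact achieved by each coalition of two funders, A and B;
# their joint impact is more than the sum of their solo impacts.
value = {
    frozenset(): 0,
    frozenset({"A"}): 1,
    frozenset({"B"}): 1,
    frozenset({"A", "B"}): 4,
}

def shapley(player, players=("A", "B")):
    # Average the player's marginal contribution over all join orders.
    orders = list(permutations(players))
    total = 0
    for order in orders:
        before = frozenset(order[:order.index(player)])
        total += value[before | {player}] - value[before]
    return total / len(orders)

print(shapley("A"), shapley("B"))  # 2.0 2.0: shares sum to the joint impact of 4
# Naive counterfactual credit would give each funder 4 - 1 = 3,
# and 3 + 3 > 4, i.e. the joint impact gets double-counted.
```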
deworming seems to be beneficial for education (even if the magnitude might have been overstated)
Maybe a nitpick, but idk if this is suspicious convergence; I thought the impact on economic outcomes (presumably via educational outcomes) was the main driver for it being considered an effective intervention?
A quick note here. I don't think GiveWell (although I don't speak for them!) would claim that their interventions are necessarily the "best" for reliably improving global capacity; more that what their top charities do has less uncertainty, and is more easily measurable, than work by orgs in the areas you point out.

Open Philanthropy and 80,000 Hours indeed list many of your interventions near or at the top of their lists of causes for people to devote their lives to; 80,000 Hours in particular rates them higher than GiveWell's causes.
Hi Nick,

I do not think GiveWell would even claim that, as they are not optimising for reliably building global capacity. They "search for charities that save or improve [human] lives the most per dollar", i.e. they seem simply to be optimising for increasing human welfare. GiveWell also assumes the value of saving a life depends only on the age of the person who is saved, not on the country, which in my mind goes against maximising global capacity. Saving a life in a high-income country seems much better for improving global capacity than saving one in a low-income country, because economic productivity is much higher in high-income countries.

Saving a life in a high-income country seems much better for improving global capacity than saving one in a low-income country, because economic productivity is much higher in high-income countries.
To clarify, how are we defining "capacity" here? Even assuming economic productivity has something to do with it, it doesn't follow that saving a life in a high-income country increases it. For example, retired and disabled persons generally don't contribute much economic productivity. At the bottom of the economic spectrum in developed countries, many people outside of those categories consume significantly more economic productivity than they produce. If one is going to go down this path (which I do not myself support!), I think one has to bite the bullet and emphasize saving the lives of economically productive members of the high-income country (or kids/young adults who one expects to become economically productive).

Thanks for following up, Jason!

To clarify, how are we defining "capacity" here? Even assuming economic productivity has something to do with it, it doesn't follow that saving a life in a high-income country increases it. For example, retired and disabled persons generally don't contribute much economic productivity. At the bottom of the economic spectrum in developed countries, many people outside of those categories consume significantly more economic productivity than they produce.
To clarify:
I was thinking that global real gross domestic product could be a decent proxy for global capacity, as it represents global purchasing power.
In my comparison, I was assuming people were saved at the same age (in agreement with GiveWell's moral weights being a function of age too).
So, since high-income countries have higher real GDP per capita by definition, saving a life there would increase capacity more. I actually have a draft related to this. Update: published!
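To spell out the arithmetic behind this, here is a minimal sketch under the assumptions above (lives saved at the same age, global real GDP as the capacity proxy); the per-capita figures are round, hypothetical numbers, not data from the draft:

```python
# Round, hypothetical numbers; only the ratio matters for the argument.
def capacity_gain(real_gdp_per_capita, remaining_years):
    # Under the GDP proxy, the capacity gained from saving a life is the
    # person's remaining contribution to global real GDP.
    return real_gdp_per_capita * remaining_years

high_income = capacity_gain(real_gdp_per_capita=50_000, remaining_years=40)
low_income  = capacity_gain(real_gdp_per_capita=2_000,  remaining_years=40)

print(high_income / low_income)  # 25.0: a 25x difference at the same age
```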
If one is going to go down this path (which I do not myself support!), I think one has to bite the bullet and emphasize saving the lives of economically productive members of the high-income country (or kids/young adults who one expects to become economically productive).
I am also not so willing to go down this path (so I tend to support animal welfare interventions over ones in global health and development), but I tend to agree one would have to bite that bullet if one did want to use economic output as a proxy for whatever is meant by "capacity".
I think it may be important to draw a theory/practice distinction here. It seems completely undeniable in theory (or in terms of what is fundamentally preferable) that instrumental value matters, and so we should prefer that more productive lives be saved (otherwise you are implicitly saying to those who would be helped downstream that they don't matter). But we may not trust real-life agents to exercise good judgment here, or we may worry that the attempts would reinforce harmful biases, and so the mere attempt to optimize here could be expected to do more harm than good.
As explained on utilitarianism.net:

there are many cases in which instrumental favoritism would seem less appropriate. We do not want emergency room doctors to pass judgment on the social value of their patients before deciding who to save, for example. And there are good utilitarian reasons for this: such judgments are apt to be unreliable, distorted by all sorts of biases regarding privilege and social status, and institutionalizing them could send a harmful stigmatizing message that undermines social solidarity. Realistically, it seems unlikely that the minor instrumental benefits to be gained from such a policy would outweigh these significant harms. So utilitarians may endorse standard rules of medical ethics that disallow medical providers from considering social value in triage or when making medical allocation decisions.
But these instrumental reasons to be cautious of over-optimization don't imply that we should completely ignore the fact that saving people has instrumental benefits that saving animals doesn't.
So I disagree that accepting capacity-based arguments for GHD over AW forces one to also optimize for saving productive over unproductive people in a fine-grained way that many would find offensive. The latter decision procedure risks extra harms that the former does not. (I think recognition of this fact is precisely why many find the idea offensive.)
Thanks Vasco, I think if this is the case I was misunderstanding what was meant by global capacity. I haven't thought about that framing so much myself!