The effects on farmed animals and wild animals could make GiveWell top charities net harmful in the near term. See “Comparison between the hedonic utility of human life and poultry living time” and “Finding bugs in GiveWell’s top charities” by Vasco Grilo.
My own best guess is that they’re net good for wild animals based on my suffering-focused views and the resulting reductions of wild arthropod populations. I also endorse hedging in portfolios of interventions.
Thanks for pointing that out, Michael! I should note I Fermi estimated that accounting for farmed animals only decreases the cost-effectiveness of GiveWell’s top charities by 8.72%. However, this was without considering future increases in the consumption of animals throughout the lives of the people who are saved, which usually follow economic growth. I also Fermi estimated that the badness of the experiences of all farmed animals alive is 4.64 times the goodness of the experiences of all humans alive, which suggests saving a random human life results in a near-term increase in suffering.
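For concreteness, here is a toy sketch of how the two estimates above combine. The 8.72% reduction and the 4.64 welfare ratio are the figures from this comment; the baseline units are purely illustrative, not part of the original estimates:

```python
# Toy Fermi sketch combining the two estimates mentioned above.
# The 8.72 % reduction and the 4.64 welfare ratio come from the comment;
# the baseline value of 1.0 is an arbitrary illustrative unit.

baseline_cost_effectiveness = 1.0        # arbitrary units
farmed_animal_penalty = 0.0872           # 8.72 % reduction from farmed-animal effects

adjusted = baseline_cost_effectiveness * (1 - farmed_animal_penalty)
print(f"Adjusted cost-effectiveness: {adjusted:.4f}")  # 0.9128

# Aggregate welfare comparison: the badness of all farmed-animal experience
# is estimated at 4.64 times the goodness of all human experience.
human_welfare = 1.0                      # arbitrary positive units
farmed_animal_welfare = -4.64 * human_welfare

net_welfare = human_welfare + farmed_animal_welfare
print(f"Net near-term aggregate welfare: {net_welfare:.2f}")  # -3.64
```

The sign of the second result is what drives the “saving a random human life increases near-term suffering” conclusion: the aggregate comes out negative whenever the ratio exceeds 1.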
Related, this is likely a nitpick, but I think there might be some steelman-able views of “GiveWell top charities might seem net-negative on a longtermist lens, which could outweigh the shorter term implications”.
Personally, I have a ton of uncertainty here (I assume most do) and have not thought about this much. Also, I assume that from a longtermist lens, the net impact either way is likely small compared to that of more direct longtermist actions.
But I think that on many hard and complex issues, it’s really hard to say “there’s no good reason for one side” very safely. Often there are some good reasons on both sides.
I find that it’s often the case that there aren’t any highly coherent arguments raised for one side of an issue, but that’s a different question from whether intelligent arguments could be raised.
Yeah, someone might argue that the average person contributes to economic growth and technological development, and so accelerates and increases x-risk. So saving lives and increasing incomes could increase x-risk. Some subgroups of people may be exceptions, like EAs/x-risk people or poor people in low-income countries (who are far from the frontier of technological development), but even those cases are questionable.
I think one could reasonably judge GiveWell-style saving and improving lives to constitute reliable global capacity growth, and (if very skeptical of attempts at more “direct”, explicitly longtermist long-shots) think that this category of intervention is among our best longtermist options. I suggest something along these lines as a candidate EA “worldview” here.
I’d be curious to hear more about longtermist reasons to view GiveWell top charities as net-negative.
I think I’ve almost never heard this argued, and I’d be surprised if it were true.
[Edit: Sorry—I just saw your link, where this was argued. I think the discussion there in the comments is good]
- GiveWell selected very heavily for QALYs gained in the next 10-40 years. Generally, when you heavily optimize for one variable (short-term welfare), you trade off against others.
- As Robin Hanson noted, if you’d just save up money, you could often make a higher return than by donating it to people today.
- I believe that there’s little evidence yet to show that funding AMF/GiveDirectly results in large (>5-7% per year) long-term economic / political gains. I would be very happy if this evidence existed! (links appreciated, at any point)
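Hanson’s compounding point above can be illustrated with a quick calculation. The 5% annual return and 50-year horizon are assumptions for illustration only, not claims about actually achievable returns:

```python
# Illustrative sketch of the "invest now, give later" compounding argument.
# The 5 % annual return and 50-year horizon are hypothetical inputs.

annual_return = 0.05
years = 50

multiplier = (1 + annual_return) ** years  # grows to roughly 11.5x over 50 years
print(f"$1 invested grows to ${multiplier:.2f} after {years} years")

# For donating today to beat this, the long-run benefits of a present
# donation would need to compound faster than the invested alternative,
# which is roughly what the ">5-7% per year" threshold below refers to.
```

This is why the evidential bar in the next bullet is framed as a per-year growth rate rather than a one-off benefit.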
Some people have argued that EAs should fund this to get better media support, which would be useful in the long run, but this seems very indirect to me (though possible).
As for it being *possibly* net-negative:
- We have a lot of uncertainty about whether many actions are good or bad. Arguably, we would find many of them to be net-bad in the long run. (This is arguably more of a “meta-reason” than a “reason”.)
- If AI is divided equally among beings, we might prefer there to be a greater number of beings with values more similar to ours.
----REMINDER—PLEASE DON’T TAKE THIS OUT OF CONTEXT----
- Maybe marginal population now is net-harmful in certain regions. Perhaps these areas will have limited resources soon, and more people will lead to greater risks later on (poorer average outcomes, more migration and instability). Relatedly, I’ve heard arguments that the Black Death might have been net-positive, as it gave workers more power and might have helped lead to the Renaissance. (Again, this is SUPER SPECULATIVE AND UNCERTAIN, just a possibility.)
- If we think AI is likely to come soonish, we might want to preserve most resources for after it.
- This is an awkward/hazardous thing to discuss. If there were good arguments, perhaps we’d expect them not to be said publicly. This might increase the chances that good arguments would turn up, if one were to really investigate.
Again, I have an absolute ton of uncertainty on this, and my quick guess is more, “it’s probably a small-ish longtermist deal, with a huge probability spread”, than “I’m fairly sure it’s net-negative.”
I feel like it’s probably important for EAs to have reasonable/nuanced views on this topic, which is why I wrote these thoughts above.
I’ll get annoyed if the above gets badly taken out of context later, as has happened to many other EAs discussing such topics (see Beckstead’s dissertation). I added that ugly line in between to maybe help a bit here.
I should have done this earlier, but I’d flag that LLMs can summarize a lot of the existing literature on the topic, though most of it isn’t from EAs specifically. I would argue that many of these arguments are still about “optimizing for the long-term”; they just often use different underlying assumptions than EAs do.
https://chatgpt.com/share/b8a9a3f5-d2f3-4dc6-921c-dba1226d25c1
I’ll also add that many direct longtermist interventions have significant potential downsides too. It seems very possible to me that we’ll wind up finding that many were net-negative, or that there were good reasons for us to have realized they were net-negative in advance.
Yeah, that’s interesting, but the argument “we should consider just letting people die, even when we could easily save them, because they eat too much chicken,” is very much not what anti-EAs like Leif Wenar have in mind when they talk about GiveWell being “harmful”!
(Aside: have you heard anyone argue for domestic policies, like cuts to health care / insurance coverage, on the grounds that more human deaths would actually be a good thing? It seems to follow from the view you mention [not your view, I understand], but one doesn’t hear that implication expressed so often.)