I think one could reasonably judge GiveWell-style saving and improving lives to constitute reliable global capacity growth, and (if very skeptical of attempts at more 'direct', explicitly longtermist long-shots) think that this category of intervention is among our best longtermist options. I suggest something along these lines as a candidate EA 'worldview' here.
I'd be curious to hear more about longtermist reasons to view GiveWell top charities as net-negative.
> I think one could reasonably judge GiveWell-style saving and improving lives to constitute reliable global capacity growth

I think I've almost never heard this argued, and I'd be surprised if it were true.
[Edit: Sorry, I just saw your link, where this was argued. I think the discussion there in the comments is good.]
- GiveWell selected very heavily for QALYs gained in the next 10-40 years. Generally, when you heavily optimize for one variable (short-term welfare), you trade off against others.
- As Robin Hanson noted, if you'd just save up money, you could often make a higher return than by donating it to people today (a toy version of this comparison is sketched after this list).
- I believe there's little evidence yet to show that funding AMF/GiveDirectly results in large (>5-7% per year) long-term economic/political gains. I would be very happy if this evidence existed! (Links appreciated, at any point.)
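To make the Hanson point concrete, here's a minimal sketch of the compounding comparison. All the numbers (a 7% real market return, a 5% compounding rate on an immediate donation's benefits, a 30-year horizon) are illustrative assumptions picked for the example, not estimates from any literature:

```python
# Toy "give now vs. invest-then-give" comparison. All rates are assumptions.

def compounded(amount: float, rate: float, years: int) -> float:
    """Value of `amount` after compounding annually at `rate` for `years`."""
    return amount * (1 + rate) ** years

donation = 1_000.0
horizon = 30  # assumed years until invested funds are finally deployed

# Assume an immediate donation's benefits compound at 5%/year
# (e.g. via recipients' economic gains), vs. a 7%/year market return.
give_now = compounded(donation, 0.05, horizon)
invest_then_give = compounded(donation, 0.07, horizon)

print(f"Give now (5% benefit growth):   {give_now:,.0f}")
print(f"Invest at 7%, give in year 30:  {invest_then_give:,.0f}")
```

With these (assumed) rates, investing ends up roughly 1.8x ahead after 30 years. The argument therefore turns entirely on whether a donation's long-run benefits compound faster than market returns, which is exactly the >5-7%/year evidence question above.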
Some people have argued that EAs should fund these charities to get better media support, which would be useful in the long run, but this seems very indirect to me (though possible).
As for it being *possibly* net-negative:
- We have a lot of uncertainty about whether many actions are good or bad. Arguably, we would find many of them to be net-bad in the long run. (This is arguably more of a 'meta-reason' than a 'reason'.)
- If the gains from AI are divided equally among beings, we might prefer that there be a greater number of beings with values more similar to ours.
----REMINDER: PLEASE DON'T TAKE THIS OUT OF CONTEXT----
- Maybe marginal population growth now is net-harmful in certain regions. Perhaps these areas will have limited resources soon, and more people will lead to greater risks later on (poorer average outcomes, more immigration and instability). Relatedly, I've heard arguments that the Black Death might have been a net-positive, as it gave workers more power and might have helped lead to the Renaissance. (Again, this is SUPER SPECULATIVE AND UNCERTAIN, just a possibility.)
- If we think AI is likely to come soonish, we might want to preserve most resources for after it arrives.
- This is an awkward/hazardous thing to discuss. If there were good arguments here, perhaps we'd expect them to go unsaid. That, in turn, raises the chance that good arguments would be found if one were to really investigate it.
Again, I have an absolute ton of uncertainty on this, and my quick guess is more 'it's probably a small-ish longtermist deal, with a huge probability spread' than 'I'm fairly sure it's net-negative.'
I feel like it's probably important for EAs to have reasonable/nuanced views on this topic, which is why I wrote these thoughts above.
I'll be annoyed if the above gets badly taken out of context later, as has been done to many other EAs discussing such topics (see what happened with Beckstead's dissertation). I added that ugly line in between to maybe help a bit here.
I should have flagged this earlier: LLMs can summarize a lot of the existing literature on the topic, though most of it isn't from EAs specifically. I would argue that many of these arguments are still about 'optimizing for the long term'; they just often rest on different underlying assumptions than EAs use.
https://chatgpt.com/share/b8a9a3f5-d2f3-4dc6-921c-dba1226d25c1
I'll also add that many direct longtermist interventions have significant potential downsides too. It seems very possible to me that we'll wind up finding that many were net-negative, or that there were good reasons for us to realize they were net-negative in advance.