Saving lives near the precipice
We may need to figure this out: how do charities’ effectiveness estimates change if everyone they save dies within 10 years? Which ones then become the most effective?
Epistemic status: I haven’t thought about this for long enough. The post is speculative and tries to unpack an intuitive understanding of mine that might be wrong or inconsistent. I still wanted to share it, since it might direct the community’s attention to something possibly worth thinking about.
I’m most uncertain about the claim that donations to longtermist causes don’t do anything at the moment because everything promising already gets funded. It could still be the case that the expected value of donations to longtermist orgs outweighs the impact of donations to GiveWell-recommended charities by orders of magnitude. Also, in some scenarios, having accumulated resources instead of spending them elsewhere might be what enables us to save the world.
Effective altruism is about doing the most good with the resources we have. We can’t save everyone or prevent all needless suffering, so we ought to save the most lives and prevent as much suffering as possible.
A large part of the community thinks there’s a significant chance of humanity going extinct in the next n years. Many brilliant people are working on preventing this. It seems that we’re already funding all the promising projects in this area we can find, that the bottleneck in x-risk reduction isn’t money, and that this is unlikely to change. If MIRI knew how to reduce existential risk with $100m, I think they would’ve quickly gotten that $100m. So what do we do with the money we have?
Even if we’re going extinct, helping people live longer and suffer less in the meantime seems like the right thing to work towards and donate to, if one’s work and donations don’t seem helpful for x-risk reduction. But how do we save years of life and prevent suffering effectively?
I default to donating to GiveWell. GiveWell estimates the value of preventing deaths, calculates the value the most effective charities produce and the years of life they save, and compares them. But these estimates and calculations would change dramatically if they accounted for the probability of everyone saved dying within n years.
The issue isn’t that the charities would become “less effective”: doing the most good with your donations is a question of finding the charities that do the most per dollar, not of whether they’re still “effective” in the sense that saving a life through them costs less than some threshold. The value of a life is enormous. If humanity makes it, I can imagine our descendants tearing apart stars to save a single life. For now, we have cheaper options available; but we don’t have that many resources and can’t prevent all deaths, so we need to save the most with what we have. That idea doesn’t depend on whether it costs $100 or $100,000 to save a year of life.
Rather, the issue is that the ordering of charities by effectiveness would change once you account for extinction risk. If two charities prevent malaria cases with the same cost-effectiveness in the same region among the same demographic groups, their relative positions wouldn’t change; but charities probably differ a lot in how they produce value, so they would get different discounts for their effects stopping within n years and for everyone they save dying. (GiveWell estimates the value of preventing a death depending on the person’s age; the value of having people live for the next n years probably has a different distribution over ages. GiveWell also calculates years of life saved by averting deaths at different ages; that changes too if the life-expectancy distribution is capped at n years. And some interventions don’t directly prevent deaths at all, so calculations of their impact would change by yet different amounts.)
I’m not sure how difficult it would be to estimate the effect of everyone dying on the ordering of the charities. At minimum, it would mean going through a couple of GiveWell’s spreadsheets, recalculating the relevant distributions (e.g., life expectancy) and the numbers that follow from them, intuitively estimating some other values, and putting these back into the spreadsheets. Maybe this will turn out to produce roughly the same discount for all the charities; in that case, it probably isn’t worth looking into further. I suspect the discounts would differ, though, and then some of the most effective charities might be ones GiveWell doesn’t currently even recommend.
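To make the recalculation concrete, here is a minimal sketch with made-up numbers (not GiveWell’s actual figures) of how capping the benefit by an extinction-time distribution changes the expected years of life gained from averting a death at different ages:

```python
# A minimal sketch with made-up numbers (not GiveWell's actual figures).
# p_extinct_by[t] = cumulative probability that an existential catastrophe
# has happened by the end of future year t+1.

def adjusted_years_saved(remaining_life_expectancy, p_extinct_by):
    """Expected years of life gained from averting a death now, if the
    beneficiary only gets each future year when humanity is still around."""
    years = 0.0
    for t in range(int(remaining_life_expectancy)):
        p = p_extinct_by[min(t, len(p_extinct_by) - 1)]
        years += 1.0 - p
    return years

# Illustrative distribution: extinction probability ramps up to 50% over 20 years.
p_extinct_by = [min(0.5, 0.025 * (t + 1)) for t in range(80)]

for age, remaining in [(1, 70), (30, 45), (60, 20)]:
    print(f"age {age}: {remaining} unadjusted years -> "
          f"{adjusted_years_saved(remaining, p_extinct_by):.1f} adjusted years")
```

Under this (made-up) distribution, averting a child’s death goes from being worth 3.5× as many life-years as averting a 60-year-old’s death to about 2.7× as many, which is the kind of shift that could reorder charities with different beneficiary age profiles.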
Ideally, I would like to see a tool where I input my probability distribution over the year humanity stops existing and get back a list of charities sorted by effectiveness. Alternatively, some experts could input their probability distributions, and a median of those distributions would define the sorting and the recommendations.
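A very rough sketch of what such a tool might look like; the charity profiles, costs, and extinction probabilities below are illustrative assumptions, not real estimates:

```python
# Rough sketch of the proposed tool; all charity profiles and numbers are made up.
from dataclasses import dataclass

@dataclass
class Charity:
    name: str
    cost_per_death_averted: float  # dollars (illustrative)
    remaining_life_years: float    # unadjusted years gained per death averted

def life_years_per_dollar(charity, p_extinct_by=None):
    """Life-years per dollar, optionally discounted by a cumulative
    extinction probability per future year."""
    horizon = int(charity.remaining_life_years)
    if p_extinct_by is None:
        years = float(horizon)
    else:
        years = sum(1.0 - p_extinct_by[min(t, len(p_extinct_by) - 1)]
                    for t in range(horizon))
    return years / charity.cost_per_death_averted

charities = [
    Charity("Mostly averts child deaths", 5_000, 70),
    Charity("Mostly averts adult deaths", 3_500, 35),
]
# Example user input: extinction probability ramping up to 90% over 15 years.
p_extinct_by = [min(0.9, 0.06 * (t + 1)) for t in range(80)]

for label, dist in [("unadjusted", None), ("extinction-adjusted", p_extinct_by)]:
    ranked = sorted(charities, key=lambda c: life_years_per_dollar(c, dist),
                    reverse=True)
    print(label, [(c.name, round(life_years_per_dollar(c, dist), 5)) for c in ranked])
```

With these made-up numbers, the ranking flips: the charity that mostly averts adult deaths comes out ahead once the extinction distribution is applied, even though it looks less cost-effective unadjusted. Real models would be far more detailed, but this is the kind of re-sorting the tool would do.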
If I have spare money that I can’t easily direct towards reducing x-risk, the next best thing seems to be reducing suffering and delaying deaths; but GiveWell’s recommendations may no longer hold, for the reasons above.
Someone should look into this. It seems plausible that (re)evaluating the charities with appropriate discounts for the probability of an existential catastrophe happening within n years is worth doing, and that it would direct the funding allocated to global health and development towards making more impact.
A model where you think x-risk in the next few decades is a very important problem, but donating to non-x-risk charities now is the most impactful use of money, seems weird to me. Even if x-risk work isn’t constrained by money at the moment, that could plausibly change between now and the global catastrophe. For example, unless you are confident in a fast AI takeoff, there will probably be a time in the future when it’s much more effective to lobby for regulation than it is now (because it will be easier to do and easier to know what regulation is helpful).
It is quite likely that you’re right! I think it’s just something that should be explicitly thought about; it seems like an uncertainty that wasn’t really noticed. If x-risk materializes in the next few decades, some of the money currently directed to interventions fighting deaths and suffering might be better spent on charities that fight them more effectively under that assumption.
Comments and DMs are welcome, including on the quality of writing (I’m not a native English speaker and would appreciate any corrections).
I definitely agree people should be thinking about this! I wrote about something similar last week :-)
Awesome!
I didn’t consider spending speed here. It highlights another important part of the analysis one should do when considering neartermist donations conditional on short timelines. Depending on whether humanity solves alignment, you not only want to spend the money before a superintelligence appears, but also might maximize the impact by, e.g., delaying deaths until then.
On the one hand, I’m really against this being a part of how we evaluate interventions for currently living people.
On the other hand, I’m not exactly sure why, and the idea definitely follows from the movement’s current models. So it’s a discussion very much worth having. (Hence, upvoted.)
If we know that an extinction event is inevitable soon, like an asteroid impact, I think it will be reasonable to try to create remnants, perhaps on the Moon, that could provide possible aliens with information about humanity or even help to resurrect human beings.
I suspect downvoters are misunderstanding “know” and “will be”; I think Turchin meant “If we knew” and “it would [then] be reasonable” (subjunctive).
I think that downvoters didn’t like the word “resurrection”.
Can confirm this is the main reason for my downvote.