About the Neglectedness of Longtermism and Future Work

Quantified risks of human extinction always play out over a certain stretch of time. For example, Ord (2020, 87) estimates the probability of extinction from all natural risks to be below 0.05 percent per century. Usually extinction risks are calculated over relatively long timeframes, maybe because probabilities are just too small to parse on shorter timescales. The fact that the problems are long-term is of course also the reason that they are so important: If we expected the risk of nuclear warfare to drop to zero in a few years, it would be much less pressing a problem.

When determining how neglected longtermist interventions are, people usually refer to work being done at present. I suppose it is fitting to start with The Precipice. (And I don’t mean to single out any of these writers or organisations, which are all great. They just serve to illustrate the premise.) Ord (2020, 57–58) writes:

The international body responsible for the continued prohibition of bioweapons (the Biological Weapons Convention) has an annual budget of just $1.4 million – less than the average McDonald’s restaurant. The entire spending on reducing existential risks from advanced artificial intelligence is in the tens of millions of dollars, compared with the billions spent on improving artificial intelligence capabilities. While it is difficult to precisely measure global spending on existential risk, we can state with confidence that humanity spends more on ice cream every year than on ensuring that the technologies we develop do not destroy us.

The authors of utilitarianism.net remark: “Work to ensure that humanity’s long-run future goes well is not only very important but also very neglected. [… O]ur generation systematically neglects the interests and wellbeing of the many individuals that will exist in the future.” 80,000 Hours’ introductory page on longtermism reads: “In fact, it turns out that many of the ways to help future generations are also highly neglected. This is exactly what you’d expect – the present generation has a much greater interest in helping itself rather than improving the future.”[1] Reports of funding gaps in effective altruism, like this one by 80,000 Hours, as far as I can tell, compare longtermist and neartermist cause areas directly, looking only at resources allocated in the near term.

These assertions are all, as far as I can tell, one hundred percent true. But I think they leave something important out. I’m half afraid that that something is really obvious. But (1) it was not obvious to me at first, and (2) having looked around for mentions of longtermism’s neglectedness, the frame I have usually come across is something like, “Here’s this really important and long-lasting problem, and almost no one is doing anything about it right now.”[2]

Summary

When discussing the importance of long-term problems, we usually consider very long timeframes, but when discussing the neglectedness of those problems, we usually consider only resources allocated in the near term. This produces a skewed picture. A better way of comparing problems on different timeframes may be to estimate marginal cost-effectiveness directly. Alternatively, we can convert the units so that they are always the same, or perhaps model expected utility to take projected trends in importance, tractability and neglectedness (or any other variables we need) into account.

Argument

Say Nadja’s parents want to buy her a good university education. Assume that they think their grandchildren, great-grandchildren and all future descendants are just as important as Nadja. They are choosing between two interventions: paying for Nadja’s university education, or paying for the university education of some distant descendant. (Assume they’re certain that this descendant will exist.) Maybe they are thinking something along the lines of, “Nadja is already getting a lot of support from us, but no one is doing anything for that distant descendant. Clearly helping the descendant is a far more neglected problem, and we should pay for the descendant’s university education instead of Nadja’s.”

Is their thinking sound? Not really. That future descendant will have parents of her own who will do their best to support her. The second intervention solves a problem in the long-term future, so when determining how neglected that problem is, Nadja’s parents need to take into account any future work that might be done to solve it. Looking only at resources allocated right now means seeing only part of the picture.

Nadja’s parents may still have good reasons to allocate more money to help that descendant. For example, there may be problems that take place, or can only be addressed, before the descendant, her parents or even her grandparents are born. Those problems may really be neglected compared to Nadja’s. But some of the things they can do for the descendant would surely also have been done by her own parents and grandparents.

Objection: This is pretty obvious. When people talk about the neglectedness of longtermism and mention only work being done today, they are not doing an explicit expected value calculation – they are only giving a rough idea of how much (or rather, how little) work is being done on longtermism. Reply: Not to me it wasn’t! I think this would be fine if it weren’t for the fact that the stuff left out consistently makes longtermism look more neglected than it is.

Objection: Isn’t this just the old criticism about letting future generations take care of future problems? Michelle Hutchinson puts it well: “[Helping people alive today can be harder than helping future people] in part due to the sense that if we don’t take actions to improve the future, there are others coming after us who can. By contrast, if we don’t take action to help today’s global poor, those coming after us cannot step in and take our place. The lives we fail to save this year are certain to be lost and grieved for.”[3] Reply: Kind of? Hutchinson solves that problem in part by reminding herself that discounting doesn’t make any sense – that future people’s lives matter just as much as today’s. I think that’s true and relevant to prioritising cause areas. I just think we should be clear that the second consideration (no discounting) doesn’t negate the first consideration (the work that future generations will do) but compensates for it.

Objection: Isn’t this the argument that Phil Torres makes? Reply: No. I take Torres to argue that longtermism puts such a high value on existential risk and the long-term future that everything happening now becomes trivial in comparison; but the problems we have now are not trivial; therefore longtermism is wrong or bad. That is an argument about importance, not neglectedness.

Objection: When allocating resources, it is more common to compare “buckets” of problems (like one global health and poverty bucket, one longtermism bucket and so on) than it is to compare near-term and long-term problems directly. Reply: If this is the case, it probably still involves making judgments about which bucket is more neglected. And because the buckets contain problems on different average timeframes, it is still important to account for future work.

Objection: Resources allocated now are likely to have a causal impact on resources allocated in future. Maybe the question is less about whether or not to spend a lump sum now, and more about whether or not to ramp up spending in a durable way. Reply: This seems true. I’m not sure how to account for this.

To be clear, I am not saying that we should do any discounting, or that longtermism is not important. I think long-term problems are very important indeed, and that the world should do much more to solve them. I am only saying that we should take future work into account when comparing their neglectedness to that of near-term problems.

Alternatives

What is a better way of thinking about it?

One is to estimate marginal cost-effectiveness directly. Neglectedness is after all only important in so far as it affects marginal cost-effectiveness. Here is an example of a direct estimate, from Greaves and MacAskill (2019):

Expert judgment [...] tends to put the probability of existential catastrophe from [artificial superintelligence] at 1-10% [...] we think that even a highly conservative assessment would assign at least a 0.1% chance to an AI-driven catastrophe [...] over the coming century. We also estimate that $1 billion of carefully targeted spending would suffice to avoid catastrophic outcomes in (at the very least) 1% of the scenarios where they would otherwise occur. On these estimates, $1 billion of spending would provide at least a 0.001% absolute reduction in existential risk. That would mean that every $100 spent had, on average, an impact as valuable as saving one trillion (resp., one million, 100) lives on our main (resp. low, restricted) estimate – far more than the near-future benefits of bednet distribution.

The $1 billion is competing with the money that could be spent on bednet distribution. So it is meant to be spent in the near term. The assumption is that none of the 1% of scenarios – the scenarios nullified by the $1 billion – would have been nullified by someone else later on had the $1 billion not been spent. I think this is what “where they would otherwise occur” means. So the $1 billion presumably nullifies many more than just 1% of scenarios; it is just that the rest would have been nullified later on anyway, such that only 1% of scenarios are uniquely nullified by this $1 billion spent right now. (I have no idea whether this 1% figure is reasonable or not.)
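To make the arithmetic explicit, here is a minimal sketch in Python. It is my own reading of the quoted figures; the variable names, and the final inference about the implied number of future lives, are mine rather than the paper’s:

```python
# Rough arithmetic behind the quoted estimate (my reading of the figures,
# not a reproduction of Greaves and MacAskill's own calculation).

baseline_risk = 0.001           # conservative 0.1% chance of AI-driven catastrophe this century
share_uniquely_averted = 0.01   # 1% of those scenarios averted only because of this $1 billion
spending = 1e9                  # dollars

absolute_risk_reduction = baseline_risk * share_uniquely_averted
print(absolute_risk_reduction)  # 1e-05, i.e. the quoted 0.001% absolute reduction

risk_reduction_per_100_dollars = absolute_risk_reduction / (spending / 100)
print(risk_reduction_per_100_dollars)  # 1e-12 per $100 spent

# The quoted "one trillion lives per $100" on the main estimate then corresponds to an
# assumed number of future lives on the order of 1e12 / 1e-12 = 1e24.
```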

Estimating marginal cost-effectiveness directly in this way seems ok to me. But it may not always be feasible. In the example above, it might be much harder to estimate how much risk is uniquely addressed by the intervention than to first estimate how much risk is addressed overall and then how much of that would have been addressed in future anyway. Like, say we are considering installing an asteroid deflection system. It seems easier to estimate both the probability that it would successfully deflect an asteroid in the next century and the likelihood that we install some other similar or conflicting system in future than to directly estimate how large a portion of asteroid impact scenarios the system would address that wouldn’t have been addressed otherwise. Plus, talking about neglectedness is a useful way of illustrating the concept of diminishing returns.
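To illustrate that two-step estimate, here is a hedged sketch. Every number in it is made up purely for illustration, and the simple multiplication is a simplification (it ignores, for example, the window of time before any later system would come online):

```python
# Hypothetical two-step estimate for the asteroid deflection example.
# All numbers are invented for illustration.

p_deflects = 0.9        # chance the system successfully deflects an incoming asteroid this century
p_replaced_later = 0.5  # chance a similar system would have been installed later anyway

# Fraction of impact scenarios uniquely addressed by installing the system now:
# the system has to work, and it has to cover scenarios no later system would have covered.
uniquely_addressed = p_deflects * (1 - p_replaced_later)
print(uniquely_addressed)  # 0.45
```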

Another way is to convert everything to the same unit, asking ourselves roughly, “How much is spent per year/decade/century, and what’s the scale of the problem each year/decade/century?”[4] Care needs to be taken when doing this, but it probably works fine if we assume a constant risk and a constant amount of funding. (Even with a constant existential risk, earlier years are more important. There is always the chance that disaster happens before we get to spend our money.) But (1) this might not always work for problems like AI risk, where the probability of something very bad happening this year is extremely low, but the probability of something very bad happening this century is much higher. And (2) it might not always work if we expect the problem to get more (or fewer) resources allocated to it in future.
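For what it’s worth, here is a small sketch of that unit conversion under a constant-hazard-rate assumption; the 10 percent per-century figure is arbitrary and only there to show the mechanics:

```python
# Converting a per-century risk into a per-year risk, assuming a constant hazard rate.
# The 10% per-century figure is arbitrary, chosen only to illustrate the conversion.

p_century = 0.10
p_year = 1 - (1 - p_century) ** (1 / 100)
print(p_year)  # ~0.00105, i.e. roughly 0.1% per year

# Spending can be put on the same footing (e.g. dollars per year versus dollars per century),
# which only works cleanly if both the risk and the funding are roughly constant over time.
```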

Another alternative might be to make projections of risk and spending (or any other variables that go into an expected value calculation) over the timeframe in question. Then we could plot the expected value of an intervention over that timeframe. This may be similar to what Open Philanthropy is doing to prioritise near-term causes.
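As a rough illustration of what such a projection could look like, here is a minimal sketch. Every functional form and number in it (the growth rate of others’ spending, the diminishing-returns curve, the per-year risk) is an assumption of mine, chosen only to show the mechanics:

```python
# A toy projection: assume a trajectory of per-year risk and of spending by others,
# then ask what a fixed extra grant is worth in each year. All assumptions are mine.

import math

BASELINE_RISK = 0.001      # assumed constant per-year risk of catastrophe
VALUE_OF_SURVIVAL = 1.0    # value (arbitrary units) of avoiding catastrophe in a given year

def others_spending(year):
    """Assumed spending by the rest of the world, growing 3% per year (pure assumption)."""
    return 1e8 * math.exp(0.03 * year)

def risk_reduction(total_spending):
    """Assumed diminishing returns: each doubling of spending removes a further 10% of remaining risk."""
    return 1 - 0.9 ** math.log2(1 + total_spending / 1e8)

def marginal_value(year, extra_grant=1e7):
    """Expected value of adding extra_grant in a given year, net of what others do anyway."""
    p_survive_until_then = (1 - BASELINE_RISK) ** year   # disaster may strike before we spend
    base = risk_reduction(others_spending(year))
    with_grant = risk_reduction(others_spending(year) + extra_grant)
    return p_survive_until_then * BASELINE_RISK * (with_grant - base) * VALUE_OF_SURVIVAL

# Compare the same grant spent now versus in fifty years:
print(marginal_value(0), marginal_value(50))
```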

References

Greaves, Hilary, and William MacAskill. 2019. “The Case for Strong Longtermism.” GPI Working Paper. Global Priorities Institute, University of Oxford.
Ord, Toby. 2020. The Precipice: Existential Risk and the Future of Humanity. Hachette Books.


  1.

    The sentiment is also echoed in Avital Balwit’s Response to Recent Criticisms of Longtermism: “[L]ongtermist work seems deeply neglected. There are very few people working on existential risk mitigation. This means that each additional person causes a relatively large proportional increase in the amount of work that’s being done. This connects back to how longtermists interact with climate change. Climate change is less neglected than other risks like nuclear war or biosecurity. For example, there are currently no major funders funding nuclear security, whereas climate change gets $5-9 billion from philanthropists every year, and hundreds of billions from governments and the private sector.”

    Another example is Open Philanthropy’s reports on global catastrophic risks, nuclear weapons policy, biosecurity and AI alignment. Looking through these, I see no mention of work that is expected to take place in the future, yet the associated risks are clearly distributed over long time horizons. But the folks at Open Philanthropy have probably taken future work into account when prioritising, even if they don’t mention it.

  2.

    I have, however, come across many attempts at answering the questions “How much should a longtermist spend now versus in the future?” and “Why is longtermism so neglected compared to neartermist causes?”

  3.

    This is different from “punting to the future”. Punting to the future means holding on to or investing resources in order to make a greater impact on long-term problems in the future. The objection I refer to here says we should instead spend resources on near-term causes now.

  4.

    Note that not all years are equally important even if we do assume a constant probability. There is a risk that something goes wrong this year and we cannot address this risk in the future.