Mini summaries of GPI papers

I have previously written about the importance of making global priorities research accessible to a wider range of people. Many people don’t have the time or desire to read academic papers, but the findings of the research are still hugely important and action-relevant.

The Global Priorities Institute (GPI) has started producing paper summaries, but even these might have somewhat limited readership given their length. They are also time-consuming for GPI to develop and aren’t all in one place.

With this in mind, and given my personal interest in global priorities research, I have written a few mini-summaries of GPI papers. The extra lazy / time-poor can read just “The bottom lines”. I would welcome feedback on whether these samples are useful and whether I should continue to make them—working towards a post with all papers summarised. It is impossible to cover everything in just a few bullet points, but I hope my summaries successfully inform readers of the main arguments and key takeaways. Please note that for the final two summaries I made use of the existing GPI paper summaries.

On the desire to make a difference (Hilary Greaves, William MacAskill, Andreas Mogensen and Teruji Thomas)

The bottom line: Preferring to make a difference yourself is in deep tension with the ideals of benevolence. If we are to be benevolent, we should solely care about how much total good is done. In practice, this means avoiding tendencies to diversify individual philanthropic portfolios or to neglect mitigation of extinction risks in favour of neartermist options that seem “safer”.

My brief summary:

  • One can consider various types of “difference-making preferences” (DMPs), where one wants to be the one doing good. One example is thinking of the difference one makes in terms of one’s own causal impact. This can make the world worse, e.g. going to great lengths to be the one to save a drowning person even when other people are better placed to do so. This way of thinking is therefore in tension with benevolence.

  • One can instead hope to have higher outcome-comparison impact, where one compares how much better an outcome is if one acts, compared to if one does nothing. This would not recommend insisting on being the one to save the drowning person, which seems the correct conclusion. However, the authors note that thinking of doing good in this way can still be in tension with benevolence. For example, one might prefer that a recent disaster were severe rather than mild so that one can do more good by helping affected people.

  • Under uncertainty, DMPs are also in tension with benevolence, in an action-relevant way. For example, being risk averse to the difference one individually makes sometimes means choosing an action that is (stochastically) dominated by another action—essentially choosing an action that is ‘objectively’ worse under uncertainty, with respect to doing good.

  • This can also be the case when people interact—the authors show that the presence of DMPs in collective action problems with uncertainty can lead to sub-optimal outcomes. Importantly, they show that the preferences themselves are the culprits. This is also the case with DMPs under ambiguity aversion (i.e. preferring known risks over unknown risks).

  • One could try to rationalise DMPs by saying people are trying to achieve ‘meaning’ in their lives. But people who exhibit DMPs are generally motivated by the ideal of benevolence. It therefore seems that such people, if they really do want to be benevolent, should give up their DMPs.

  • See paper here.

The unexpected value of the future (Hayden Wilkinson)

The bottom line: An undefined expected value of the future doesn’t invalidate longtermism. The author develops a theory to handle undefined expected values, and this theory leads to an even stronger longtermist conclusion than the one we started with.

My brief summary:

  • Standard arguments for longtermism rely on a large expected value of the future. But there are pretty credible arguments that the expected value of the future is undefined! In this case, expected value theory is rendered useless and we need to find an alternative theory if we are to choose between different actions.

  • One theory that works in important scenarios is expected utility theory with sensitivity to risk, because it reduces the importance of extreme outcomes in decision-making. But there are compelling arguments for risk neutrality—so can we find a theory that retains risk neutrality?

  • The author builds on previous work to develop an adequate theory of value that does so—one that considers value differences between different actions, and essentially ignores outcomes that are sufficiently unlikely to occur.

  • This theory strongly supports a longtermist conclusion—in fact it says it is infinitely better to improve the far future than the present. The case for longtermism becomes even stronger than the one we started with!

  • See paper here.

Longtermism, aggregation, and catastrophic risk (Emma J. Curran)

The bottom line: If one is sceptical about aggregative views, where one can be driven by sufficiently many small harms outweighing a smaller number of large harms, one should also be sceptical about longtermism.

My brief summary:

  • Longtermists generally prefer reducing catastrophic risk to saving the lives of people today. This is because, even though you would be reducing the probability of harm by only a small amount by focusing on catastrophic risk, the expected vastness of the future means more good is done in expectation.

  • This argument relies on an aggregative view where we should be driven by sufficiently many small harms outweighing a smaller number of large harms. However, there are some cases where we might say such decision-making is impermissible, e.g. letting a man get run over by a train instead of pulling a lever that saves the man but makes lots of people late for work. One argument for why it’s better to save the man from death is the separateness of persons—there is no actual person who experiences the sum of the individual harms of being late—so there can be no aggregate complaint.

  • The author shows that a range of non-aggregative views (where we are not driven by sufficiently many small harms outweighing fewer large ones), under different treatments of risk, undermine the case for longtermism. On these views, future people typically have only extremely weak claims to assistance.

  • See paper here.

The case for strong longtermism (Hilary Greaves and William MacAskill)

The bottom line: Humanity’s future could be vast, and we can influence its course. That suggests the truth of strong longtermism: impact on the far future is the most important feature of our actions today.

My brief summary:

  • The expected number of future lives is vast. You only need non-negligible probabilities of humanity surviving until the earth becomes uninhabitable, spreading into space, or creating digital sentience.

  • We can predictably improve the far future by steering between persistent states that differ in long-term value. A persistent state is one which – upon coming about – tends to persist for a long time. One way to steer between persistent states is to reduce the risk of premature human extinction—which would therefore be a pressing goal given the vastness of the future.

  • Under a person-affecting view of population ethics, where we care about making lives good but not about making good lives, reducing risks of extinction isn’t important. But there are alternative interventions that would still be good for the long-term future—such as guiding the development of artificial superintelligence (ASI). ASI is likely to be influential and long-lasting, so ensuring it has the right values would be good for the long-term future under all plausible moral views.

  • Uncertainty does not undermine the case for strong longtermism because we also have ‘meta’ options for improving the far future such as conducting further research and investing resources for use at some later time.

  • The authors don’t think that cluelessness about far-future effects of our actions or the fact that strong longtermism might hinge on tiny probabilities of enormous values (fanaticism) undermines the case for strong longtermism. Fanaticism is one of the most pressing objections, but denying fanaticism has implausible consequences and the probabilities might not be so small that fanaticism becomes an issue.

  • As well as strong longtermism being justified on an axiological basis (making a claim about the value of our actions) we can also justify it on deontic grounds (in terms of what we should do). The authors argue for a deontic justification, as improving the far future is far more valuable than focusing on the short-term, can be done at comparatively small cost, and does not violate any serious moral constraints. These conditions mean we should be driven to act by strong longtermism.

  • See longer summary here and paper here.

The Epistemic Challenge to Longtermism (Christian Tarsney)

The bottom line: If we are happy with expected value theory and don’t mind being driven by very small probabilities, longtermism holds up well. However, if we don’t like being fanatical, the epistemic challenge against longtermism seems fairly serious.

My brief summary:

  • One broad class of strategies for improving the long-term future are “persistent-difference strategies” (PDSs), where one tries to put the world into a better state than it would otherwise have been in, and hopes that this state persists for a long time.

  • But one might think it is too difficult to identify ways to do this. For example, such strategies might be threatened by “exogenous nullifying events” (ENEs), which nullify the effect of our PDSs. Negative ENEs, such as existential catastrophes, put the world into a worse persistent state.

  • If we assume that we will settle star systems one day (cubic growth) then, provided that the (constant) probability of ENEs in the far future is low enough, a typical longtermist intervention should be better than a neartermist one. This is because potential value would be huge. The author thinks the probability of ENEs is likely to be low enough for the longtermist intervention to win.

  • However, a model in which we don’t spread to the stars and we eventually reach zero growth (steady state model) is more pessimistic as we would need an unrealistically low probability of negative ENEs occurring in the far future for a longtermist intervention to beat a neartermist one. This rests on conservative assumptions though, and if we relax these the case for longtermism becomes more credible again.

  • The case for longtermism is also strengthened once we account for uncertainty. For example, we might consider that cubic growth is very unlikely, and also that it results in only a very small probability of very high value (like a Dyson sphere). Even in this case, despite arguably very small probabilities, the expected value of longtermist interventions still easily beats neartermist ones, because the potential value is huge.

  • So if we are happy with expected value theory and don’t mind being driven by very small probabilities, longtermism seems to hold up well. However, if we don’t like being fanatical, the epistemic challenge against longtermism seems fairly serious.

  • See longer summary here and paper here.
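The expected-value arithmetic behind the fanaticism worry can be made concrete with a toy calculation. The numbers below are illustrative assumptions of mine, not figures from Tarsney’s paper:

```python
# Toy expected-value comparison with made-up, illustrative numbers.

# Neartermist option: a near-certain, modest payoff.
p_near, v_near = 1.0, 1.0      # one unit of good, guaranteed

# Longtermist option: a tiny probability of an astronomically large payoff.
p_long, v_long = 1e-9, 1e13    # hypothetical probability and value

ev_near = p_near * v_near      # = 1.0
ev_long = p_long * v_long      # ≈ 10000.0

# Despite the minuscule probability, the longtermist option wins in expectation.
print(ev_near, ev_long)
```

This is the structural point: under risk-neutral expected value theory, a sufficiently large payoff swamps an arbitrarily small probability of achieving it, which is exactly why rejecting fanaticism puts pressure on the longtermist argument.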