Patient Longtermism as a benchmark
Meta: I haven’t seen this framing spelt out in these terms and think it’s a useful way of integrating considerations raised by patient longtermism into one overall EA worldview.
The considerations elucidated by patient longtermism, namely that our resources can “go further” in the future, are important. There is an analogy here to Singer’s drowning child argument, which says that, all else equal, you shouldn’t prefer helping someone who is spatially close to you over someone who is spatially far away. In other words, when evaluating different altruistic actions, you should only consider their “impact potential” and not, for example, your geographical distance from the moral patient. In Singer’s case, inequalities in global levels of development mean that money can go further (i.e. have more altruistic impact) abroad. In the case of patient longtermism, the fact that interest rates are higher than the rate at which creating additional welfare becomes more expensive over time means that money can go further in the future.
Personally, I feel generally very happy to defer to future beings’ judgement about what is best to do, since knowledge and wisdom are likely to have increased by then. Because of that (and abstracting from some other complications, some of which I will touch on later), I feel happy to invest resources today in a way that lets them accumulate over time so that, eventually, future beings have more resources at hand for doing good, according to their judgement of how best to do that.
This is why I think estimates based on considerations of patient longtermism can usefully function as a benchmark against which to compare present-day altruistic actions. [1]
(Of course, all of this still abstracts away from a lot of real-world complexity, some of which is decision-relevant. A benchmark of the kind I’m suggesting ought therefore to be used with care, as one among many inputs that weigh on one’s decision.)
[1] An early example of this might be Philip Trammell’s calculation (see “Discounting for Patient Philanthropists” or his 80,000 Hours interview), which says that if interest rates continue to be higher than the rate at which creating additional welfare becomes more expensive, then in approximately 279 years giving the invested money to rich people in the developed world would (still) create more welfare than giving the initial amount of money to the world’s poorest today.
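For concreteness, here is a minimal back-of-the-envelope sketch of the kind of calculation involved. The numbers are illustrative assumptions of mine, not Trammell’s actual parameters: I assume a hypothetical multiplier for how much further a dollar goes for the world’s poorest than for the rich, and a hypothetical gap between the interest rate and the rate at which welfare gets more expensive to create. The point is only that a modest, persistent gap compounds to a break-even horizon on the order of a few centuries.

```python
# Illustrative only: a back-of-the-envelope version of the break-even idea,
# NOT Trammell's actual model or parameters.
import math

def break_even_years(multiplier: float, rate_gap: float) -> float:
    """Years until invested money given to the rich matches giving now to the poorest.

    multiplier -- assumed factor by which a dollar buys more welfare for the
                  world's poorest than for rich people in the developed world
    rate_gap   -- assumed amount by which the interest rate exceeds the rate
                  at which creating additional welfare becomes more expensive
    """
    # The welfare-purchasing power of invested money grows by (1 + rate_gap)
    # per year, so we need (1 + rate_gap) ** t >= multiplier.
    return math.log(multiplier) / math.log(1 + rate_gap)

# With a hypothetical 100x multiplier and a ~1.7 percentage-point gap,
# the break-even point lands near the ~279-year figure cited above.
print(round(break_even_years(100, 0.0167)))  # ~278
```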