Thanks so much for the summary. I liked the explication.
Some philosophers argue utilitarianism only seems demanding because we are in an unusually bad world. If we were in a “morally normal” world, one that already had a more equitable wealth distribution and less oppressive institutions, and was not in a constant state of emergency, maximising the good and minimising the bad would not be so hard.
Again, having to think about the far future undermines this argument. Placing overwhelmingly high value on improving the far future need not imply any moral dysfunction in the present. The future might be good, just, and equitable without our interventions, yet even more glorious and long-lasting if we devote ourselves to its betterment. The demand to devote ourselves to its betterment remains.
I’ve only read your summary and the linked section in the original paper and haven’t read the references, but (if I understand the argument correctly and the references don’t cover nuances that I’ve missed), I think this is wrong.
As I understand it, there are multiple ways in which utilitarianism can be “too demanding.” Two seem salient to me (there might well be others):
1. In the limit, utilitarianism does not permit any notion of practical free will or supererogatory actions (“everything that is not obligatory is forbidden”).
If I understand the first paragraph correctly, this is not the notion of demandingness that is being contested here.
2. Even if you relax the limits of utilitarianism to a much weaker degree (e.g., something like “we only have a moral obligation to take actions that greatly benefit others, with at most relatively minor costs to ourselves”), we still have strong moral duties that seem at odds with common-sense morality (e.g., maybe we’re obligated to donate >50% of our income to global poverty causes).
Since nobody is debating that #1 is too demanding, the conversation is primarily about #2.
The new argument is that, from a far-future perspective, even if we are in a “morally normal” world, we may still have what appear to be extraordinary obligations under fairly weak versions of utilitarianism. I think this is wrong, because (a) our world is clearly morally abnormal, and (b) most observers not-too-dissimilar from us are in worlds much closer to intuitive conceptions of “morally normal” (i.e., worlds in which moral duties are substantially more relaxed).
I think (b) is true for two reasons:
1. We appear to be unusually early in the lifecycle of Earth-originating observers. Almost all of our (in-expectation) descendants will have weaker moral obligations than we do, because they cannot (in expectation) affect the future nearly as much as we can. Put another way, the Ramsey rule is much less relevant in equilibrium, because exponential economic growth will stop within the next few thousand years, never mind persisting through most of future history (see the arithmetic sketch after this list). See Holden’s This Can’t Go On and Buck’s critiques of MacAskill on HoH for more detailed explications of this.
Note that if you disagree that we are, in expectation, unusually early observers, whether because of theoretical arguments like the Doomsday argument or because of empirical beliefs about (e.g.) extinction risk, this instead weakens the argument for longtermism, and thus also weakens the notion that we have strong long-term moral obligations.
2. It seems probable that most “observers like us” aren’t living in basement reality. For most observers who knowingly live in simulations, or who are Boltzmann brains, etc., it seems unlikely that utilitarianism has nearly the same moral oomph that it has for us, assuming most of us believe there’s a decently high likelihood that we, anthropically weighted, are not in simulations.
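To put some rough numbers on point 1, here is a back-of-the-envelope sketch in the spirit of the arithmetic in Holden’s This Can’t Go On. The growth rate, horizon, and atom count below are illustrative order-of-magnitude assumptions of mine, not figures from the summary or the paper:

```python
# Back-of-the-envelope sketch (illustrative assumptions, not figures
# from the paper): sustained 2% annual growth quickly exceeds any
# plausible physical bound, so most of future history must be spent
# in a low-growth equilibrium.

growth_rate = 0.02        # assumed ~2% real growth per year
years = 10_000            # a short horizon by cosmic standards
atoms_in_galaxy = 1e70    # rough order-of-magnitude estimate

growth_factor = (1 + growth_rate) ** years  # roughly 1e86
print(f"Growth factor over {years:,} years: {growth_factor:.1e}")

# Even granting one present-day economy's worth of output per atom in
# the galaxy, this growth rate blows past that bound well within the
# 10,000-year horizon.
print("Exceeds one economy per atom:", growth_factor > atoms_in_galaxy)
```

If growth must stop that soon, then almost all (in-expectation) future observers live in the post-growth equilibrium, where present actions lack the compounding leverage over the future that makes our current obligations look so demanding.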