[“If you value future people, why do you consider near term effects?” by Alex HT: Personal takeaways.]
I find it disconcerting that many very smart people in the EA community focus more on near-term effects than I currently find reasonable.
“If you value future people, why do you consider near term effects?” by Alex HT argues that many common reasons for focusing on near-term effects fall short of being persuasive. The case rests centrally on complex cluelessness – the worry that near-term interventions have significant long-run effects whose sign we cannot predict. It closes with a series of possible objections and why they are not persuasive. (Alex also cites the amazing article “Growth and the case against randomista development.”)
The article invites discussion, and Michael St. Jules responded by explaining the shape of a utility function (bounded above and below) that would lead to a near-term focus and why it is a sensible utility function to have. Judging by the number of upvotes, this seems to be a common reason to prefer near-term interventions.
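To make Michael’s point concrete for myself, here is a minimal sketch. The tanh shape and all the numbers are my own illustrative assumptions – his comment argues for boundedness, not for this particular function. It shows how a utility function bounded above and below can favor a sure near-term gain over a longtermist long shot:

```python
import math

def bounded_utility(v, scale=1.0, bound=1.0):
    """Utility bounded above by +bound and below by -bound.
    The tanh shape and parameters are illustrative assumptions."""
    return bound * math.tanh(v / scale)

# A sure near-term gain of 1 unit of value...
near_term = bounded_utility(1.0)            # ~0.76

# ...versus a longtermist gamble: a 1% chance of 10,000 units.
long_term = 0.01 * bounded_utility(10_000)  # ~0.01 -- the bound caps the payoff

print(near_term > long_term)  # True: the bounded agent takes the sure thing
print(0.01 * 10_000)          # 100.0: a linear agent would take the gamble
```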
The discussion also hints at a possible reason to focus on near-term effects as a Schelling point in a coordination problem with future generations. But that point is not fully developed, and I don’t think I could steelman it.
I’ve heard smart people argue for the merits of bounded utility functions before. They have a number of merits – they avoid Pascal’s mugging, the St. Petersburg paradox, and more. (Are there maybe even some benefits for dealing with infinite ethics?) But they’re also awfully unintuitive to me.
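To check one of those merits numerically – again with the tanh shape I assumed above – the expected payout of the St. Petersburg game diverges, but its expected utility under a bounded function converges:

```python
import math

def u(v, scale=10.0, bound=1.0):
    # Same assumed bounded shape as above; scale=10 is an arbitrary choice.
    return bound * math.tanh(v / scale)

# St. Petersburg: with probability 1/2**k you win 2**k, so each term of
# the expected *payout* contributes 1 and the sum diverges. The expected
# *utility* is capped term by term and converges well below the bound.
expected_utility = sum((0.5 ** k) * u(2.0 ** k) for k in range(1, 200))
print(expected_utility)  # ~0.4, finite -- no paradox
```

Pascal’s mugging dissolves the same way: an astronomically large payoff contributes at most (tiny probability × bound) to expected utility.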
Besides, I wouldn’t know how to select the right parameters for such a function. With some parameters, it would still be nearly linear across even a third-degree-polynomial increase in aggregate positive or negative valence over the coming millennium, and that may be enough to prefer current longtermist over current near-termist approaches.
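A toy version of that parameter worry, with the same assumed tanh shape: whether the bounded function agrees with expected-value reasoning depends entirely on how its scale compares to the stakes in play. The stakes below are made up purely for illustration:

```python
import math

def u(v, scale):
    # The scale parameter is exactly the free choice I don't know how to make.
    return math.tanh(v / scale)

# Illustrative stakes: a sure near-term gain of 1 versus a 10% chance
# of a longtermist gain of 50.
def preference(scale):
    near = u(1.0, scale)
    far = 0.1 * u(50.0, scale)
    return "longtermist" if far > near else "near-termist"

print(preference(scale=1.0))     # near-termist: the bound bites early
print(preference(scale=1000.0))  # longtermist: tanh is nearly linear on
                                 # this range, so it matches linear utility
```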
Related: https://globalprioritiesinstitute.org/christian-tarsney-the-epistemic-challenge-to-longtermism/