The authors seem to make a good case for strong longtermism. But I don’t think they make a good case that strong longtermism has very different implications from what we’d do anyway (though I do think that case can be made).
That is, I don’t recall them directly countering the argument that the flow-through effects of actions that are best in the short term might make those same actions best in the long term as well, in which case strong longtermism would simply recommend taking those actions anyway.
Though personally I do think that one can make fairly good arguments against such claims.
In particular, a “meta-level” argument along the lines of the post Beware surprising and suspicious convergence, as well as object-level arguments for the importance, tractability, and neglectedness of specific long-term-future-focused interventions (e.g., reducing anthropogenic existential risk). See also If you value future people, why do you consider near term effects?
Cluelessness is also relevant here, though my independent impression is that it’s not a particularly useful concept (see here and here).
But I think that some people reading the paper won’t be familiar with those arguments, and also that there’s room for reasonable doubt about those arguments.
So it seems to me that the paper should’ve devoted a bit more space to countering that argument, or at least acknowledged the issue more explicitly.