I wonder if a heavy dose of skepticism about longtermist-oriented interventions wouldn’t result in a somewhat similar mix of near-termist and longtermist prioritization in practice. Specifically, someone might reasonably start with a prior that most interventions aimed at affecting the far future (especially those that don’t do so by tangibly changing something in the near term, where there could be strong feedback) come out as roughly a wash. This would place a high burden of evidence on such interventions, so that only a few very well-founded ones would stand out above near-termist-oriented actions. On this view, the supposed flow-through effects of near-termist interventions would also be regarded with strong skepticism, and their long-term impact might likewise be judged to come out as a wash, but you’d at least get the short-term benefit. So one might often favor near-term causes, because gathering evidence on them is comparatively easy, while for the longtermist interventions that are moderately well grounded, the standard reasoning favoring them would kick in. I think this is often roughly what happens, and it might be another explanation for the observation that even proponents of strong longtermism don’t generally appear fanatical.
This piggybacks a bit off of Darius_Meissner’s earlier comment distinguishing between the axiological and deontic claims of strong longtermism (to borrow the terminology of Greaves and MacAskill’s paper). Many have pointed out that accepting the former doesn’t have to lead to the latter, and this is just one particular line of reasoning for why. But I wonder why there is a need for a philosophical basis for what seems like a bottom line that could be reached in practice even while neglecting moral uncertainty, simply by embracing empirical uncertainty and incorporating Bayesian priors into expected-value (EV) thinking (as opposed to naive EV reasoning).
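To make concrete what I mean by prior-adjusted EV reasoning, here’s a minimal sketch using a standard normal-normal Bayesian update. To be clear, this is my own illustration, not anything from Greaves and MacAskill; the intervention names and all the numbers are invented purely to show the mechanism.

```python
# A minimal sketch (my own illustration) of how a skeptical prior tempers
# naive EV estimates. All interventions and numbers here are made up.

def posterior_ev(prior_mean: float, prior_sd: float,
                 estimate: float, estimate_sd: float) -> float:
    """Normal-normal conjugate update: shrink a noisy EV estimate
    toward the prior in proportion to how noisy the estimate is."""
    prior_prec = 1.0 / prior_sd ** 2
    est_prec = 1.0 / estimate_sd ** 2
    return (prior_mean * prior_prec + estimate * est_prec) / (prior_prec + est_prec)

# Skeptical prior: interventions aimed at the far future are roughly a wash.
PRIOR_MEAN, PRIOR_SD = 0.0, 10.0

interventions = {
    # name: (naive EV estimate, standard error of that estimate)
    "near-termist, well evidenced":     (30.0, 5.0),
    "longtermist, speculative":         (1e6, 1e7),
    "longtermist, moderately grounded": (500.0, 20.0),
}

for name, (ev, sd) in interventions.items():
    adjusted = posterior_ev(PRIOR_MEAN, PRIOR_SD, ev, sd)
    print(f"{name}: naive EV = {ev:g}, prior-adjusted EV = {adjusted:.1f}")
```

With these (made-up) inputs, the speculative intervention’s astronomical naive EV of 1e6 gets shrunk almost entirely to zero by its enormous uncertainty, the well-evidenced near-termist option keeps most of its value (24 of 30), and the moderately well-grounded longtermist one still comes out on top (100), which is exactly the mixed prioritization I’m gesturing at.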