The Epistemic Challenge to Longtermism (Tarsney, 2020)
Link post

Abstract from the paper

Longtermists claim that what we ought to do is mainly determined by how our actions might affect the very long-run future. A natural objection to longtermism is that these effects may be nearly impossible to predict—perhaps so close to impossible that, despite the astronomical importance of the far future, the expected value of our present options is mainly determined by short-term considerations. This paper aims to precisify and evaluate (a version of) this epistemic objection to longtermism. To that end, I develop two simple models for comparing “longtermist” and “short-termist” interventions, incorporating the idea that, as we look further into the future, the effects of any present intervention become progressively harder to predict. These models yield mixed conclusions: If we simply aim to maximize expected value, and don’t mind premising our choices on minuscule probabilities of astronomical payoffs, the case for longtermism looks robust. But on some prima facie plausible empirical worldviews, the expectational superiority of longtermist interventions depends heavily on these “Pascalian” probabilities. So the case for longtermism may depend either on plausible but non-obvious empirical claims or on a tolerance for Pascalian fanaticism.
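To give a quick feel for the kind of comparison the paper’s two models formalize, here is a toy sketch in Python (my own illustration, with made-up parameter names and values, not the paper’s actual model or estimates): the longtermist intervention’s expected value is the per-year value of steering the world into a better state, weighted by the probability that exogenous events have not yet “washed out” the intervention’s effect.

```python
import math

# Toy expected-value comparison (my own illustration, not the paper's actual
# model): a "longtermist" intervention tries to put the world into a better
# state, but exogenous events may later erase its effect, so the probability
# that it still makes a difference at year t decays over time. All parameter
# names and values below (p, r, v, horizon) are illustrative assumptions.

def ev_longtermist(p, r, v, horizon):
    """Expected value of the intervention over the given horizon.

    p       -- probability the intervention initially makes the difference
    r       -- annual rate at which exogenous events erase the difference,
               so the effect survives to year t with probability exp(-r * t)
    v       -- value difference per year between the better and worse states
    horizon -- number of years of future value we count

    Closed form of  p * v * integral_0^horizon exp(-r * t) dt  (for r > 0).
    """
    return p * v * (1.0 - math.exp(-r * horizon)) / r

EV_SHORT = 1.0  # benchmark: one unit of near-guaranteed short-term value

# The verdict hinges on the hard-to-estimate "washing out" rate r:
for r in (1e-2, 1e-4, 1e-6):
    ev_long = ev_longtermist(p=1e-6, r=r, v=1e3, horizon=1e6)
    verdict = "longtermist wins" if ev_long > EV_SHORT else "short-termist wins"
    print(f"r = {r:g}: longtermist EV = {ev_long:.3g} "
          f"(vs. {EV_SHORT} short-term) -> {verdict}")
```

Note how the verdict flips as r varies, and that even where the longtermist side wins here, it does so via a one-in-a-million initial probability (p = 1e-6) of a very large payoff; that is the flavor of dependence on “Pascalian” probabilities the abstract describes.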
Why I’m making this linkpost
I want to draw a bit more attention to this great paper
I think this is one of the best sources for people interested in arguments for and against longtermism
For people who are interested in learning about longtermism and are open to reading (sometimes somewhat technical) philosophy papers, the two main things I’d recommend are The Case for Strong Longtermism and this paper
Other leading contenders are The Precipice, Existential Risk Prevention as Global Priority, and some of the posts tagged Longtermism
I want to make it possible to tag the post so that people see it later when it’s relevant to what they’re looking for (e.g., I’d want a pointer to this paper to come up prominently for people who check out the Longtermism tag)
I want to make it easier for people to get a quick sense of whether it’s worth their time to engage with this paper, given their goals (because people can check this post’s karma, comments, and/or tags)
I want to give people a space to discuss the paper in a way that other people can see and build on
I’ll share a bunch of my own comments below
(I’ll try to start each comment with a tl;dr)
See also Should pretty much all content that’s EA-relevant and/or created by EAs be (link)posted to the Forum?