Related entries: cluelessness | estimation of existential risk | forecasting | longtermism
Given how much the scope of this entry/tag would overlap with the scope of an epistemic challenge to longtermism tag, and how much both would overlap with other entries/tags we already have, I think we should probably only have one or the other. (I could be wrong, though. Maybe we should have both but with one being wiki-only. Or maybe we should have both later on, once the Wiki has a larger set of entries and is perhaps getting more fine-grained.)
I agree with having this tag and subsuming epistemic challenge to longtermism under it. We do already have forecasting and AI forecasting, so some further thinking may be needed to avoid overlap.
Ok, I’ve now made a long-range forecasting tag, and added a note there that it should probably subsume/cover the epistemic challenge to longtermism as well.
And yeah, I’m open to people adjusting things later to reduce how many entries/tags we have on similar topics.
Is the “epistemic challenge to longtermism” something like “the problem of cluelessness, as applied to longtermism”, or is it something different?
People in EA sometimes use the term “cluelessness” in a way that’s pretty much referring to the epistemic challenge, or to the idea that it’s really, really hard to predict long-term future effects. But I’m pretty sure the philosophers writing on this topic mean something more specific and absolute/qualitative, and a natural interpretation of the word is also more absolute (“clueless” implies “has absolutely no clue”). I think cluelessness could be seen as one special case / subset of the broader topic of “it seems really, really hard to predict long-term future effects”.
Hmm, looking again at Greaves’ paper, it seems that the concept of “cluelessness” itself, in the philosophical literature, really is meant to be something quite absolute. From Greaves’ introduction:
“The cluelessness worry. Assume determinism. Then, for any given (sufficiently precisely described) act A, there is a fact of the matter about which possible world would be realised – what the future course of history would be – if I performed A. Some acts would lead to better consequences (that is, better future histories) than others. Given a pair of alternative actions A1, A2, let us say that
(OB: Criterion of objective c-betterness) A1 is objectively c-better than A2 iff the consequences of A1 are better than those of A2.
It is obvious that we can never be absolutely certain, for any given pair of acts A1, A2, of whether or not A1 is objectively c-better than A2. This in itself would be neither problematic nor surprising: there is very little in life, if anything, of which we can be absolutely certain. Some have argued, however, for the following further claim:
(CWo: Cluelessness Worry regarding objective c-betterness) We can never have even the faintest idea, for any given pair of acts (A1, A2), whether or not A1 is objectively c-better than A2.
This ‘cluelessness worry’ has at least some more claim to be troubling.”
So at least in her account of how other philosophers have used the term, it refers to not having “even the faintest idea” which act is better. This also fits with what “cluelessness” arguably should literally mean (having no clue at all). This seems to me (and, I think, to Greaves) quite distinct from the idea that it’s merely very, very hard to predict which act is better, and thus even whether an act is net positive.
And Greaves later calls this “simple cluelessness”, introducing “complex cluelessness” for something even more specific, and again distinct from the basic idea that things are very, very hard to predict.
I write about this more here and here.
Here’s an excerpt from the first of those links:
Meanwhile, the epistemic challenge is the more quantitative, less absolute, and in my view more useful idea that:
effects probably get harder to predict the further into the future they are
this might mean we should focus on the near term, if that gradual decline in our predictive power outweighs the long-term future’s much greater scale relative to the near term.
On that, here’s part of the abstract of Tarsney’s paper:
Longtermists claim that what we ought to do is mainly determined by how our actions might affect the very long-run future. A natural objection to longtermism is that these effects may be nearly impossible to predict—perhaps so close to impossible that, despite the astronomical importance of the far future, the expected value of our present options is mainly determined by short-term considerations. This paper aims to precisify and evaluate (a version of) this epistemic objection to longtermism. To that end, I develop two simple models for comparing “longtermist” and “short-termist” interventions, incorporating the idea that, as we look further into the future, the effects of any present intervention become progressively harder to predict.
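To make that trade-off concrete, here’s a minimal toy sketch in Python. To be clear, this is not Tarsney’s actual model (his paper builds richer ones); it just assumes, purely for illustration, that a long-term intervention’s effect “washes out” each year with some fixed probability (the hypothetical decay_rate below), and compares the resulting expected value against a one-off near-term benefit. All the numbers are made-up assumptions.

```python
# A toy comparison, NOT Tarsney's actual model: how the expected value of a
# long-term intervention depends on how quickly its effect "washes out".
# All parameter values below are illustrative assumptions, not estimates.

def short_term_ev(v_near: float) -> float:
    """A one-off near-term benefit, assumed to be fully predictable."""
    return v_near

def long_term_ev(v_per_year: float, decay_rate: float, horizon_years: int) -> float:
    """Sum of yearly benefits, each weighted by the probability that the
    intervention's effect has persisted (not yet washed out) by that year."""
    return sum(v_per_year * (1 - decay_rate) ** t for t in range(horizon_years))

# Hypothetical numbers: a near-term option worth 100 now, vs. a long-term
# option worth 1 per year for up to 10,000 years if its effect persists.
for rate in (0.001, 0.01, 0.1):
    ev = long_term_ev(v_per_year=1.0, decay_rate=rate, horizon_years=10_000)
    print(f"washing-out rate {rate:.1%}: long-term EV ≈ {ev:,.0f} "
          f"vs short-term EV = {short_term_ev(100.0):,.0f}")
```

With these illustrative numbers, the verdict flips at roughly a 1% annual washing-out rate (over long horizons the series sums to about v_per_year / decay_rate). That’s the quantitative, non-absolute character of the epistemic challenge described above: everything turns on how fast our predictive power decays, not on whether we’re entirely clueless.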
Possible names: “Epistemic challenge”, “The epistemic challenge”, or “Epistemic challenges”, or any of those with “to longtermism” added.
Relevant posts include the following, and presumably many more:
https://forum.effectivealtruism.org/posts/FhjDSijdWrhFMgZrb/the-epistemic-challenge-to-longtermism-tarsney-2020
https://forum.effectivealtruism.org/posts/z2DkdXgPitqf98AvY/formalising-the-washing-out-hypothesis
https://forum.effectivealtruism.org/posts/jBmLrYJJh4kydhpcD/the-case-for-strong-longtermism
Related entries
cluelessness
longtermism
expected value
forecasting
Another idea: Long-range forecasting (or some other name covering a similar topic).
See e.g. https://forum.effectivealtruism.org/posts/s8CwDrFqyeZexRPBP/link-how-feasible-is-long-range-forecasting-open-phil