People in EA sometimes use the term “cluelessness” in a way that’s pretty much referring to the epistemic challenge or the idea that it’s really really hard to predict long-term-future effects. But I’m pretty sure the philosophers writing on this topic mean something more specific and absolute/qualitative, and a natural interpretation of the word is also more absolute (“clueless” implies “has absolutely no clue”). I think cluelessness could be seen as one special case / subset of the broader topic of “it seems really really hard to predict long-term future effects”.
Hmm, looking again at Greaves’ paper, it seems like it really is the case that the concept of “cluelessness” itself, in the philosophical literature, is meant to be something quite absolute. From Greaves’ introduction:
“The cluelessness worry. Assume determinism. Then, for any given (sufficiently precisely described) act A, there is a fact of the matter about which possible world would be realised – what the future course of history would be – if I performed A. Some acts would lead to better consequences (that is, better future histories) than others. Given a pair of alternative actions A1, A2, let us say that
(OB: Criterion of objective c-betterness) A1 is objectively c-better than A2 iff the consequences of A1 are better than those of A2.
It is obvious that we can never be absolutely certain, for any given pair of acts A1, A2, of whether or not A1 is objectively c-better than A2. This in itself would be neither problematic nor surprising: there is very little in life, if anything, of which we can be absolutely certain. Some have argued, however, for the following further claim:
(CWo: Cluelessness Worry regarding objective c-betterness) We can never have even the faintest idea, for any given pair of acts (A1, A2), whether or not A1 is objectively c-better than A2.
This ‘cluelessness worry’ has at least some more claim to be troubling.”
So at least in her account of how other philosophers have used the term, it refers to not having “even the faintest idea” which act is better. This also fits with what “cluelessness” arguably should literally mean (having no clue at all). This seems to me (and, I think, to Greaves) quite distinct from the idea that it’s very very very hard to predict which act is better, and thus even whether an act is net positive.
And then Greaves later calls this “simple cluelessness”, and introduces the idea of “complex cluelessness” for something even more specific and distinct from the basic idea of things being very very very hard to predict.
Meanwhile, the epistemic challenge is the more quantitative, less absolute, and in my view more useful idea that:
effects probably get harder to predict the further into the future they are
this might mean we should focus on the near term, if that gradual decline in our predictive power outweighs the greater scale of the long-term future relative to the near term.
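To see why the quantitative framing matters, here’s a toy numerical sketch (my own illustration, not a model from Greaves or Tarsney; the exponential-decay assumption and all numbers are made up): treat the expected value of an intervention as the value at stake, discounted by how much predictive power we retain at that time horizon.

```python
def expected_value(scale, half_life, horizon):
    """Toy model: the expected value we can confidently attribute to an
    effect `horizon` years out, assuming (made-up assumption) that our
    predictive power over that effect halves every `half_life` years."""
    predictive_power = 0.5 ** (horizon / half_life)
    return scale * predictive_power

# Made-up numbers: a short-termist intervention with modest scale but
# well-understood near-term effects, vs a longtermist one with enormous
# scale whose effects play out centuries from now.
for half_life in (100, 20):
    short = expected_value(scale=1e3, half_life=half_life, horizon=5)
    long_run = expected_value(scale=1e9, half_life=half_life, horizon=500)
    print(half_life, "short-termist wins:", short > long_run)
```

With a 100-year half-life the longtermist option dominates; with a 20-year half-life the short-termist one does. The point is just that the conclusion is driven entirely by how fast predictive power decays relative to how fast the scale at stake grows, which is an empirical, quantitative question rather than an absolute one.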
On that, here’s part of the abstract of Tarsney’s paper:
“Longtermists claim that what we ought to do is mainly determined by how our actions might affect the very long-run future. A natural objection to longtermism is that these effects may be nearly impossible to predict – perhaps so close to impossible that, despite the astronomical importance of the far future, the expected value of our present options is mainly determined by short-term considerations. This paper aims to precisify and evaluate (a version of) this epistemic objection to longtermism. To that end, I develop two simple models for comparing “longtermist” and “short-termist” interventions, incorporating the idea that, as we look further into the future, the effects of any present intervention become progressively harder to predict.”
Is the “epistemic challenge to longtermism” something like “the problem of cluelessness, as applied to longtermism”, or is it something different?
I write about this more here and here.