I actually think this is a tricky case, where the boundary between use and misuse is hard to discern. (I do agree that, in many contexts, the idea can and should “be easily and concisely expressed without jargon”.)
This is because I think philosophical work on cluelessness is at its core motivated by it being “extremely hard to predict the long-term consequences of our actions, and thus even to know what actions will be net positive”. To be fair, it’s motivated by a bit more, but arguably not much more: that bit is roughly that some philosophers think that the “hard to predict” observation at least suggests one can say something philosophically interesting about it, and in particular that it might pose some kind of challenge for standard accounts of reasoning under uncertainty such as expected value. But importantly, there is no consensus about what the correct “theoretical account” of cluelessness is: to name just a few, it might be non-sharp credences, it might be low credal resilience, or it might just be a case where expected-value reasoning is hard but we can’t say anything interesting about it after all. Still, cluelessness is a term proponents of all these different views use.
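To make the expected-value point concrete, here’s a toy illustration (my own sketch, with made-up numbers—not an example from any of the papers) of how one of these accounts, non-sharp credences, is supposed to make trouble for expected-value rankings:

```latex
% Standard rule: rank acts A by expected value over states s.
\mathrm{EV}(A) = \sum_{s} p(s)\, V(A, s)
% Sharp credence: with p(good) = 0.6, V(A_1, good) = 10, V(A_1, bad) = -5,
% we get EV(A_1) = 0.6(10) + 0.4(-5) = 4, and A_1 can be ranked against
% any alternative.
% Non-sharp credence: the epistemic state is a *set* of probability
% functions, say with p(good) ranging over [0.3, 0.7]. Then all we get is
\mathrm{EV}(A_1) \in [\,0.3(10) + 0.7(-5),\; 0.7(10) + 0.3(-5)\,] = [-0.5,\; 5.5]
% If a second act has EV(A_2) = 2, which lies inside that interval, the
% standard rule no longer delivers a verdict between A_1 and A_2.
```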
I think this is a quite common situation in philosophy: at least some people have a ‘pre-theoretic intuition’ that seems to point to some philosophically interesting concept, but philosophers can’t agree on what it is, what its properties are, or even whether the intuition refers to anything at all. Analogs might be:
‘Free will’. Philosophers can’t agree if it’s about being responsive to reasons, having a mesh of desires with certain properties, being ultimately responsible for one’s actions, a “could have done otherwise” ability, or something else; whether it’s compatible with determinism; whether it’s simply an illusion. But it would be odd to say that “using free will to simply refer to the idea that certain (non-epistemic) conditions need to be fulfilled for someone to be morally responsible for their actions—in a way in which e.g. an addict isn’t” was a “misuse” of the term because free will is a technical term in philosophy by which people mean something more specific.
‘Truth’. Philosophers can’t agree if it’s about correspondence to reality, coherence, or something else; whether it’s foundational for meaning or the other way around; or whether it’s just a word whose semantics is fully captured by sentences like “‘snow is white’ is true if and only if snow is white”, and about which we can’t have any interesting theory. But it would be odd to say that garden-variety uses of “true” are misuses.
‘Consciousness’: …
And so on.
Thanks for this comment. I found it interesting both as pushback on my point, and as a quick overview of parts of philosophy!
Some thoughts in response:
Firstly, I wrote “I also wrote about a third example of misuse of jargon I’ve seen, but then decided it wasn’t really necessary to include a third example”. Really, I should’ve written “...but then decided it wasn’t really necessary to include a third example, and that that example was one I was less sure of and where I care less about defending the jargon in question.” Part of why I was less sure is that I’ve only read two papers on the topic (Greaves’ and Mogensen’s), and that was a few months ago. So I have limited knowledge of how other philosophers use the term.
That said, I think the term’s main entry points into EA are Greaves’ and Mogensen’s papers and Greaves’ appearance on the 80k podcast (though I expect many EAs heard it second-hand rather than from these sources directly). And it seems to me that at least those two philosophers want the term to mean something more specific than “it’s extremely hard to predict the long-term consequences of our actions, and thus even to know what actions will be net positive”, because otherwise the term wouldn’t include the idea that we can’t just use expected value reasoning. Does that sound right to you?
More generally, I got the impression that cluelessness, as used by academics, refers to at least that idea of extreme difficulty in predictions, but usually more as well. Does that sound right?
This might be analogous to how existential risk includes extinction risk, but also more. In such cases, if one is actually just talking about the easy-to-express-in-normal-language component of the technical concept rather than the entire technical concept, it seems best to use normal language rather than the jargon.
Then there’s the issue that “cluelessness” is also just a common term in everyday English, like free will and truth, and unlike existential risk or the unilateralist’s curse. That does indeed muddy the matter somewhat, and reduce the extent to which “misuse” would confuse or erode the jargon’s meaning.
One thing I’d say there is that, somewhat coincidentally, I’d found the phrase “I’ve got no clue” a bit annoying for years before getting into EA, in line with my general aversion to absolute statements and black-and-white thinking. Relatedly, I think that, even ignoring the philosophical concept, “cluelessness about the future”, taken literally, implies something extreme and similar to what I think the philosophical concept is meant to imply. That seems like a very small extra reason to avoid the term when a speaker doesn’t really mean to imply we can’t know anything about the consequences of our actions. But that’s probably a fairly subjective stance, which someone could reasonably disagree with.
It sounds like we’re at least roughly on the same page. I certainly agree that e.g. Greaves and Mogensen don’t seem to think that “the long-term effect of our actions is hard to predict, but this is just a more pronounced version of it being harder to predict the weather in one day than in one hour, and we can’t say anything interesting about this”.
As I said:
To be fair, it’s motivated by a bit more, but arguably not much more: that bit is roughly that some philosophers think that the “hard to predict” observation at least suggests one can say something philosophically interesting about it, and in particular that it might pose some kind of challenge for standard accounts of reasoning under uncertainty such as expected value.
I would still guess that to the extent that these two (and other) philosophers advance more specific accounts of cluelessness—say, non-sharp credence functions—they don’t take their specific proposal to be part of the definition of cluelessness, or to be a criterion for whether the term ‘cluelessness’ refers to anything at all. E.g. suppose philosopher A thinks that cluelessness involves non-sharp credence functions, but philosopher B then convinces them that the same epistemic state is better described by having sharp credence functions with low resilience (i.e. likely to change a lot in response to new evidence). I’d guess that philosopher A would say “you’ve convinced me that cluelessness is just low credal resilience instead of having non-sharp credence functions” as opposed to “you’ve convinced me that I should discard the concept of cluelessness—there is no cluelessness, just low credal resilience”.
(To be clear, I think in principle either of these uses of cluelessness would be possible. I’m also less confident that my perception of the common use is correct than I am for terms that are older and have a larger literature attached to them, such as the examples I gave in my previous comment.)
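In case the contrast in my example is opaque, here’s a toy formalisation of the two glosses (again my own sketch, with made-up numbers):

```latex
% Two rival glosses of the same 'clueless' epistemic state about a hypothesis H.
% (a) Non-sharp credence: the state is a set of probability functions
%     rather than a single one:
C = \{\, p : p(H) \in [0.2,\, 0.8] \,\}
% (b) Sharp credence with low resilience: a single probability that would
%     swing dramatically on a piece of evidence E (these numbers are
%     coherent given p(E) = 0.5, since 0.9(0.5) + 0.1(0.5) = 0.5):
p(H) = 0.5, \qquad p(H \mid E) = 0.9, \qquad p(H \mid \lnot E) = 0.1
% On (a) there is no precise credence at all; on (b) there is, but it is
% fragile. On either gloss, 'cluelessness' names the underlying state, not
% one particular formal account of it.
```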
Hmm, looking again at Greaves’ paper, it seems like it really is the case that the concept of “cluelessness” itself, in the philosophical literature, is meant to be something quite absolute. From Greaves’ introduction:
The cluelessness worry. Assume determinism.[1] Then, for any given (sufficiently precisely described) act A, there is a fact of the matter about which possible world would be realised – what the future course of history would be – if I performed A. Some acts would lead to better consequences (that is, better future histories) than others. Given a pair of alternative actions A1, A2, let us say that
(OB: Criterion of objective c-betterness) A1 is objectively c-better than A2 iff the consequences of A1 are better than those of A2.
It is obvious that we can never be absolutely certain, for any given pair of acts A1, A2, of whether or not A1 is objectively c-better than A2. This in itself would be neither problematic nor surprising: there is very little in life, if anything, of which we can be absolutely certain. Some have argued, however, for the following further claim:
(CWo: Cluelessness Worry regarding objective c-betterness) We can never have even the faintest idea, for any given pair of acts (A1, A2), whether or not A1 is objectively c-better than A2.
This ‘cluelessness worry’ has at least some more claim to be troubling.
So at least in her account of how other philosophers have used the term, it refers to not having “even the faintest idea” which act is better. This also fits with what “cluelessness” arguably should literally mean (having no clue at all). This seems to me (and I think to Greaves?) quite distinct from the idea that it’s very very very* hard to predict which act is better, and thus even whether an act is net positive.
And then Greaves later calls this “simple cluelessness”, and introduces the idea of “complex cluelessness” for something even more specific and distinct from the basic idea of things being very very very hard to predict.
Having not read many other papers on cluelessness, I can’t independently verify that Greaves is explaining their usage of “cluelessness” well. But from that, it does seem to me that “cluelessness” is intended to refer to something more specific (and, in my view, less well-founded) than what I’ve seen some EAs use it to refer to (the very true and important idea that many actions are very very very hard to predict the value of).
(Though I haven’t re-read the rest of the paper for several months, so perhaps “never have even the faintest idea” doesn’t mean what I’m taking it to mean, or there’s some other complexity that counters my points.)
*I’m now stepping away from saying “extremely hard to predict”, because one might argue that that should, taken literally, mean “as hard to predict as anything could ever be”, which might be the same as “so hard to predict that we can’t have even the faintest idea”.