Thanks for this comment. I found it interesting both as pushback on my point, and as a quick overview of parts of philosophy!
Some thoughts in response:
Firstly, I wrote "I also wrote about a third example of misuse of jargon I've seen, but then decided it wasn't really necessary to include a third example". Really, I should've written "...but then decided it wasn't really necessary to include a third example, and that that example was one I was less sure of and where I care less about defending the jargon in question." Part of why I was less sure is that I've only read two papers on the topic (Greaves' and Mogensen's), and that was a few months ago. So I have limited knowledge of how other philosophers use the term.
That said, I think the term's main entry points into EA are Greaves' and Mogensen's papers and Greaves' appearance on the 80k podcast (though I expect many EAs heard it second-hand rather than from these sources directly). And it seems to me that at least those two philosophers want the term to mean something more specific than "it's extremely hard to predict the long-term consequences of our actions, and thus even to know which actions will be net positive", because otherwise the term wouldn't include the idea that we can't just use expected value reasoning. Does that sound right to you?
More generally, I got the impression that cluelessness, as used by academics, refers to at least that idea of extreme difficulty in prediction, but usually to more as well. Does that sound right?
This might be analogous to how existential risk includes extinction risk, but also more. In such cases, if one is actually just talking about the easy-to-express-in-normal-language component of the technical concept rather than the entire technical concept, it seems best to use normal language rather than the jargon.
Then there's the issue that "cluelessness" is also just a common term in everyday English, like free will and truth, and unlike existential risk or the unilateralist's curse. That does indeed muddy the matter somewhat, and reduce the extent to which "misuse" would confuse or erode the jargon's meaning.
One thing I'd say there is that, somewhat coincidentally, I had found the phrase "I've got no clue" a bit annoying for years before getting into EA, in line with my general aversion to absolute statements and black-and-white thinking. Relatedly, I think that, even ignoring the philosophical concept, "cluelessness about the future", taken literally, implies something extreme and similar to what I think the philosophical concept is meant to imply. That seems like a very small extra reason to avoid the term when a speaker doesn't really mean to imply we can't know anything about the consequences of our actions. But that's probably a fairly subjective stance, which someone could reasonably disagree with.
It sounds like we're at least roughly on the same page. I certainly agree that e.g. Greaves and Mogensen don't seem to think that "the long-term effect of our actions is hard to predict, but this is just a more pronounced version of it being harder to predict the weather in one day than in one hour, and we can't say anything interesting about this".
As I said:
To be fair, it's motivated by a bit more, but arguably not much more: that bit is roughly that some philosophers think the "hard to predict" observation at least suggests one can say something philosophically interesting about it, and in particular that it might pose some kind of challenge for standard accounts of reasoning under uncertainty, such as expected value.
I would still guess that to the extent that these two (and other) philosophers advance more specific accounts of cluelessness – say, non-sharp credence functions – they don't take their specific proposal to be part of the definition of cluelessness, or to be a criterion for whether the term "cluelessness" refers to anything at all. E.g. suppose philosopher A thought that cluelessness involves non-sharp credence functions, but then philosopher B convinces them that the same epistemic state is better described by having sharp credence functions with low resilience (i.e. likely to change a lot in response to new evidence). I'd guess that philosopher A would say "you've convinced me that cluelessness is just low credal resilience instead of having non-sharp credence functions" as opposed to "you've convinced me that I should discard the concept of cluelessness – there is no cluelessness, just low credal resilience".
(To be clear, I think in principle either of these uses of cluelessness would be possible. I'm also less confident that my perception of the common use is correct than I am for terms that are older and have a larger literature attached to them, such as the examples I gave in my previous comment.)
Hmm, looking again at Greaves' paper, it seems like it really is the case that the concept of "cluelessness" itself, in the philosophical literature, is meant to be something quite absolute. From Greaves' introduction:
The cluelessness worry. Assume determinism.[1] Then, for any given (sufficiently precisely described) act A, there is a fact of the matter about which possible world would be realised – what the future course of history would be – if I performed A. Some acts would lead to better consequences (that is, better future histories) than others. Given a pair of alternative actions A1, A2, let us say that
(OB: Criterion of objective c-betterness) A1 is objectively c-better than A2 iff the consequences of A1 are better than those of A2.
It is obvious that we can never be absolutely certain, for any given pair of acts A1, A2, of whether or not A1 is objectively c-better than A2. This in itself would be neither problematic nor surprising: there is very little in life, if anything, of which we can be absolutely certain. Some have argued, however, for the following further claim:
(CWo: Cluelessness Worry regarding objective c-betterness) We can never have even the faintest idea, for any given pair of acts (A1, A2), whether or not A1 is objectively c-better than A2.
This "cluelessness worry" has at least some more claim to be troubling.
So at least in her account of how other philosophers have used the term, it refers to not having "even the faintest idea" which act is better. This also fits with what "cluelessness" arguably should literally mean (having no clue at all). This seems to me (and I think to Greaves?) quite distinct from the idea that it's very very very* hard to predict which act is better, and thus even whether an act is net positive.
And then Greaves later calls this "simple cluelessness", and introduces the idea of "complex cluelessness" for something even more specific and distinct from the basic idea of things being very very very hard to predict.
Having not read many other papers on cluelessness, I can't independently verify that Greaves is explaining their usage of "cluelessness" well. But from that, it does seem to me that "cluelessness" is intended to refer to something more specific (and, in my view, less well-founded) than what I've seen some EAs use it to refer to (the very true and important idea that the value of many actions is very very very hard to predict).
(Though I haven't re-read the rest of the paper for several months, so perhaps "never have even the faintest idea" doesn't mean what I'm taking it to mean, or there's some other complexity that counters my points.)
*I'm now stepping away from saying "extremely hard to predict", because one might argue that that, taken literally, should mean "as hard to predict as anything could ever be", which might be the same as "so hard to predict that we can't have even the faintest idea".