I'm certainly not an expert here, and I think my thinking is somewhat unclear, and my explanation of it likely will be too. But I share the sense that Knightian uncertainty can't be rational. Or more specifically, I have a sense that in these sorts of discussions, a lot of the work is being done by imprecise terms that imply a sort of crisp, black-and-white distinction between something we could call "regular" uncertainty and something we could call "extreme"/"radical"/"unquantifiable" uncertainty, without this distinction being properly made explicit or defended.
For example, in Hilary Greaves's paper on cluelessness (note: similar thoughts from me would apply to Mogensen's paper, though explained differently), she discusses cases of "simple cluelessness" and then argues they're not really a problem, because in such cases the "unforeseeable effects" cancel out in expectation, even if not in reality. E.g.,
While there are countless possible causal stories about how helping an old lady across the road might lead to (for instance) the existence of an additional murderous dictator in the 22nd century, any such story will have a precise counterpart, precisely as plausible as the original, according to which refraining from helping the old lady turns out to have the consequence in question; and it is intuitively clear that one ought to have equal credences in such precise counterpart possible stories.
Greaves is arguing that we can therefore focus on whether the "foreseeable effects" are positive or negative in expectation, just as our intuitions would suggest.
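As a toy illustration of that cancellation claim (all numbers hypothetical, purely my own sketch), if every unforeseeable "story" has a precise counterpart of equal credence and opposite sign, the unforeseeable terms vanish from the expectation and only the foreseeable effects remain:

```python
# Toy model of "simple cluelessness": unforeseeable effects with
# precisely symmetric counterparts cancel in expectation.
# All values and probabilities here are hypothetical.

foreseeable_value = 1.0  # e.g., the old lady gets across the road safely

# Each unforeseeable "story" (value, credence) is paired with a
# counterpart of opposite sign and exactly equal credence.
stories = [
    (+1e9, 1e-12), (-1e9, 1e-12),  # dictator-type story and its mirror
    (+1e6, 1e-9),  (-1e6, 1e-9),   # some other ripple and its mirror
]

unforeseeable_ev = sum(v * p for v, p in stories)
total_ev = foreseeable_value + unforeseeable_ev

print(unforeseeable_ev)  # 0.0 -- the symmetric terms cancel exactly
print(total_ev)          # 1.0 -- only the foreseeable effect remains
```

The point of the sketch is just that the cancellation argument depends entirely on the credences being *exactly* equal; as soon as one story is even slightly more plausible than its counterpart, the unforeseeable term is nonzero.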
I agree with the conclusion, but I think the way it's juxtaposed with "complex cluelessness" (which she does suggest may be a cause for concern) highlights the sort of unwarranted (and implicit) sharp distinctions between "types" of uncertainty which I think are being made.
The three key criteria Greaves proposes for a case to involve complex cluelessness are:
(CC1) We have some reasons to think that the unforeseeable consequences of A1 would systematically tend to be substantially better than those of A2;
(CC2) We have some reasons to think that the unforeseeable consequences of A2 would systematically tend to be substantially better than those of A1;
(CC3) It is unclear how to weigh up these reasons against one another.
I think all of that actually applies to the old lady case, just very speculatively. One reason to think CC1 is that the old lady and/or anyone witnessing your kind act and/or anyone who's told about it could see altruism, kindness, community spirit, etc. as more of the norm than they previously did, and be inspired to act similarly themselves. When they act similarly themselves, this further spreads that norm. We could tell a story about how that ripples out further and further and creates a huge amount of additional value over time.
Importantly, there isn't a "precise counterpart, precisely as plausible as the original" for this story. That'd have to be something like people seeing this act and therefore thinking unkindness, bullying, etc. are more the norm than they previously thought, which is clearly less plausible.
One reason to think CC2 for the old lady case could jump off from that story: maybe your action sparks ripples of kindness, altruism, etc., which leads to more people donating to GiveWell-type charities, which (perhaps) leads to increased population (via reduced mortality), which (perhaps) leads to increased x-risk (e.g., via climate change or more rapid technological development), which eventually causes huge amounts of disvalue.
I'd argue you again can't tell a precise counterpart story that's precisely as plausible as this, and that's for reasons very similar to those covered in both Greaves's and Mogensen's papers: there are separate lines of evidence and argument for GiveWell-type charities leading to increased population vs them leading to decreased population, and for increased population increasing vs decreasing x-risk. (And again, it seems less plausible that witnessing your good deed would make people less likely to donate to GiveWell charities than more likely; or at least, a decrease would occur via different mechanisms than an increase, and therefore not be a "precise counterpart" story.)
I think both of these "stories" I've told are extremely unlikely, and for practical purposes aren't worth bearing in mind. But they do seem to me to meet the criteria in CC1 and CC2. And it doesn't seem to me there's a fundamental difference between their plausibility and worthiness-of-attention and that of the possibility for donations to AMF to increase vs decrease x-risk (or other claimed cases of [complex] cluelessness). I think it's just a difference of degree, perhaps a large difference of degree, but degree nonetheless. I can't see how we could draw some clear line somewhere to decide which uncertainties can just be dealt with in normal ways and which uncertainties make us count as clueless and thus unable to use regular expected value reasoning.
(And as for CC3, I'd say it's at least slightly "unclear" how to weigh up these reasons against one another.)
I think my thoughts here are essentially a criticism of the idea of a sharp, fundamental distinction between "risk" and "Knightian uncertainty"*, rather than of Greaves's and Mogensen's papers as a whole. That is, if we did accept that distinction as a premise, I think most of what Greaves and Mogensen say may well follow. (I also found their papers interesting, and acknowledge that I'm very much not an expert here and all of this could be unfounded, so for all these reasons this isn't necessarily a case of me viewing their papers negatively.)
*I do think those can be useful concepts for heuristic-type, practical purposes. I think we should probably act differently when our credences are massively less well-founded than usual, and probably be suspicious of traditional expected value reasoning then. But I think that's because of various flaws in how humans think (e.g., overconfidence in inside-view predictions), not because different rules fundamentally should apply to fundamentally different types of uncertainty.
Re: your old lady example: as far as I know, the recent papers (e.g., here) give the following example: (1) you help the old lady either on a Monday or on a Tuesday (you must, and can, do exactly one of the two options). In this case, your examples for CC1 and CC2 don't hold. One might argue that the previous example was maybe just a mistake, and I find it very hard to come up with CC1 and CC2 for (1) if (supposedly) you don't know anything about Mondays or Tuesdays.
Interesting. I wonder if the switch to that example was because they had a similar thought to mine, or read that comment.
But I think I can make a similar point with the Monday vs Tuesday example. I also predict I could make a similar point with respect to any example I'm given.
This is because I do know things about Mondays and Tuesdays in general, as well as about other variables. If the papers argue we're meant to artificially suppose we know literally nothing at all about a given variable, that seems weird or question-begging, and irrelevant to actual decision-making. (Note that I haven't read the recent paper you link to.)
I could probably come up with several stories for the Monday vs Tuesday example, but my first thought is to make it connect to my prior stories so it can reuse most of the reasoning from there, and to do that via social media. Above, I wrote:
One reason to think CC1 is that the old lady and/or anyone witnessing your kind act and/or anyone who's told about it could see altruism, kindness, community spirit, etc. as more of the norm than they previously did, and be inspired to act similarly themselves. When they act similarly themselves, this further spreads that norm. We could tell a story about how that ripples out further and further and creates a huge amount of additional value over time.
There's apparently some data suggesting people tend to use social media more on Tuesday and Wednesday than on Monday. I think we therefore have some reason to believe that, if I help an old lady cross the road on Tuesday rather than Monday, it's slightly more likely that someone will post about that on social media, and/or use social media in a slightly more altruistic, kind, community-spirited way than they otherwise would've. (Because me doing this on Tuesday means they're slightly more likely to be on social media while this kind deed is relatively fresh in their minds.) This could then further spread those norms (compared to how much they'd be spread if we helped on Monday), and we could tell a story about how that ripples out further, etc.
Above, I also wrote:
One reason to think CC2 for the old lady case could jump off from that story: maybe your action sparks ripples of kindness, altruism, etc., which leads to more people donating to GiveWell-type charities, which (perhaps) leads to increased population (via reduced mortality), which (perhaps) leads to increased x-risk (e.g., via climate change or more rapid technological development), which eventually causes huge amounts of disvalue.
I would now say exactly the same thing is true for the Monday vs Tuesday example, given my above argument for why the norms might be spread more if we help on Tuesday rather than Monday.
(We could also probably come up with stories related to amounts of traffic on Monday vs Tuesday, e.g., the old lady may be likelier to die if un-helped on one day, or more people may be delayed. Or related to people tending to be a little happier or sadder on Mondays. Or related to what we ourselves predict we'll do with our time on Monday or Tuesday, which we probably would know about. Or many other things.)
As before:
I think both of these "stories" I've told are extremely unlikely, and for practical purposes aren't worth bearing in mind. But they do seem to me to meet the criteria in CC1 and CC2.
Sorry, I don't have the time to comment in depth. However, I think if one agrees with cluelessness, then you don't offer an objection. You might even extend their worries by saying that "almost everything has 'asymmetric uncertainty'". I would be interested in your extension of your last sentence: "They are extremely unlikely and thus not worth bearing in mind." Why is this true?
I would be interested in your extension of your last sentence: "They are extremely unlikely and thus not worth bearing in mind." Why is this true?
When I said "I think both of these 'stories' I've told are extremely unlikely, and for practical purposes aren't worth bearing in mind", the bolded bit meant that I think a person will tend to better achieve their goals (including altruistic ones) if they don't devote explicit attention to such (extremely unlikely) "stories" when making decisions. The reason is essentially that one could generate huge numbers of such stories for basically every decision. If one tried to explicitly think through and weigh up all such stories in all such decision situations, one would probably become paralysed.
So I think the expected value of making decisions before and without thinking through such stories is higher than the expected value of trying to think through such stories before making decisions.
In other words, the value of information one would expect to get from spending extra time thinking through such stories is probably usually lower than the opportunity cost of gaining that information (e.g., what one could've done with that time otherwise).
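That value-of-information point can be made concrete with a toy calculation (all numbers hypothetical, my own illustration): deliberating further only pays if the expected improvement to your decision exceeds what the deliberation time costs.

```python
# Toy value-of-information calculation; all numbers are hypothetical.
# Question: should we spend an hour thinking through far-fetched
# "stories" before acting, or just act?

p_deliberation_flips_choice = 1e-6  # chance the stories change our decision
gain_if_flipped = 1000.0            # value gained in that rare case

# Expected value of the extra deliberation (the "value of information").
voi = p_deliberation_flips_choice * gain_if_flipped  # = 0.001

opportunity_cost = 10.0  # value of spending that hour acting instead

# Deliberating is only worth it if the VOI exceeds its cost.
print(voi > opportunity_cost)  # False -- better to just act
```

Nothing hangs on the particular numbers; the structure is just that for extremely-unlikely stories the probability term drives the VOI far below the cost of deliberating about them.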
Disclaimer: Written on low sleep, and again reporting only independent impressions (i.e., what I'd believe before updating on the fact that various smart people don't share my views on this). I also shared related thoughts in this comment thread.
I agree that one way someone could respond to my points is indeed by saying that everything/almost everything involves complex cluelessness, rather than that complex cluelessness isn't a useful concept.
But if Greaves introduces complex cluelessness by juxtaposing it with simple cluelessness, yet the examples of simple cluelessness actually meet her definition of complex cluelessness (which I think I've shown), I think this provides reason to pause and re-evaluate the claims.
And then I think we might notice that Greaves suggests a sharp distinction between simple and complex cluelessness. And also that she (if I recall correctly) arguably suggests homogeneity within each type of cluelessness, i.e., suggesting all cases of simple cluelessness can be dealt with by just ignoring the possible flow-through effects that seem symmetrical, while we should search for a single type of approach to handle all cases of complex cluelessness. (But this latter point is probably debatable.)
And we might also notice that the term "cluelessness" seems to suggest we know literally nothing about how to compare the outcomes. Whereas I've argued that in all cases we'll have some information relevant to that, and the various bits of information will vary in their importance and degree of uncertainty.
So altogether, it would just seem more natural to me to say:
we're always at least a little uncertain, and often extremely uncertain, and often somewhere in between
in theory, the "correct" way to reason is basically expected value theory, using all the scraps of evidence at our disposal, and keeping track of how high or low the resilience of our credences is
in practice, we should do something sort of like that, but with a lot of caution and heuristics (given that we're dealing with limited data, computational constraints, biases, etc.).
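One crude way to picture the "keep track of resilience" idea (entirely my own sketch, not anything from the papers, with hypothetical numbers) is to carry an explicit resilience weight alongside each line of evidence, and flag how much of the naive expected value rests on poorly-founded credences:

```python
# A crude sketch of expected-value reasoning that also tracks how
# resilient (well-founded) the underlying credences are.
# All numbers are hypothetical, for illustration only.

# Each line of evidence: (value if it obtains, credence,
# resilience in [0, 1]; 1 = well-founded, 0 = near-arbitrary guess).
evidence = [
    (  1.0, 0.99, 0.9),  # foreseeable effect: the lady crosses safely
    ( 50.0, 0.01, 0.2),  # speculative kindness-ripple story
    (-40.0, 0.01, 0.1),  # speculative x-risk story
]

# The naive expected value, using every scrap of evidence.
ev = sum(v * p for v, p, _ in evidence)

# Share of the EV's total magnitude driven by low-resilience credences:
# a flag for when to distrust naive EV maximisation in practice.
total_mag = sum(abs(v) * p for v, p, _ in evidence)
fragile_mag = sum(abs(v) * p for v, p, r in evidence if r < 0.5)
fragility = fragile_mag / total_mag

print(round(ev, 3))        # the naive expected value
print(round(fragility, 2)) # fraction of it resting on shaky credences
```

The design point is just that nothing here requires a second fundamental *type* of uncertainty: low resilience is a continuous property of ordinary credences, which is what motivates extra caution and heuristics in practice.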
I do think there are many important questions to be investigated with regard to how best to make decisions under conditions of extreme uncertainty, and that this becomes especially relevant for people who want to have a positive impact on the long-term future. But it doesn't seem to me that the idea of complex cluelessness is necessary or useful in posing or investigating those questions.