Interesting. I wonder if the switch to that example was because they had a similar thought to mine, or read that comment.
But I think I can make a similar point with the Monday vs Tuesday example. I also predict I could make a similar point with respect to any example I'm given.
This is because I do know things about Mondays and Tuesdays in general, as well as about other variables. If the papers argue we're meant to artificially suppose we know literally nothing at all about a given variable, that seems weird or question-begging, and irrelevant to actual decision-making. (Note that I haven't read the recent paper you link to.)
I could probably come up with several stories for the Monday vs Tuesday example, but my first thought is to make it connect to my prior stories so it can reuse most of the reasoning from there, and to do that via social media. Above, I wrote:
One reason to think CC1 is that the old lady and/or anyone witnessing your kind act and/or anyone who's told about it could see altruism, kindness, community spirit, etc. as more of the norm than they previously did, and be inspired to act similarly themselves. When they act similarly themselves, this further spreads that norm. We could tell a story about how that ripples out further and further and creates a huge amount of additional value over time.
This says people tend to use social media more on Tuesday and Wednesday than on Monday. I think we therefore have some reason to believe that, if I help an old lady cross the road on Tuesday rather than Monday, it's slightly more likely that someone will post about that on social media, and/or use social media in a slightly more altruistic, kind, community-spirit-y way than they otherwise would've. (Because me doing this on Tuesday means they're slightly more likely to be on social media while this kind deed is relatively fresh in their minds.) This could then further spread those norms (compared to how much they'd be spread if we helped on Monday), and we could tell a story about how that ripples out further etc.
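To make "slightly more likely" concrete, here's a minimal sketch of why even a tiny systematic difference breaks the symmetry that simple cluelessness requires. Every number in it (the sharing probabilities, the ripple value) is invented purely for illustration.

```python
# Toy model: helping on Tuesday vs Monday, where social media use is
# (hypothetically) slightly higher on Tuesday. All numbers are invented.

p_share_monday = 0.010    # hypothetical chance the kind act gets shared if done on Monday
p_share_tuesday = 0.011   # hypothetical, slightly higher chance on Tuesday
ripple_value = 100.0      # hypothetical value of the norm-spreading ripple (arbitrary units)

ev_monday = p_share_monday * ripple_value    # 1.0
ev_tuesday = p_share_tuesday * ripple_value  # 1.1

# The effects are tiny, but they aren't symmetric in expectation:
# the difference systematically favours Tuesday, which is what the
# definition of complex (rather than simple) cluelessness turns on.
print(ev_tuesday - ev_monday)  # ~0.1
```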
Above, I also wrote:
One reason to think CC2 for the old lady case could jump off from that story; maybe your action sparks ripples of kindness, altruism, etc., which leads to more people donating to GiveWell-type charities, which (perhaps) leads to increased population (via reduced mortality), which (perhaps) leads to increased x-risk (e.g., via climate change or more rapid technological development), which eventually causes huge amounts of disvalue.
I would now say exactly the same thing is true for the Monday vs Tuesday example, given my above argument for why the norms might be spread more if we help on Tuesday rather than Monday.
(We could also probably come up with stories related to amounts of traffic on Monday vs Tuesday; e.g., the old lady may be likelier to die if un-helped on one day, or more people may be delayed. Or related to people tending to be a little happier or sadder on Monday. Or related to what we ourselves predict we'll do with our time on Monday or Tuesday, which we probably would know about. Or many other things.)
As before:
I think both of these 'stories' I've told are extremely unlikely, and for practical purposes aren't worth bearing in mind. But they do seem to me to meet the criteria in CC1 and CC2.
Sorry, I don't have the time to comment in-depth. However, I think if one agrees with cluelessness, then you don't offer an objection. You might even extend their worries by saying that "almost everything has 'asymmetric uncertainty'". I would be interested in your extension of your last sentence: "They are extremely unlikely and thus not worth bearing in mind". Why is this true?
I would be interested in your extension of your last sentence. "They are extremely unlikely and thus not worth bearing in mind". Why is this true?
When I said "I think both of these 'stories' I've told are extremely unlikely, and **for practical purposes aren't worth bearing in mind**", the bolded bit meant that I think a person will tend to better achieve their goals (including altruistic ones) if they don't devote explicit attention to such (extremely unlikely) 'stories' when making decisions. The reason is essentially that one could generate huge numbers of such stories for basically every decision. If one tried to explicitly think through and weigh up all such stories in all such decision situations, one would probably become paralysed.
So I think the expected value of making decisions without first thinking through such stories is higher than the expected value of trying to think through such stories before making decisions.
In other words, the value of information one would expect to get from spending extra time thinking through such stories is probably usually lower than the opportunity cost of gaining that information (e.g., what one could've done with that time otherwise).
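As a minimal sketch of that comparison (all three inputs are assumptions I've made up for illustration, not estimates of anything real):

```python
# Toy value-of-information comparison, with invented numbers.

p_story_flips_decision = 1e-6  # hypothetical chance a far-fetched story would change the best choice
gain_if_it_does = 1000.0       # hypothetical value gained in that case (arbitrary units)
opportunity_cost = 1.0         # hypothetical value of what the deliberation time could do instead

expected_voi = p_story_flips_decision * gain_if_it_does  # 0.001

# Deliberating on such stories pays only if the expected value of the
# information exceeds the opportunity cost of obtaining it.
print(expected_voi > opportunity_cost)  # False
```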
Disclaimer: Written on low sleep, and again reporting only independent impressions (i.e., what I'd believe before updating on the fact that various smart people don't share my views on this). I also shared related thoughts in this comment thread.
I agree that one way someone could respond to my points is indeed by saying that everything/almost everything involves complex cluelessness, rather than that complex cluelessness isn't a useful concept.
But if Greaves introduces complex cluelessness by juxtaposing it with simple cluelessness, yet the examples of simple cluelessness actually meet her definition of complex cluelessness (which I think I've shown), I think this provides reason to pause and re-evaluate the claims.
And then I think we might notice that Greaves suggests a sharp distinction between simple and complex cluelessness. And also that she (if I recall correctly) arguably suggests homogeneity within each type of cluelessness; i.e., suggesting all cases of simple cluelessness can be dealt with by just ignoring the possible flow-through effects that seem symmetrical, while we should search for a single type of approach to handle all cases of complex cluelessness. (But this latter point is probably debatable.)
And we might also notice that the term 'cluelessness' seems to suggest we know literally nothing about how to compare the outcomes, whereas I've argued that in all cases we'll have some information relevant to that, and that the various bits of information will vary in their importance and degree of uncertainty.
So altogether, it would just seem more natural to me to say:
we're always at least a little uncertain, and often extremely uncertain, and often somewhere in between
in theory, the 'correct' way to reason is basically expected value theory, using all the scraps of evidence at our disposal, and keeping track of how resilient our credences are (a rough sketch of what I mean follows this list)
in practice, we should do something sort of like that, but with a lot of caution and heuristics (given that we're dealing with limited data, computational constraints, biases, etc.).
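On the second bullet, here's a very rough sketch of the bookkeeping I have in mind: plain expected value, plus an explicit "resilience" tag on each credence. The representation and all the numbers are placeholders I've made up, not a standard formalism.

```python
# Sketch: expected value from the evidence we have, with a separate tag
# for how resilient each credence is. All values are placeholders.

from dataclasses import dataclass

@dataclass
class Credence:
    p: float           # current probability assigned to the good outcome
    resilience: float  # 0..1: how little we expect this credence to move given more evidence

def expected_value(c: Credence, value_if_good: float, value_if_bad: float) -> float:
    return c.p * value_if_good + (1 - c.p) * value_if_bad

stable = Credence(p=0.5, resilience=0.9)   # well-evidenced, unlikely to shift
fragile = Credence(p=0.5, resilience=0.1)  # thin evidence; could shift a lot

for name, c in [("stable", stable), ("fragile", fragile)]:
    ev = expected_value(c, value_if_good=10.0, value_if_bad=-10.0)
    # Resilience doesn't change today's expected value, but it flags where
    # gathering more evidence (if cheap enough) is most likely to pay off.
    print(name, "EV:", ev, "resilience:", c.resilience)
```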
I do think there are many important questions to be investigated with regard to how best to make decisions under conditions of extreme uncertainty, and that this becomes especially relevant for people who want to have a positive impact on the long-term future. But it doesn't seem to me that the idea of complex cluelessness is necessary or useful in posing or investigating those questions.