Sorry, I don’t have the time to comment in depth. However, I think that if one agrees with cluelessness, then this doesn’t offer an objection. You might even extend their worries by saying that “almost everything has ‘asymmetric uncertainty’”. I would be interested in an expansion of your last sentence: “They are extremely unlikely and thus not worth bearing in mind.” Why is this true?
When I said “I think both of these ‘stories’ I’ve told are extremely unlikely, and for practical purposes aren’t worth bearing in mind”, the bolded bit meant that I think a person will tend to better achieve their goals (including altruistic ones) if they don’t devote explicit attention to such (extremely unlikely) “stories” when making decisions. The reason is essentially that one could generate huge numbers of such stories for basically every decision. If one tried to explicitly think through and weigh up all such stories in all such decision situations, one would probably become paralysed.
So I think the expected value of making decisions without first thinking through such stories is higher than the expected value of trying to think through such stories before making decisions.
In other words, the expected value of the information one would get from spending extra time thinking through such stories is probably usually lower than the opportunity cost of gaining that information (e.g., what one could’ve done with that time otherwise).
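(To make that trade-off concrete, here’s a toy sketch in Python of the comparison I have in mind. Everything in it — the function name, the probability, the values — is a made-up illustration of mine, not anything from the discussion above.)

```python
# Toy illustration with made-up numbers: compare the expected value of
# information (VOI) from thinking through an extremely unlikely "story"
# against the opportunity cost of the time that analysis would take.

def expected_voi(p_changes_decision: float, gain_if_changed: float) -> float:
    """Expected value of analysing the story: the analysis only pays off
    in the (very unlikely) case that it actually flips our decision."""
    return p_changes_decision * gain_if_changed

# Hypothetical inputs: the story is extremely unlikely to change anything,
# while the hour the analysis would take has some modest value elsewhere.
voi = expected_voi(p_changes_decision=1e-6, gain_if_changed=10_000.0)
opportunity_cost = 1.0  # value of spending that hour on object-level work

# If VOI < opportunity cost, we do better deciding without the analysis.
print(f"VOI = {voi:.4f}, opportunity cost = {opportunity_cost:.4f}")
print("Worth analysing the story first?", voi > opportunity_cost)
```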
Disclaimer: Written on low sleep, and again reporting only independent impressions (i.e., what I’d believe before updating on the fact that various smart people don’t share my views on this). I also shared related thoughts in this comment thread.
I agree that one way someone could respond to my points is indeed by saying that everything/almost everything involves complex cluelessness, rather than that complex cluelessness isn’t a useful concept.
But if Greaves introduces complex cluelessness by juxtaposing it with simple cluelessness, yet the examples of simple cluelessness actually meet her definition of complex cluelessness (which I think I’ve shown), I think this provides reason to pause and re-evaluate the claims.
And then I think we might notice that Greaves suggests a sharp distinction between simple and complex cluelessness. And also that she (if I recall correctly) arguably suggests homogeneity within each type of cluelessness—i.e., suggesting all cases of simple cluelessness can be dealt with by just ignoring the possible flow-through effects that seem symmetrical, while we should search for a type of approach to handle all cases of complex cluelessness. (But this latter point is probably debatable.)
And we might also notice that the term “cluelessness” seems to suggest we know literally nothing about how to compare the outcomes. Whereas I’ve argued that in all cases we’ll have some information relevant to that, and the various bits of information will vary in their importance and degree of uncertainty.
So altogether, it would just seem more natural to me to say:
- we’re always at least a little uncertain, often extremely uncertain, and often somewhere in between
- in theory, the “correct” way to reason is basically expected value theory, using all the scraps of evidence at our disposal, and keeping track of how resilient our credences are (see the sketch after this list)
- in practice, we should do something sort of like that, but with a lot of caution and heuristics (given that we’re dealing with limited data, computational constraints, biases, etc.).
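(For concreteness, here’s a minimal sketch of what I mean by the second point: explicit expected values plus a crude record of how resilient each credence is. All the names and numbers are hypothetical illustrations of mine, not anything from Greaves or the literature.)

```python
# Minimal sketch (all values hypothetical): expected value over explicit
# credences, with a rough "resilience" score per credence so we can flag
# estimates that rest on fragile probabilities.

from dataclasses import dataclass

@dataclass
class Outcome:
    value: float       # how good or bad the outcome would be
    credence: float    # our probability for the outcome
    resilience: float  # 0 = credence could easily move a lot; 1 = very stable

def expected_value(outcomes: list[Outcome]) -> float:
    return sum(o.value * o.credence for o in outcomes)

def weakest_resilience(outcomes: list[Outcome]) -> float:
    # A low minimum warns that the EV estimate hinges on an unstable credence.
    return min(o.resilience for o in outcomes)

# A hypothetical intervention with one well-evidenced effect and one
# speculative flow-through effect.
intervention = [
    Outcome(value=100.0, credence=0.6, resilience=0.8),
    Outcome(value=-20.0, credence=0.4, resilience=0.2),
]

print(f"EV = {expected_value(intervention):.1f}")
print(f"Weakest credence resilience = {weakest_resilience(intervention):.1f}")
```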
I do think there are many important questions to be investigated with regard to how best to make decisions under conditions of extreme uncertainty, and that this becomes especially relevant for people who want to have a positive impact on the long-term future. But it doesn’t seem to me that the idea of complex cluelessness is necessary or useful in posing or investigating those questions.