(I’m writing this while a little jetlagged, so it might be a bit incoherent or disconnected from what you were saying.)
> I think invoking simple cluelessness in the case of choosing which chair to sit on is the only way a committed consequentialist can feel OK making a decision one way or the other—otherwise they fall prey to paralysis.
I don’t think this is right. I think the key thing is to remember that doing more analysis (thinking, discussing, researching, whatever) is itself a choice, and itself has a certain expected value (which is related to how long it will take, how likely it is to change what other decision you make, and how much of an improvement that change might be). Sometimes that expected value justifies the opportunity cost, and sometimes it doesn’t. This can be true whether you can or can’t immediately see any difference in the expected value of the two “concrete choices” (this is a term I’m making up to exclude the choice to do further analysis).
E.g., I don’t spend time deciding which of two similar chairs to sit in, and this is the right decision for me to make from a roughly utilitarian perspective, and this is because:
- It seems that, even after quite a while spent analysing which chair I should sit in, the expected value I assign to each choice would be quite similar
- There are other useful things I can do with my time
- The expected value of just choosing a chair right away and then doing certain other things is higher than the expected value of first spending longer deciding which chair to sit in
(Of course, I don’t explicitly go through that whole thought process each time I implicitly make a mundane decision.)
But there are also some cases where the expected values we’d guess each of two actions would have are basically the same and yet we should engage in further analysis. This is true when the opportunity cost of the time spent on that analysis seems justified, in expectation, by the probability that that analysis would cause us to change our decision and the extent to which that change might be an improvement.
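That trade-off can be sketched as a toy value-of-information calculation. All the numbers below are made up purely for illustration; the point is just that the same formula says "don't deliberate" in the chair case and "do deliberate" in a high-stakes case, even though the concrete options look equally good in both:

```python
# Toy value-of-information sketch (all numbers are invented for illustration).
# Further analysis is worth doing when its expected improvement to the eventual
# decision exceeds the opportunity cost of the time it takes.

def expected_value_of_analysis(p_change, avg_improvement, opportunity_cost):
    """Expected net value of doing further analysis before acting."""
    return p_change * avg_improvement - opportunity_cost

# Chair case: analysis is very unlikely to change the choice, and any change
# would barely matter, so even a small opportunity cost makes it net-negative.
chairs = expected_value_of_analysis(p_change=0.01, avg_improvement=0.1,
                                    opportunity_cost=1.0)

# High-stakes case: the two concrete options look equally good right now, but
# analysis might well change the decision, and the improvement could be large.
career = expected_value_of_analysis(p_change=0.3, avg_improvement=100.0,
                                    opportunity_cost=5.0)

print(chairs)  # negative: just pick a chair and move on
print(career)  # positive: further analysis is justified
```

The asymmetry comes entirely from the probability of changing the decision and the size of the potential improvement, not from anything special about having "no clue" in one case.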
So I don’t think the concept of “simple cluelessness” is necessary, and I think it’s unhelpful in that:
- It sounds absolute and unchangeable, whereas in many cases one either already has or could come to have a belief about which action would have higher expected value
- It implies that there’s something special about certain cases where one has extremely little knowledge, whereas really what’s key is how much information value various actions (e.g., further thinking) would provide and what opportunity cost those actions have