I don’t agree with that. Cluelessness seems to arise only if you have reason to think that, on average, your actions won’t make things better. And yet even a very flawed decision procedure will, on average across worlds, do better than chance. This seems to deal with epistemic cluelessness fine.
I respond to the “better than chance” claim in the post I linked to (in my reply to Richard). What do you think I’m missing there? (See also here.)
It’s a somewhat long post. Want to come on the podcast to discuss?
Sounds great, please DM me! Thanks for the invite. :)
In the meantime, if it helps, for the purposes of this discussion I think the essential sections of the posts I linked are:
“The structure of indeterminacy”
“Aggregating our representor with higher-order credences uses more information” (and “Response”)
(The section I linked to from this other post is more of a quick overview of stuff mostly discussed in the sections above. But it might be harder to follow because it’s in the context of a post about unawareness specifically, hence the “UEV” term etc. — sorry about that! You could skip the first paragraph and replace “UEV” with “imprecise EV”.)
Hi Anthony,
I think the arguments you provide only imply that the expected changes in welfare from pursuing any two strategies are closer than one may have thought. However, as long as the information about each strategy is not exactly the same, one will still be better than the other in expectation. If the difference is sufficiently small (e.g. my leaving home 0.001 s later), one could say they have practically the same expected value. I agree there are many more strategies in this situation than people realise. Yet I am not convinced that literally all possible strategies are in that situation.
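To make the point concrete with toy numbers of my own (not taken from your posts): suppose a good outcome has utility 1 and a bad one 0, and the strategies’ success probabilities are $p_A = 0.5001$ and $p_B = 0.5$. Then

$$
\mathrm{EV}(A) = p_A = 0.5001 > 0.5 = p_B = \mathrm{EV}(B),
$$

so $A$ is better in expectation, however negligibly.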
Hi Vasco —
My posts argue that this is fundamentally the wrong framework. We don’t have precise “expectations”.
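As a toy sketch of what I mean (illustrative numbers of my own, not from the posts): if the evidence only constrains the probability that $A$ turns out well to a range, our credal state is better represented by a set of credence functions (a representor) than by a single number, and expected values come out as intervals:

$$
\mathrm{EV}(A) \in [0.4,\, 0.6], \qquad \mathrm{EV}(B) = 0.5.
$$

Some credence functions in the representor rank $A$ above $B$ and others rank $B$ above $A$, so there is no determinate fact about which is better in expectation.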
Thanks, Anthony. Is there any empirical evidence that would change your mind on that?
This particular claim isn’t empirical, it’s about what follows from compelling epistemic principles.
(As for empirical evidence that would change my mind about imprecision being so severe that we’re clueless, see our earlier exchange. I guess we hit a crux there.)