Instead, I’m saying that in many decision situations people find themselves in, although they could (somewhat) narrow their credence range by investing more thought, in practice the returns from that thinking aren’t enough to justify it, so they shouldn’t do the thinking.
(I don’t think this is particularly important, you can feel free to prioritize my other comment.) Right, sorry, I understood that part. I was asking about an implication of this view. Suppose you have an intervention whose sign varies over the range of your indeterminate credences. Per the standard decision theory for indeterminate credences, then, you currently don’t have a reason to do the intervention — it’s not determinately better than inaction. (I’ll say more about this below, re: your digits of pi example.) So if by “the returns from doing that thinking aren’t enough to justify it” you mean you should just do the intervention in such a case, that doesn’t make sense to me.
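To make the sign-variation point concrete, here’s a toy sketch with made-up numbers (the payoffs and the credence range are purely illustrative, not taken from anything above): if your credence that the intervention helps is indeterminate over an interval, the expected value can be negative at one end and positive at the other, so under a maximality-style rule the intervention isn’t determinately better than inaction.

```python
# Toy illustration (numbers are made up): an intervention whose expected value
# changes sign across an indeterminate credence range.

def expected_value(p_success: float, benefit: float = 10.0, harm: float = -8.0) -> float:
    """EV of the intervention given a precise credence p_success that it helps."""
    return p_success * benefit + (1 - p_success) * harm

# Suppose your credence that the intervention helps is indeterminate over [0.3, 0.6].
credence_range = [0.3, 0.4, 0.5, 0.6]
evs = [expected_value(p) for p in credence_range]
print(evs)  # [-2.6, -0.8, 1.0, 2.8] -- spans negative and positive

# Under a maximality-style rule, acting is permissible only if it's at least as good
# as inaction (EV = 0) on every admissible credence, so here the credence range
# alone gives you no determinate reason to do the intervention.
determinately_better = all(ev >= 0 for ev in evs)
print(determinately_better)  # False
```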