I appreciated a bunch of things about this comment. Sorry, I’ll just reply (for now) to a couple of parts.
The metaphor with hedonism felt clarifying. But I would say (in the metaphor) that I’m not actually arguing that it’s confused to intrinsically care about the non-hedonist stuff. Rather, I’m saying it would be really great to have an account of how the non-hedonist stuff is or isn’t helpful on hedonist grounds, for two reasons. First, this may just be a helpful input into our thinking, to whatever extent we endorse hedonist goods (even if we also care about other things). Second, without such an account it’s hard to assess how much of our caring for non-hedonist goods is grounded in those goods themselves, versus in some sense being debunked by the explanation that they are instrumentally good to care about on hedonist grounds.
I think the piece I feel most inclined to double-click on is the digits of pi example. Reading your reply, I realise I’m not sure what indeterminate credences are actually supposed to represent (and this is maybe more fundamental than “where do the numbers come from?”). Are they some analogue of betting odds? Or what?
And then, you said:
> I think this fights the hypothetical. If you “make guesses about your expectation of where you’d end up,” you’re computing a determinate credence and plugging that into your EV calculation. If you truly have indeterminate credences, EV maximization is undefined.
To some extent, maybe fighting the hypothetical is a general move I’m inclined to make? This gets at “what does your range of indeterminate credences represent?”. I think if you could step me through how you’d be inclined to think about indeterminate credences in an example like the digits of pi case, I might find that illuminating.
(Not sure this is super important, but note that I don’t need to compute a determinate credence here; it may be enough to have an indeterminate range of credences, all of which would make the EV calculation fall out the same way.)
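To make that last point concrete, here’s a minimal sketch (with made-up payoffs; none of these numbers are from the discussion): since expected value is linear in the credence, it’s enough to check the endpoints of the range, and if both endpoints agree on the sign, the decision goes through without ever picking a point estimate.

```python
# A minimal sketch (hypothetical numbers): a decision that is robust
# across a whole range of credences, so no point estimate is needed.

def ev(p, payoff_if_true, payoff_if_false):
    """Expected value of acting, at credence p in the proposition."""
    return p * payoff_if_true + (1 - p) * payoff_if_false

lo, hi = 0.40, 0.75    # indeterminate range of defensible credences
stakes = (10.0, -2.0)  # made-up payoffs if the proposition is true/false

# EV is linear in p, so checking the two endpoints covers the whole range.
if ev(lo, *stakes) > 0 and ev(hi, *stakes) > 0:
    print("every credence in the range says: act")
elif ev(lo, *stakes) < 0 and ev(hi, *stakes) < 0:
    print("every credence in the range says: don't act")
else:
    print("the range straddles zero: EV maximization is underdetermined")
```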
Not sure what I overall think of the betting odds framing, but to speak in its defence: I think there’s a sense in which decisions are more real than beliefs. (I originally wrote “decisions are real and beliefs are not”, but they’re both ultimately abstractions about what’s going on with a bunch of matter organized into an agent-like system.) I can accept the idea of X as an agent making decisions, and ask what those decisions are and what drives them, without implicitly accepting the idea that X has beliefs. Then “X has beliefs” is kind of a useful model for predicting their behaviour in decision situations. Or it could be used (as you imply) to analyse the rationality of their decisions.
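To gesture at what that could look like (the decisions below are entirely invented, just to illustrate the direction): if all we observe is which bets an agent accepts or declines, the “beliefs” we fit to those decisions may only be pinned down to an interval rather than a point, which is one way talk of indeterminate credences could be cashed out.

```python
# Hedged illustration (invented data): treating "beliefs" as a model
# fitted to decisions. Accepting a bet that is fair at credence p_fair
# is EV-positive iff the agent's credence exceeds p_fair, so the
# observed choices only bound the implied credence from both sides.

# Each entry: (credence at which the bet would be fair, accepted?)
observed_decisions = [
    (0.20, True),   # accepted a bet fair at credence 0.20
    (0.35, True),   # accepted at 0.35
    (0.60, False),  # declined at 0.60
    (0.80, False),  # declined at 0.80
]

accepted = [p for p, took in observed_decisions if took]
declined = [p for p, took in observed_decisions if not took]

lower = max(accepted) if accepted else 0.0
upper = min(declined) if declined else 1.0
print(f"implied credence lies somewhere in ({lower}, {upper})")
# -> implied credence lies somewhere in (0.35, 0.6)
```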
I like your contrived variant of the pi case. But to play on it a bit:
- Maybe when I first find out the information on Sally, I quickly eyeball things and think that defensible credences probably lie within the range 30% to 90%
- Then later, when I sit down and think about it more carefully, I conclude that the defensible credences are more like 40% to 75%
- If I thought about it even longer, maybe I’d tighten my range a bit further again (45% to 55%? 50% to 70%? I don’t know!)
In this picture, no realistic amount of thinking will narrow things down to a single defensible point estimate, and perhaps even in the limit of infinite thinking time I’d maintain an interval of what seems defensible, so some fundamental indeterminacy may well remain.
But to my mind, this kind of behaviour where you can tighten your understanding by thinking more happens all of the time, and is a really important phenomenon to be able to track and think clearly about. So I really want language or formal frameworks which make it easy to track this kind of thing.
Moreover, once you grant this kind of behaviour [do you grant this kind of behaviour?], you may notice that from our epistemic position we can’t even distinguish between the following (see the toy sketch after this list):
- Cases where, given arbitrarily long thinking time, we’d collapse our estimated range of defensible credences down to a very small range or even a single point, but where in practice progress is so slow that this isn’t viable
- Cases where, even in the limit of infinite thinking time, we would maintain a significant range of defensible credences
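As a toy illustration of that indistinguishability (the shrinkage schedules below are invented for the example): here are two processes whose ranges of defensible credences start out identical and tighten with thinking time, but one collapses to a point in the limit while the other converges to a nondegenerate interval.

```python
# Toy illustration (invented schedules): two ways the range of
# defensible credences might evolve with thinking time t.

def collapsing(t):
    """Range that shrinks to the single point 0.6 in the limit."""
    half_width = 0.25 / (1 + 0.001 * t)
    return (0.6 - half_width, 0.6 + half_width)

def persistent(t):
    """Range that converges to the nondegenerate interval (0.45, 0.75)."""
    half_width = 0.15 + 0.10 / (1 + 0.001 * t)
    return (0.6 - half_width, 0.6 + half_width)

for t in (0, 10, 100, 1000):
    print(t, collapsing(t), persistent(t))
# At small t the two are nearly indistinguishable; they differ only in
# where they are headed, which no feasible amount of thinking reveals.
```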
Because of this, from my perspective the question of whether credences are ultimately indeterminate is … not so interesting? It’s enough that in practice a lot of credences will be indeterminate, and that in many cases it may be useful to invest time thinking to shrink our uncertainty, but in many other cases it won’t be.