Thanks, Michael. Yes, you’re right—in the bit you quote from at the start I’m assuming the bursts have some kind of duration rather than being extensionless. I think that probably got mangled in trying to compress everything!
The zero-duration frame possibility is an interesting one; some of Vasco’s comments below point in the same direction, I think. Is your thought that the problem is something like this: if you have these isolated points of experience with zero duration, then, since there’s no non-zero objective duration we can assign to them, measuring duration objectively means counting those experiences as nothing; and intuitively that’s a mistake. There’s an experience of pain there, after all. It’s got to count for something!
I think that’s an interesting objection and one I’ll have to think more about. My initial reaction is that perhaps it’s bound up with a general weirdness that attaches to things that have zero measure but (in some sense) still aren’t nothing? For instance, there’s something weird about probability-zero events that are nonetheless genuinely possible, and taking account of events like that can lead to some strange interactions with otherwise plausible normative principles: it suggests a possible conflict between dominance and expected utility maximization (see Hájek, “Unexpected Expectations,” pp. 556-7 for discussion).
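To gesture at the kind of case I have in mind, here’s a toy illustration of my own (not Hájek’s own example): compare two gambles on a uniform measure over [0, 1] that differ only at a single, probability-zero point.

```latex
% Toy example (mine, not Hájek's): two gambles differing only on a
% probability-zero but genuinely possible event.
\[
A(\omega) = 1 \ \text{for all } \omega \in [0,1],
\qquad
B(\omega) =
  \begin{cases}
    0 & \text{if } \omega = \tfrac{1}{2},\\
    1 & \text{otherwise,}
  \end{cases}
\]
\[
\text{so } \mathbb{E}[A] = \mathbb{E}[B] = 1.
\]
```

Expected utility maximization is indifferent between A and B, since their expected utilities agree; but A weakly dominates B, and strictly so at the genuinely possible point ω = 1/2, so a dominance principle favours A.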
Thanks, Richard! In some sense, I think I agree; as I say in the conclusion, I’m most inclined to think this is one of those cases where we’ve got a philosophical argument we don’t immediately know how to refute for a conclusion that we should nonetheless reject, and so we ought to infer that one of the premises must be false.
On the other hand, I’m most inclined to say that the problem lies in the fact that standard models using imprecise credences, and their associated decision rules, build in too little structure in how they model our epistemic predicament. I still think our evidence fails to rule out probability functions that put enough probability mass on potential bad downstream effects to make AMF come out worse in terms of maximizing expected value relative to functions of that kind; the problem, as I’d identify it, is rather that the maximality rule gives such probability functions too much of a say when it comes to determining permissibility. Other standard decision rules for imprecise credences arguably suffer from similar issues. David Thorstad and I look a bit more in depth at decision rules that draw inspiration from voting theory and rely on some kind of measure on the set of admissible probability functions in our paper ‘Tough enough? Robust satisficing as a decision norm for long-term policy analysis’, but we weren’t especially sold on them.
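For concreteness, here’s a minimal sketch of the maximality rule in Python (the option names, states, and numbers are all made up for illustration; nothing here is from the paper): an option is permissible so long as no rival beats it under every admissible probability function, so a single sufficiently pessimistic admissible function is enough to keep doing nothing permissible alongside donating.

```python
# Minimal sketch of the maximality rule for imprecise credences.
# All option names, states, and numbers are illustrative assumptions.

# Utilities of each option in each state (made-up numbers).
utilities = {
    "donate_to_AMF": {"good_effects": 10.0, "bad_effects": -5.0},
    "do_nothing":    {"good_effects":  0.0, "bad_effects":  0.0},
}

# The admissible set: probability functions our evidence fails to rule
# out, each given by its probability for the "good_effects" state.
admissible_p_good = [0.9, 0.5, 0.2]  # 0.2 puts a lot of mass on "bad"

def expected_utility(option, p_good):
    u = utilities[option]
    return p_good * u["good_effects"] + (1 - p_good) * u["bad_effects"]

def maximal_options(options):
    """An option is maximal (hence permissible) iff no rival has strictly
    higher expected utility under *every* admissible probability function."""
    maximal = []
    for a in options:
        beaten = any(
            all(expected_utility(b, p) > expected_utility(a, p)
                for p in admissible_p_good)
            for b in options if b != a
        )
        if not beaten:
            maximal.append(a)
    return maximal

print(maximal_options(list(utilities)))
# -> ['donate_to_AMF', 'do_nothing']: the one pessimistic function
#    (p_good = 0.2) is enough to keep 'do_nothing' permissible.
```

That, in miniature, is the sense in which maximality gives each admissible probability function a kind of veto, which is what strikes me as giving probability functions of that kind too much of a say.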