Thanks, Jesse. Is there a way we could actually do this? Like, choose some quantity F(X) that's unknown to both of us but guaranteed to be between 0 and 1, and if it's less than 1⁄2 I pay you a dollar, and if it's greater than 1⁄2 you pay me some large amount of money.
I feel pretty confident I would take that bet as long as the selection of F was not obviously antagonistic toward me, but maybe I'm not understanding the kinds of scenarios you're imagining.
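To make the stakes concrete, here's a quick sketch of the arithmetic (Python). The "$100" payout and the uniform draw over [0,1] are just my placeholders for "some large amount" and "not obviously antagonistic"; the adversarial case models a chooser who always picks F(X) just below 1⁄2:

```python
import random

def bet_payoff(f, large_amount=100.0):
    """Payoff to me: I pay $1 if f < 1/2, receive large_amount if f > 1/2."""
    if f < 0.5:
        return -1.0
    elif f > 0.5:
        return large_amount
    return 0.0  # treat exactly 1/2 as a push

# Non-antagonistic selection: F(X) drawn uniformly from [0, 1].
uniform_draws = [random.random() for _ in range(100_000)]
ev_uniform = sum(bet_payoff(f) for f in uniform_draws) / len(uniform_draws)

# Adversarial selection: the chooser always lands F(X) just below 1/2.
ev_adversarial = bet_payoff(0.49)

print(f"EV under uniform selection:     {ev_uniform:+.2f}")     # about +49.50
print(f"EV under adversarial selection: {ev_adversarial:+.2f}") # -1.00
```

So under anything like a uniform prior the bet is hugely favorable to me, and it only goes bad if whoever picks F is systematically steering it below 1⁄2.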
Good question! Yeah, I can’t think of a real-world process about which I’d want to have maximally imprecise beliefs. (The point of choosing a “demon” in the example is that we would have good reason to worry the process is adversarial if we’re talking about a demon…)
(Is this supposed to be part of an argument against imprecision in general / sufficient imprecision to imply consequentialist cluelessness? Because I don’t think you need anywhere near maximally imprecise beliefs for that. The examples in the paper just use the range [0,1] for simplicity.)