It’s good to know lots of people have this intuition—I think I do too, though it’s not super strong in me.
Arguably, when p is below the threshold you mention, we can make some sort of pseudo-law-of-large-numbers argument for expected utility maximization, like “If we all follow this policy, probably at least one of us will succeed.” But when p is above the threshold, we can’t make that argument.
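One way to make “probably at least one of us will succeed” quantitative (my own rough framing, with q as the per-agent success probability under the policy, not the p from your comment): if N agents independently follow the policy, then

$$\Pr(\text{at least one of the } N \text{ agents succeeds}) = 1 - (1-q)^N \approx 1 - e^{-Nq},$$

which is near 1 when q is well above 1/N and falls toward 0 when q is well below it. So the pseudo-LLN argument is only available when the success probability isn’t too tiny relative to the number of agents (or decision-opportunities).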
So the idea is: Reject expected utility maximization in general (perhaps for reasons which will be discussed in subsequent posts!), but accept some sort of “If following a policy seems like it will probably work, then do it” principle, and use that to derive expected utility maximization in ordinary cases.
All of this needs to be made more precise and explored in more detail. I’d love to see someone do that.
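In that spirit, here’s a quick toy simulation; the payoff numbers, the “always take the safe option” baseline, and the “do it if it probably beats the baseline” reading are all my own illustrative choices, not anything precise:

```python
import random

def simulate(policy, p, payoff, n_decisions, n_trials=10_000):
    """Estimate how often `policy` ends up with more total utility than
    always taking the safe option (utility 1 per decision)."""
    wins = 0
    for _ in range(n_trials):
        total = 0.0
        for _ in range(n_decisions):
            if policy(p, payoff):          # take the risky bet
                total += payoff if random.random() < p else 0.0
            else:                          # take the safe option
                total += 1.0
        if total > n_decisions:            # beat the "always safe" baseline
            wins += 1
    return wins / n_trials

def eu_max(p, payoff):
    # Expected-utility maximization: take the bet whenever p * payoff > 1.
    return p * payoff > 1

# Ordinary case: a decent-probability bet, repeated 100 times.
# EU maximization almost certainly beats the safe baseline,
# so "follow the policy that probably works" endorses it here.
print(simulate(eu_max, p=0.5, payoff=3, n_decisions=100))          # ~1.0

# Pascalian case: a one-shot tiny-probability, huge-payoff bet.
# EU maximization almost certainly does worse than the baseline,
# so the pseudo-LLN-style argument for it is unavailable.
print(simulate(eu_max, p=1e-6, payoff=10_000_000, n_decisions=1))  # ~0.0
```

In the repeated, ordinary-stakes case, expected utility maximization is also the policy that probably works; in the one-shot Pascalian case the two come apart.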
(BTW, upcoming posts remove the binary-outcomes assumption. Perhaps it was a mistake to post them in sequence instead of all at once...)