If you accept Fanaticism, how do you respond to Pascal’s wager and Pascal’s mugging, etc.?
Both cases are traditionally described in terms of payoffs and costs just for yourself, and I’m not sure we have quite as strong a justification for being risk-neutral or fanatical in that case. In particular, I find it at least a little plausible that individuals should effectively have bounded utility functions, whereas it’s not at all plausible that we’re allowed to do that in the moral case—it’d lead to something a lot like the old Egyptology objection.
That said, I’d accept Pascal’s wager in the moral case. It comes out of Fanaticism fairly straightforwardly, with some minor provisos. But Pascal’s Mugging seems avoidable—for it to arise, we need another agent interacting with you strategically to get what they want. I think it’s probably possible for an EV maximiser to avoid the mugging as long as we make their decision-making rule a bit richer in strategic interactions. But that’s just speculation—I don’t have a concrete proposal for that!
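To make the bounded-versus-unbounded contrast concrete, here is a minimal sketch (an illustration added here, not a proposal from the discussion) of why a risk-neutral expected-value maximiser with an unbounded utility function hands over the money in a mugging-style case at any positive probability, while an agent whose utilities saturate at some cap can refuse. All the numbers, the cap, and the function names are arbitrary assumptions for illustration.

```python
# Illustrative sketch: expected-value maximisation under unbounded vs. bounded
# utility in a Pascal's-mugging-style choice. Payoffs, probabilities, and the
# utility cap are assumed purely for demonstration.

def ev(prospect, utility):
    """Expected utility of a prospect: a list of (probability, payoff) pairs."""
    return sum(p * utility(x) for p, x in prospect)

def unbounded_utility(x):
    return x  # risk-neutral: utility is just the payoff

def bounded_utility(x, cap=1_000):
    return min(x, cap)  # utilities saturate at the cap

# The mugger asks for 10 units now, promising 10**100 units with probability 10**-50.
accept = [(1e-50, 1e100), (1 - 1e-50, 0.0)]
refuse = [(1.0, 10.0)]  # keep your 10 units

# Unbounded utility: the tiny probability is swamped by the astronomical payoff.
print(ev(accept, unbounded_utility) > ev(refuse, unbounded_utility))  # True: pay the mugger
# Bounded utility: the promised payoff can't exceed the cap, so refusing wins.
print(ev(accept, bounded_utility) > ev(refuse, bounded_utility))      # False: keep your money
```

The bound does the work here: once utilities are capped, no promised payoff, however large, can offset an arbitrarily small probability, which is exactly the move that seems more defensible for individual prudence than for morality.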