One practical objection to Fanaticism is that in cases where we believe there is a small chance of a very large positive payoff, we should also suspect a chance of a very large negative payoff. And we may have deep uncertainty about both the probabilities and the payoffs, so that the expected value could be positive or negative, and could take extreme values either way.
So, consider the positronium example:
The second organisation does speculative research into how to do computations using ‘positronium’ - a form of matter which will be ubiquitous in the far future of our universe. If our universe has the right structure (which it probably does not), then in the distant future we may be able to use positronium to instantiate all of the operations of human minds living blissful lives, and thereby allow morally valuable life to survive indefinitely.
What if this backfires spectacularly and we instantiate astronomical amounts of suffering instead? Or what if similar technology is used to instantiate astronomical suffering in order to threaten us into submission (strategic threats in conflict)?
Given how speculative this is and that we won’t get feedback on whether we’ve done good or harm until it’s too late, are there other ways this can go very badly that we haven’t thought of?
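To make the sign-uncertainty worry concrete, here is a minimal sketch. All of the credences and payoff magnitudes below are invented for illustration, not estimates from the example above; the point is only that under deep uncertainty the expected value swings between extreme positive and extreme negative depending on which combination you trust.

```python
# Illustrative sketch of the sign-uncertainty worry. All numbers are
# made-up assumptions, not estimates from the discussion above.
from itertools import product

p_good_range = [1e-12, 1e-9]   # credence the intervention succeeds
p_bad_range = [1e-12, 1e-9]    # credence it backfires
v_good_range = [1e20, 1e25]    # value if it succeeds (arbitrary units)
v_bad_range = [-1e20, -1e25]   # value if it backfires

# Expected value under every combination of admissible assumptions.
evs = [
    p_g * v_g + p_b * v_b
    for p_g, v_g, p_b, v_b in product(p_good_range, v_good_range,
                                      p_bad_range, v_bad_range)
]
print(f"EV ranges from {min(evs):.3e} to {max(evs):.3e}")
# The sign of the EV depends entirely on which combination we trust,
# which is exactly the deep-uncertainty problem for Fanaticism.
```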
In the case of a Pascal’s mugging, when someone is threatening you, you should consider that giving in may encourage this behaviour further, leading to more threats and more threats followed through on; and the kind of person who threatens you this way may use the resources for harm anyway.
In the case of Pascal’s wager, there are multiple possible deities, each of which might punish or reward you or others (infinitely) based on your behaviour. It may turn out that you should take your chances with one or more of them, and that you should spend a lot of time and resources finding out which. This could end up being a case of unresolvable moral cluelessness: choosing any of the deities isn’t robustly good, nor is choosing none of them robustly bad, and you’ll never be able to get past this uncertainty. Then you may have multiple permissible options (according to the maximality rule, say), but given a particular choice (or non-choice), there could still be things that appear to you to be robustly better than others, e.g. donating to charity X instead of charity Y, or not torturing kittens for fun instead of torturing kittens for fun. This doesn’t mean anything goes. (See the sketch below.)
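Here is a minimal sketch of how the maximality rule delivers that verdict. The options, the "admissible" credence functions, and the payoffs are all invented for illustration; the rule itself says an option is permissible iff no rival is at least as good under every admissible credence function and strictly better under at least one.

```python
# A toy maximality-rule calculation under deep uncertainty.
# Expected value of each option under each of two admissible
# credence functions (all numbers are illustrative assumptions).
ev = {
    "worship_deity_A": [10.0, -10.0],  # great under credences 1, awful under 2
    "worship_deity_B": [-10.0, 10.0],  # the reverse
    "abstain":         [0.0, 0.0],     # middling under both
    "abstain_and_torture_kittens": [-1.0, -1.0],  # strictly worse than abstaining
}

def robustly_better(a, b):
    """True iff a is at least as good as b everywhere and better somewhere."""
    pairs = list(zip(ev[a], ev[b]))
    return all(x >= y for x, y in pairs) and any(x > y for x, y in pairs)

permissible = [o for o in ev
               if not any(robustly_better(rival, o) for rival in ev if rival != o)]
print(permissible)
# -> ['worship_deity_A', 'worship_deity_B', 'abstain']
# Every worship-or-abstain option comes out permissible, but torturing
# kittens for fun is still ruled out: cluelessness about the deities
# doesn't mean anything goes.
```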
Just a note on the Pascal’s Mugging case: I do think the case can probably be overcome by appealing to some aspect of the strategic interaction between different agents. But I don’t think it comes out of the worry that they’ll continue mugging you over and over. Suppose you (morally) value losing $5 to the mugger at −5 and losing nothing at 0 (on some cardinal scale). And you value losing every dollar you ever earn in your life at −5,000,000. And suppose you have credence (or, alternatively, evidential probability) of p that the mugger can and will generate any amount of moral value or disvalue they claim they will. Then, as long as they claim they’ll bring about an outcome worse than −5,000,000/p if you don’t give them $5, or an outcome better than +5,000,000/p if you do, EV theory says you should hand it over, even after pricing in the worst case of being mugged out of every dollar you ever earn. And likewise for any other fanatical theory, if the payoff is just scaled far enough up or down.
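Here is the arithmetic as a quick sketch; the credence p and the mugger's claimed payoff are arbitrary illustrative numbers chosen to sit just past the threshold.

```python
# Worked version of the threshold argument above. The credence p and
# the claimed payoff are arbitrary illustrative assumptions.
p = 1e-6                         # credence that the mugger's claim is genuine
v_hand_over_worst = -5_000_000   # value of losing every dollar you ever earn
claimed_payoff = -5_000_000 / p - 1  # claim just past the -5,000,000/p threshold

ev_refuse = p * claimed_payoff   # you keep your $5, but risk the claimed outcome
ev_give = v_hand_over_worst      # worst case: repeated muggings drain everything

print(f"EV(refuse) = {ev_refuse:,.0f}")
print(f"EV(give, worst case) = {ev_give:,.0f}")
assert ev_give > ev_refuse
# Even granting the worst case of handing over everything you ever earn,
# EV theory says to comply once the claimed payoff passes -5,000,000/p.
# So the repeated-mugging worry alone doesn't block the argument.
```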
Yes, in practice that’ll be problematic. But I think we’re obligated to take both possible payoffs into account. If we do suspect the large negative payoffs, it seems pretty awful to ignore them in our decision-making. And then there’s a weird asymmetry if we pay attention to the negative payoffs but not the positive.
More generally, Fanaticism isn’t a claim about epistemology. A good epistemic and moral agent should first do their research, consider all of the possible scenarios in which their actions backfire, and put appropriate probabilities on them. If they do the epistemic side right, it seems fine for them to act according to Fanaticism when it comes to decision-making. But in practice, yeah, that’s going to be an enormous ‘if’.