Summary: Against Anti-Fanaticism (Christian Tarsney)
Against Anti-Fanaticism is a Global Priorities Institute Working Paper by Christian Tarsney. This post is part of my sequence of GPI Working Paper summaries.
If you’d like a very brief summary, skip to “Conclusion/brief summary.”
Hilary Greaves and William MacAskill think objections to fanaticism are among the strongest counterarguments to strong longtermism. Such objections also underpin some of the strongest counterarguments to expected value theory. Thus, contemplating fanaticism is critical for comparing neartermist and longtermist causes.
Here I’ve done my best to summarize Tarsney’s argument, making it more easily accessible while sacrificing as little argumentative strength as possible.
Introduction
Anti-fanaticism has an intuitive edge: Say you must choose between guaranteeing a very good future for all sentient life and a gamble with a one-in-a-googol chance of an even better future (and instant annihilation otherwise). Most would take the sure thing; that's an anti-fanatical choice.
Instead of focusing on whether fanaticism is true or false, as previous papers do[1], Tarsney focuses on its opposing thesis: anti-fanaticism.
Anti-fanaticism[2]: There is some positive probability p and good g such that you prefer having g for sure to having any good (no matter how great) with probability p or less.
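Tarsney's formal definition is on page 6; a rough rendering of the informal statement above, in my own notation (writing ⟨x, q⟩ for "good x with probability q" and ≻ for "is preferred to"), is:

$$\exists\, p > 0 \;\exists\, g \;\forall\, G \;\forall\, q \le p:\quad \langle g,\, 1 \rangle \;\succ\; \langle G,\, q \rangle$$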
Tarsney stresses that there is a middle ground between fanaticism and anti-fanaticism, which he calls permissivism, and that arguments against anti-fanaticism (including this paper's) aren't arguments for fanaticism, and vice versa.
Anti-fanaticism generalized
Previous debate about fanaticism focuses on special cases of choosing between a binary gamble and a sure outcome. A more general (and realistic) case is choosing either to shift a small amount of probability from a much worse outcome to a better one or to modestly improve every outcome. This better reflects our choices about whether to try to mitigate existential risks, since those choices don't decisively determine the probabilities: our action isn't humanity's only hope, nor is our inaction humanity's only chance of doom. Instead, there are preexisting probabilities of both good and bad futures, which our actions can shift slightly away from doom. Hence, it's more accurate to model these choices as small differences in intermediate probabilities, not small absolute probabilities.
General Anti-Fanaticism (definition)
He alters the definition of anti-fanaticism to capture this more general and realistic case:
An Improvement (I) makes every possible outcome better.
General Anti-Fanaticism: There is a large enough improvement I and a small enough probability p such that certainty of improvement I is better than shifting probability p from one outcome (no matter how bad) to another (no matter how good).
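In rough notation (mine, not the paper's): writing $L + I$ for the lottery $L$ with every outcome improved by $I$, and $L[o^- \to_p o^+]$ for $L$ with probability $p$ moved from outcome $o^-$ to outcome $o^+$, the thesis says

$$\exists\, I \;\exists\, p > 0 \;\forall\, L,\, o^-,\, o^+:\quad L + I \;\succ\; L[o^- \to_p o^+]$$

no matter how bad $o^-$ or how good $o^+$ may be.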
Against anti-fanaticism
Tarsney argues General Anti-Fanaticism is incompatible with the following extremely plausible principles:
No Best Outcome: For every outcome, a better outcome is possible.
No Worst Outcome: For every outcome, a worse outcome is possible.
Minimal Dominance: If an outcome is better than another outcome, then certainty of the better outcome is better than certainty of the worse outcome.
Acyclicity: In a chain of prospects where each new prospect is better than the previous, the first prospect isn’t better than the last.
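In symbols (my shorthand, with $o$ ranging over outcomes and $A_i$ over prospects):

$$\begin{aligned}
&\textbf{No Best Outcome:} && \forall o \;\exists o': o' \succ o\\
&\textbf{No Worst Outcome:} && \forall o \;\exists o': o' \prec o\\
&\textbf{Minimal Dominance:} && o \succ o' \implies \langle o,\,1\rangle \succ \langle o',\,1\rangle\\
&\textbf{Acyclicity:} && A_0 \prec A_1 \prec \cdots \prec A_n \implies \neg(A_0 \succ A_n)
\end{aligned}$$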
He demonstrates that, if we accept No Best/Worst Outcome and Minimal Dominance, General Anti-Fanaticism is cyclical: When choosing between improving every outcome and shifting a small amount of probability to an astronomically better outcome, an anti-fanaticist always chooses the former. However, a series of such choices can leave our anti-fanaticist with a prospect they consider worse than the one they started with, violating Acyclicity (see pages 12 and 13 for the demonstration).
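Schematically, the violation has this shape: the anti-fanaticist's step-by-step choices generate a chain of prospects, each preferred to the one before,

$$A_0 \prec A_1 \prec \cdots \prec A_n,$$

and yet, comparing the endpoints directly, $A_0 \succ A_n$: the first prospect is better than the last, which Acyclicity forbids.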
Because Tarsney finds the principles behind this argument so plausible, he concludes we should reject General Anti-Fanaticism.
Compact EU and Quantile Discounting
He proposes two modifications of Expected Utility Theory for those who accept General Anti-Fanaticism despite its incompatibility with these extremely plausible principles: Compact EU and Quantile Discounting.
Compact EU
Bounded EU caps utility with a maximum and a minimum, which utility can approach but never reach. It has previously been used to block fanatical implications, along with other problems[3].
However, Bounded EU doesn't satisfy General Anti-Fanaticism: if the universe is very likely to be very good, most of the probability mass sits near the upper bound, so improving every outcome gains almost no utility, while shifting even a tiny probability away from extinction gains a lot. Bounded EU therefore prefers the shift over the improvement.
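To make this concrete, here is a toy numerical sketch (my own illustration, not from the paper; the tanh squashing function and all the numbers are assumptions chosen for concreteness):

```python
import math

def u(v, bound=1.0):
    # Bounded utility: squash an unbounded value v into (-bound, bound).
    # tanh is a stand-in; Tarsney doesn't commit to a particular function.
    return bound * math.tanh(v)

def expected_u(lottery):
    # A lottery is a dict mapping outcome values to probabilities.
    return sum(q * u(v) for v, q in lottery.items())

p = 1e-6                          # tiny probability of a very bad outcome
base = {10.0: 1 - p, -10.0: p}    # future very likely to be very good

# Option A: improve EVERY outcome by I = 1.
improved = {v + 1.0: q for v, q in base.items()}

# Option B: shift the probability p from the very bad outcome (-10)
# to an even better outcome (20).
shifted = {10.0: 1 - p, 20.0: p}

# Near the bound, improving everything gains almost nothing, while
# removing the bad tail gains roughly p * 2 * bound, so Bounded EU
# takes the shift rather than the guaranteed improvement:
print(expected_u(improved) < expected_u(shifted))  # True
```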
He says that if we modify Bounded EU to have hard limits (a reachable maximum and minimum utility), then it satisfies the spirit of General Anti-Fanaticism[4]. He calls this Compact EU. However, it violates either No Best/Worst Outcome or Minimal Dominance: roughly, if some outcome actually attains the maximum utility, then either no outcome is better than it (violating No Best Outcome), or some better outcome gets the same utility, so certainty of the better outcome isn't ranked above certainty of the worse one (violating Minimal Dominance). Tarsney finds rejecting either principle highly implausible.
Quantile Discounting
Other attempts to avoid fanaticism simply ignore very small probabilities, but these face powerful objections. Tarsney proposes a version of small-probability discounting, Quantile Discounting[5], which he finds the most plausible way of satisfying General Anti-Fanaticism, though he remains skeptical of it because it violates Acyclicity (as shown in the previous section).
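For a feel of how a quantile-based discounting rule could work, here is a generic sketch (my assumption only, not Tarsney's exact formulation, which is on pages 18 to 20): evaluate a prospect by its expectation after ignoring the most extreme sliver of probability mass in each tail, treating outcome values directly as utilities.

```python
def trim_tail(items, delta):
    # Drop up to `delta` probability mass from the front of a
    # (value, probability) list.
    kept, to_cut = [], delta
    for v, q in items:
        cut = min(q, to_cut)
        to_cut -= cut
        if q - cut > 0:
            kept.append((v, q - cut))
    return kept

def quantile_discounted_value(lottery, delta=0.001):
    # Generic tail-discounting rule (illustrative, not Tarsney's exact
    # definition): ignore the worst and best `delta` of probability
    # mass, then take the expectation of what remains.
    items = sorted(lottery.items())        # worst value first
    items = trim_tail(items, delta)        # ignore the worst tail
    items = trim_tail(items[::-1], delta)  # ignore the best tail
    total = sum(q for _, q in items)
    return sum(v * q for v, q in items) / total

# A one-in-a-million shot at an astronomically good outcome is simply
# ignored, since it falls entirely inside the discarded top quantile:
gamble = {0.0: 1 - 1e-6, 1e12: 1e-6}
print(quantile_discounted_value(gamble))   # 0.0
```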
Conclusion/brief summary
Tarsney generalizes anti-fanaticism to capture more realistic cases, in which we slightly shift probabilities from much worse outcomes to much better ones, rather than making binary gambles.
He shows that, unless we give up some extremely plausible principles, this general version of anti-fanaticism is cyclical: an agent following it will prefer each prospect in a series to the one before, only to end up with a prospect they consider worse than the one they started with.
He proposes two modifications of Expected Utility Theory for those who agree with General Anti-Fanaticism despite its incompatibility with extremely plausible principles: Compact EU and Quantile Discounting.
He emphasizes that he hasn't argued for fanaticism, only against anti-fanaticism, and he argues that a middle ground exists: permissivism, which features incomplete preferences that are neither fanatical nor anti-fanatical. He thinks fanaticism's skeptics should prefer permissivism to anti-fanaticism.
[1] See Wilkinson's "In Defence of Fanaticism" and Beckstead & Thomas' "A Paradox for Tiny Probabilities and Enormous Values."
[2] His formal definition can be found on page 6.
[3] Namely, paradoxical objections to Expected Utility Theory, which Tarsney describes: "Unbounded utilities allow for prospects with infinite expected utility (generalizations of the St. Petersburg game), which have various paradoxical properties and are in tension with aspects of expected utility theory." (See footnote 10 on page 14 for further discussion.)
[4] This modified version of Bounded EU doesn't quite satisfy General Anti-Fanaticism, but it nearly does. Plus, it will fully satisfy a slightly modified version of General Anti-Fanaticism (see page 16 for discussion).
[5] See pages 18 to 20 for details.