Fanatical EAs should support very weird projects

Summary: EAs who accept fanaticism (the idea that we should pursue long shots with enough expected value) should favor some pretty weird projects. E.g. trying to create quantum branches, converting sinners to the one true religion, or researching other out-there cause areas. This is unreasonable, if not irrational. We should therefore be somewhat wary of expected value calculations that support such weird projects.

Fanaticism

Fanaticism is the idea that we should base our decisions on all of the possible outcomes of our actions no matter how unlikely they are.[1] Even extremely unlikely outcomes may sway what we should do if they are sufficiently good or bad.

Traditional decision theories tell us to maximize expected utility in some form or other. This is fanatical, for it may be that the actions that maximize expected utility produce inordinate amounts of value at extremely low probabilities.[2] There are other ways to be fanatical, but I’ll assume here that EA fanatics take a roughly expected-utility-maximizing approach.[3]

Fanaticism isn’t a weird idea. It’s a sensible and straightforward way of building a decision theory. But it has weird implications in practice. The weirdness of what fanaticism would have EAs do should make us suspicious of it.

Potential Fanatical EA Projects

1.) Quantum Branching[4]

Some simple versions of the Many Worlds interpretation of quantum mechanics say that the universe regularly branches during quantum events, producing multiple universes that differ from each other in the states of some particles. If the universe branches in this way, the value of all subsequent events might multiply.[5] There could be twice as much joy and twice as much suffering every time the universe branches in two.[6]

We have the power to produce these quantum events. We can chain them one after another, potentially doubling, then redoubling, then again redoubling all the value and disvalue in the world. These quantum events also happen all the time without our deliberate choice, but our decisions would make a difference to how many branchings occur. This gives us the power, pretty trivially, to increase the total amount of value (for better or worse) in the world exponentially, by astronomical factors.

The interpretations of quantum mechanics that allow for branchings like this are almost surely wrong. Almost. There is a small sliver of chance that this is the right way to think about quantum phenomena. Quantum phenomena are weird. We should be humble. The interpretations are logically coherent. They deserve, I think, at least a one in a quintillion[7] probability of being right (and possibly a lot higher).

A small sliver of probability in a simple Many Worlds interpretation is enough to make the expected value of spending our time and money producing[8] quantum events that might trigger branchings very high. It doesn’t much matter how low a probability you assign. (If you like, add a couple hundred zeros behind that quintillion, and the expected value of attempting to produce branches will still be enormous.) Doubling the value of our world a thousand times successively would effectively multiply the amount of value by a factor of 2^1000. If this interpretation of quantum mechanics is true, then a vast number of branches must already be continuously created, and there must be a tremendous amount of value in all of the lives in all those branches. Additional divisions would effectively multiply the number of branches created in the future. Multiplying that much value by a factor of 2^1000 would mean multiplying a tremendous amount of value by a huge factor. This suggests an expected utility inconceivably greater than that of any current EA project.[9]
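
To see how lopsided the arithmetic is, here is a minimal sketch in Python. All the numbers are stand-ins I’ve chosen for illustration (the one-in-a-quintillion credence, an arbitrary unit value for the world, a generous payoff for a conventional project); nothing hangs on the particular choices.

```python
from fractions import Fraction  # exact rational arithmetic, so extreme magnitudes stay exact

# Stand-in numbers for illustration only.
p_branching = Fraction(1, 10**18)  # one-in-a-quintillion credence in the simple branching story
world_value = 10**9                # current value of the world, in arbitrary units
multiplier = 2**1000               # value factor from a thousand successive doublings

ev_forcing_branches = p_branching * world_value * multiplier
ev_conventional_project = 10**12   # a wildly generous payoff for a conventional EA project

print(ev_forcing_branches > ev_conventional_project)         # True
print(float(ev_forcing_branches / ev_conventional_project))  # ~1e280: the tiny credence hardly matters
```

Adding a couple hundred more zeros to the denominator of the credence only shaves a couple hundred orders of magnitude off a ratio that has close to three hundred of them.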

2.) Evangelism

In one of the very first applications of probabilistic decision theory, Pascal argued that we should attempt to believe in religious teachings in case they make the difference in where we spend our eternal afterlife. Many religions suggest that our actions here on Earth will make an infinitely significant difference to our wellbeing after we die. It is natural to apply this idea to charity. If we really want to help other people, we should aim to secure their eternal afterlife in a good place.

No religion promising eternal damnation or salvation based on our earthly actions is particularly plausible, but none can be totally ruled out either. Religious views are coherent. Every major religion has some extremely intelligent people who believe in it. It would be irrationally self-confident not to give such religions even a remote chance of being correct.

Insofar as we are concerned with everyone’s wellbeing, the prospect of an infinite afterlife should make it extremely important that we get as many people into heaven and out of hell as possible, even if we think such outcomes are extremely unlikely. Making the difference for one person would account for a greater difference in value than all of the secular good deeds ever performed. Saving the soul of one individual would be better than forever ending factory farming. It would be better than ensuring the survival of a trillion generations of human beings.

There are significant complications to Pascal’s argument: it isn’t clear which religion is right, and any choice that earns infinite rewards on one view may incur infinite punishments on another, and such infinities are hard to compare. This gets us deep into infinite ethics, a tricky subject.

Whatever we end up doing with these complications, I still think Pascal was probably right that religious considerations should swamp all known secular considerations. If we substitute sufficiently large finite numbers for the infinite values of heaven and hell, then so long as the considerations aren’t perfectly balanced, they will dominate expected utilities.
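
Here is a toy version of that finite substitution, with numbers of my own invention rather than anything argued for above: a large but finite stand-in for the afterlife, and two rival religions with conflicting prescriptions and slightly different credences. The point is only that a small imbalance between enormous stakes still swamps any secular payoff.

```python
# Toy numbers of my own, not estimates from the post.
afterlife_value = 10**100   # large but finite stand-in for heaven/hell
p_religion_a = 1.0e-12      # credence that religion A is right
p_religion_b = 0.9e-12      # slightly lower credence that rival religion B is right

# Evangelizing for A secures the afterlife if A is right, forfeits it if B is right.
ev_evangelize_a = (p_religion_a - p_religion_b) * afterlife_value
ev_best_secular = 10**15    # a generous stand-in for the best secular project

print(ev_evangelize_a)                    # ~1e87
print(ev_evangelize_a > ev_best_secular)  # True: the slight imbalance dominates
```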

We should perhaps base our charitable decisions entirely on which religions are the least implausible, which promise the greatest rewards, which offer the clearest paths into heaven, and so on, and devote our time and money to evangelizing accordingly.

3.) Absurdist Research

The previous two proposals sketch ways we might be able to create tremendous amounts of value relatively easily. If the Many Worlds interpretation is correct, creating quantum branches is much easier than outlawing gestation crates. If Jonathan Edwards was right about God, saving a few people from eternal damnation is a lot easier than solving the alignment problem. These proposals involve far-fetched ideas about the way the world works. It may be that by thinking about more and more absurd hypotheticals, we can find other remote possibilities with even greater payoffs.

Searching through absurd hypotheticals for possible cause areas is an extremely neglected task. No one, as far as I’m aware, is actively trying to work out what prospects there are for producing inordinate amounts of value at probabilities far less than one in a trillion. Human intellectual activity in general has a strong bias towards figuring out what we have strong evidence for, not towards figuring out what we can’t conclusively rule out. We don’t have good epistemic tools for distinguishing one in a trillion hypotheses from one in a googol hypotheses, or for saying when considerations are perfectly balanced for and against and when there are minuscule reasons to favor some options over others.

The research needed to identify remote possibilities for creating extraordinarily large amounts of value could itself be treated as a cause area, for it is only after such possibilities are recognized that we can act on them. If there is a one in a quintillion probability of finding a proposal that merits a one in a quintillion probability and that, if true, would let us trivially and exponentially raise the value of the universe, then it is worth devoting all our fanatical attention to looking for it.

There are some reasons to be optimistic. There are a huge number of possibilities and the vast majority are extremely unlikely and have never before been considered. The recognized values of available charitable projects are generally pretty small in the grand scheme of things. There may be ways, such as with duplication via branching or with creating whole new universes, to produce vast amounts of value. If there are such remote possibilities, then they could easily dominate expected utilities.

Lessons

1.) Fanaticism is unreasonable

I think this is pretty clear from the above examples.[10] I feel reasonably confident that fanatical EAs should be working on one of those three things—certainly not anything mainstream EAs are currently doing—and I lean toward absurdist research. Maybe I’ve mistaken how plausible these projects are, or there are some better options[11] I’m missing. The point is that the fanatic’s projects will look more like these than like space governance or insect suffering. And the point is not just that fanatical EAs would devote some time to these absurd possibilities, but that these are the only things fanatical EAs would see as worth pursuing.

2.) Rationality can be unreasonable

Isaacs, Beckstead & Thomas, and Wilkinson point out how weird it would be to adopt a complete and consistent decision theory that wasn’t fanatical. It would involve making arbitrary distinctions between minute differences in the probabilities of different wagers, or evaluating packages of wagers differently than one evaluates the sum of the wagers taken individually. Offered enough wagers, non-fanatics must make some distinctions that they will be very hard-pressed to justify.

I take it that a rational decision procedure must be complete and consistent. If you’re rational, you have a pattern of making decisions that is coherent come what wagers may. That pattern can’t involve arbitrary differences, such as refusing a wager at one probability when it costs one penny while accepting the same wager at a .0000000000001% greater probability when it costs your whole life savings. Isaacs et al. are right that it is rational to follow a decision procedure that is fanatical and irrational to follow one that is not.

However, I don’t think this challenges the fact that it is clearly unreasonable to be fanatical. If you must decide between devoting your life to spreading the gospel for some religion that you think is almost certainly wrong and making an arbitrary distinction between two remote wagers that you will never actually be offered, the reasonable way to go is the latter.

This shows that sometimes it is unreasonable to be rational. There are plenty of cases where it is unfortunate to be rational (e.g. Newcomb’s paradox). This goes a step further. Reasonability and rationality are separate concepts that often travel together, but not always.

3.) Expected value shouldn’t determine our behavior

Where rationality and reasonability come apart, I’d rather be reasonable, and I hope you would too. Insofar as fanaticism is unreasonable, we should ignore some small probabilities. We shouldn’t work on these projects. We should also be wary about more benign appeals to very low-probability but high-value possibilities. There is no obvious cutoff where it becomes reasonable to ignore small probabilities. We should probably not ignore probabilities on the scale of one in a thousand. But one in a million? One in a billion?

4.) We should ignore at least some probabilities on the order of one in a trillion, no matter how much value they promise

There’s a bit of a history of estimating how low the probabilities are that we can ignore.

I’m not sure precisely how plausible the simplistic Many Worlds interpretation or evangelical religions are, but I can see a case that the relevant probabilities are as high as one in a trillion. Even so, I think it would be unreasonable to devote all of EA’s resources to these projects. It follows that at least some probabilities on that order should be ignored.

It doesn’t follow from the fact that we should ignore some one in a trillion probabilities that we should ignore all probabilities on that order, but I’d hope there would be a good story about why some small probabilities should be ignored and some equally small probabilities shouldn’t.

That story might distinguish between probabilities that are small because they depend on absurd metaphysical postulates and probabilities that are small because they depend upon lots of mundane possibilities turning out just right, but I worry that drawing such a distinction is really just a way for us to save face. We don’t want to have to evangelize (at least I don’t), so we tell a story that lets us off the hook.

A more promising alternative might distinguish between kinds of decisions that humans must make over and over, where the outcomes are independent, and decisions that are one-offs or dependent on each other. Collectively ignoring relatively small probabilities on a large number of independent wagers will very likely get us into trouble. Small probabilities add up.
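
A quick sketch of that last point, with assumed numbers rather than anything argued for above: ignoring a one-in-a-trillion possibility is harmless in a one-off decision, but across enough independent decisions of the same kind, the ignored possibilities become nearly certain to come up somewhere.

```python
# Assumed numbers for illustration only.
p_ignored = 1e-12    # probability ignored on each wager (one in a trillion)
n_one_off = 1        # a single decision
n_repeated = 10**13  # many independent decisions of this kind

def chance_some_ignored_wager_hits(p, n):
    """Probability that at least one of n independent ignored wagers pays off."""
    return 1 - (1 - p) ** n

print(chance_some_ignored_wager_hits(p_ignored, n_one_off))   # ~1e-12: negligible in isolation
print(chance_some_ignored_wager_hits(p_ignored, n_repeated))  # ~0.99995: collectively, almost certain
```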


  1. ↩︎

    More formally: for any wager with probability greater than 0 and a finite cost, there is a possible reward value for winning that makes it rational to accept the wager. The terminology comes from Hayden Wilkinson’s In Defense of Fanaticism. See also Smith’s Is Evaluative Consistency a Requirement of Rationality, Isaacs’s Probabilities cannot be rationally neglected, Monton’s How to Avoid Maximizing Expected Utility, Beckstead and Thomas’s A paradox for tiny probabilities and enormous values, and Russell’s On Two Arguments for Fanaticism. Kokotajlo’s sequence on remote possibilities is also great.

  2. ↩︎

    If value is bounded by a ceiling, expected utility maximization doesn’t entail fanaticism. There may be nothing that could occur at a small probability that would be sufficiently valuable to be worth paying some cost. Bounded value functions for moral values are rather strange, however, and I don’t think this is a plausible way to get around the issues.

  3. ↩︎

    We might discount the expected value of low probability prospects, but only by a reasonable factor. Even quite generous discounting will allow us to draw unreasonable conclusions from fanaticism.

  4. ↩︎
  5. ↩︎

    There are different ways we could evaluate separate branches that are independent of how we think of the metaphysics of these branches. It is plausible, but not obvious, that we should treat the separate branches in the same way we treat separate situations in the same branch.

  6. ↩︎

    There would be differences between the two branches, which might grow quite large as time goes on. So bifurcation wouldn’t strictly speaking double all value, but on average we should expect bifurcations to approximately double value.

  7. ↩︎

    One in a quintillion is equivalent to getting three one in a million results in a row. If we think that there is a one in a million chance that the Many Worlds interpretation is true, a one in a million chance that, given that, the simple version formulated here is true, and a one in a million chance that, given that, value in such universes would effectively double after division, then we should allow this hypothesis a one in a quintillion probability.

  8. ↩︎

    Or thwarting, for the pessimists.

  9. ↩︎

    Skeptical of these numbers? There’s an argument against even pausing to consider where the argument goes wrong. Each additional bifurcation makes the universe so much better. At best you figure out that it isn’t worth your time and are down a few hours. At worst you miss a chance to multiply all the value on Earth many times over.

  10. ↩︎

    The traditional route to rejecting fanaticism comes by way of evaluating the St. Petersburg game. I find the examples here more convincing since they don’t rely on infinite structures of payoffs and they are genuine options for us.

  11. ↩︎

    Wilkinson suggests positronium research on the grounds that it might some day enable an infinite amount of computation, letting us produce an infinite number of good lives. I’m not sure if this was intended as a serious proposal, but it strikes me as less promising than the proposals I put forward here. Even if it is possible, there’s a case to be made that it is better to create many quantum branches with the expectation we’ll figure out positronium computers in a bunch of them.