Unless I’m misunderstanding your comment, I think it misses the point (sorry if I’ve misunderstood). You’re pointing to unequal/asymmetric probabilities, but the point of the test is that I can create a payoff which is large enough to outweigh the asymmetry.
1.) You really need the probabilities of the mugger and anti-mugger to be nearly exactly equal. If there is a slight edge to believing the mugger rather than the hypothetical anti-mugger, that is enough to get the problem off the ground.
I (the person being mugged) am creating the anti-mugger, so I can determine the payoff to be large enough that the anti-mugger wins in expectation.
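To spell out the arithmetic (the symbols below are mine, introduced only for illustration): say you assign the mugger's claim probability p and my anti-mugger's claim a smaller probability q, with p > q > 0, and the mugger's stated payoff is V_M. Since I invent the anti-mugger after hearing your numbers, I just announce a payoff V_A satisfying

$$V_A > \frac{p}{q}\, V_M \;\;\Longrightarrow\;\; q\, V_A > p\, V_M,$$

so the anti-mugger dominates in expectation no matter how lopsided the probabilities are, as long as q > 0.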
2.) <...> I’ve suggested some examples. You don’t consider any cases where we clearly do have asymmetric reasons …
I’m sorry, I only read your post quickly, but it seems that your examples are in fact subject to the reversal/inconsistency test, and also that you acknowledge those issues yourself.
You’re proving too much with your anti-mugger argument. This argument essentially invalidates EV reasoning in all practical cases.
For example, you could use EV reasoning to determine that you should give to an animal charity. But then you could imagine a demon whose sole purpose is to torture everyone on earth for the rest of time if you give to that animal charity. The probability of the demon is very small, but as you say you can make the negative payoff associated with the demon arbitrarily large, so that it becomes a very bad idea to give to the animal charity on EV grounds.
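Here is a toy numerical sketch of that construction (the numbers and variable names are mine, chosen purely for illustration; nothing below comes from your post): for any fixed nonzero probability you assign to the demon, the stipulated harm can always be inflated until the expected value of donating turns negative.

```python
# A toy version of the demon example. All numbers are invented purely for
# illustration; nothing here comes from the original post.

benefit = 1_000.0   # hypothetical expected good from giving to the animal charity
p_demon = 1e-12     # hypothetical (tiny) probability that the demon exists

# Keep inflating the stipulated harm until the expected value of donating flips negative.
harm = 1.0
while benefit - p_demon * harm >= 0:
    harm *= 10

ev_donate = benefit - p_demon * harm
print(f"harm = {harm:.0e}, EV(donate) = {ev_donate:.3g}")
# For any p_demon > 0 the loop terminates: once harm exceeds benefit / p_demon
# (about 1e15 here), donating looks like a very bad idea on EV grounds, even
# though nothing about the donation itself has changed.
```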
Being able to construct examples such as these means you can never justify doing anything through EV reasoning. So either your argument is wrong, or we give up on EV reasoning altogether.
A bit more detail on the examples from item (2):

Your first example (quantum/many worlds): I don’t think it’s clear that the quantum worlds example is more likely to be net positive than net negative. You talk about the Many Worlds hypothesis and say that our “power to produce quantum events <...> gives us the power to pretty trivially exponentially increase the total amount of value (for better or worse) in the world by astronomical numbers.” (emphasis added). In this case I don’t even need to apply the reversal/inconsistency test, because the statement itself already indicates that it could go either way, i.e. no case is made for the proposed action being net positive.
Your second example (evangelism/Pascal’s wager): I think you again acknowledge the problem:
“There are significant complications to Pascal’s argument: it isn’t clear which religion is right, and any choice with infinite rewards on one view may incur infinite punishments on another which are hard to compare.”
To be more specific, if you decided that converting everyone to religion X was the best choice, I could concoct religion anti-X. Under the doctrine of anti-X, every time you convert someone to religion X, it creates a large infinity* of suffering, an infinity I construct to be big enough to outweigh whatever reward religion X promises.
Sure, you might think that there are asymmetric reasons to believe religion X over religion anti-X. E.g. maybe a billion people believe religion X, whereas I’m the only one supporting religion anti-X, but I’ve constructed the payoffs to be much larger in favour of religion anti-X to offset this.
* If you really want to get into the details about the large infinity, we could say that each time we convert one person to religion X, we create a large infinity of new humans, and there exists a bijective mapping between that new set of humans and the real number line. Each of the new humans is subjected to an infinity of suffering which is more gruesome than the suffering in the hell of religion X.
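If it helps, one way (my gloss, not something from your post) to make the “larger infinity” in that footnote precise, assuming we compare outcomes by the cardinality of the affected population:

$$|\{\text{new humans}\}| = |\mathbb{R}| = 2^{\aleph_0} > \aleph_0,$$

so even if religion X promises each convert a countably infinite stream of reward, the anti-X construction attaches suffering across an uncountable population to the very same act of conversion.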