This is a fascinating question – thank you.
Let us think through the range of options for addressing Pascal's mugging. There are basically three options:
A: Bite the bullet – if anyone threatens to cause infinite suffering, then do whatever they say.
B: Try to fix your expected value calculations to remove the problem.
C: Take an alternative approach to decision-making that does not rely on expected value.
It is also possible that A, B, and C all fail, for different reasons.*
Let's run through them.
A:
I think that in practice no one does A. If I emailed everyone in the EA/longtermism community saying "I am an evil wizard: give me $100 or I will cause infinite suffering!", I doubt I would get any takers.
B:
You made three suggestions for addressing Pascal's mugging. I would characterise suggestions 1 and 2 as ways of adjusting your expected value calculations to aim for more accurate expected value estimates (not as using an alternative decision-making tool).
I think it would be very difficult to make this work, as it leads to problems such as the ones you highlight.
You could maybe make this work by applying a heavy discount, based on "optimiser's curse"-type factors, to reduce the expected value of high-uncertainty, high-value decisions. I am not sure.
(The GPI paper on cluelessness basically says that expected value calculations can never work to solve this problem. It is plausible you could write a similar paper about Pascal's mugging. It might be interesting to read the GPI paper and mentally replace "problem of cluelessness" with "problem of Pascal's mugging" and see how it reads.)
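To make the "adjust the expected value calculation" idea concrete, here is a rough Python sketch. All the numbers (the mugger's fee, the credence in the threat, the size of the claimed harm, the shape of the discount) are invented purely for illustration; the point is only that a naive calculation favours paying, while a discount that shrinks fast enough in the claimed harm does not.

```python
# Illustrative sketch of why naive expected value breaks down under
# Pascal's mugging, and how a heavy uncertainty discount changes the
# answer. Every number here is a made-up assumption.

def naive_expected_loss(p_threat, harm):
    """Expected loss from ignoring the mugger, taking the claim at face value."""
    return p_threat * harm

def discounted_expected_loss(p_threat, harm, discount):
    """Same calculation after applying an 'optimiser's curse'-style
    discount that depends on the size of the claimed harm."""
    return p_threat * harm * discount(harm)

cost_of_paying = 100      # the mugger asks for $100
p_threat = 1e-15          # tiny credence that the threat is real
harm = 1e20               # astronomically large claimed harm

# Naive EV says: pay up (the expected loss of ignoring exceeds $100).
assert naive_expected_loss(p_threat, harm) > cost_of_paying

# A discount that falls faster than the claimed harm grows
# (here 1/harm^2) makes ignoring the mugger the better option.
steep_discount = lambda h: 1.0 / h**2
assert discounted_expected_loss(p_threat, harm, steep_discount) < cost_of_paying
```

Whether any such discount can be principled rather than ad hoc is exactly the difficulty raised above.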
C:
I do think you could make your third option, the common-sense version, work. You just say: if I follow this decision procedure it will lead to very perverse outcomes, such as me having to give everything I own to anyone who claims they will otherwise cause infinite suffering. It seems so counter-intuitive that I should do this that I will decide not to do it. I think this is roughly the approach most people follow in practice. This is similar to how you might dismiss a proof that 1+1=3 even if you cannot see the error. It is, however, a somewhat dissatisfying answer: it is not very rigorous, and it is unclear when a conclusion is so absurd as to require outright rejection.
It does seem hard to apply most of the DMDU (decision making under deep uncertainty) approaches to this problem. An assumption-based modelling approach would lead to you writing out all of your assumptions and looking for flaws – I am not sure where that would lead.
If you are looking for a more rigorous approach, flexible risk planning might be useful. Basically, make the assumption that as uncertainty goes up, the ability to pinpoint the exact nature of the risk goes down. (I think you could investigate this empirically.) So placing a reasonable expected value on a highly uncertain event means that, in reality, events vaguely of that type are more likely, but events specifically as predicted are themselves unlikely. For example, you could worry about future weapons technology that could destroy the world and try to explore what this would look like – but you can safely say it is very unlikely to look like your explorations. This might allow you to avoid the Pascal's mugger and invest appropriate time in more general, more flexible evil-wizard protection.
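The intuition that rising uncertainty spreads a fixed amount of risk across ever more specific scenarios can be sketched numerically. The total credence and the scenario counts below are made-up assumptions, not estimates:

```python
# Sketch of the flexible-risk-planning intuition: hold the credence that
# *some* catastrophe of a given type occurs fixed, and spread it over an
# ever larger space of distinct specific scenarios. Any one predicted
# scenario then becomes individually unlikely, even though the broad
# risk has not changed. All numbers are illustrative assumptions.

p_some_catastrophe = 0.01  # credence that some event of this type happens

def p_specific_scenario(p_total, n_scenarios):
    """Probability of one particular scenario under a uniform spread."""
    return p_total / n_scenarios

for n in (10, 1_000, 1_000_000):
    print(f"{n:>9} scenarios -> P(exactly the predicted one) = "
          f"{p_specific_scenario(p_some_catastrophe, n):.1e}")

# A defence tailored to one specific predicted scenario pays off with
# probability p_total / n; general, flexible preparation pays off with
# the full p_total. The gap between them grows without bound as the
# scenario space (i.e. the uncertainty) grows.
```

On this picture, spending on "general evil-wizard protection" dominates spending on any one wizard's specific threat, though as noted it needs fleshing out before it clearly defuses the mugging itself.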
Does that help?
* I worry that I have made this work by defining C as "everything else", and that the above just says: Paradox → No clear solution → Everything else must be the solution.
Thanks for your reply! :)
This is true, but we could all be mistaken. That doesn't seem unlikely to me, considering that our brains simply were not built to handle such incredibly small probabilities and such incredibly large magnitudes of disutility. That said, I won't practically bite the bullet, any more than people who would choose torture over dust specks probably do, or any more than pure impartial consequentialists truly sacrifice all their own frivolities for altruism. (This latter case is often excused as just avoiding burnout, but I seriously doubt the level of self-indulgence of the average consequentialist EA, myself included, is anywhere close to altruistically optimal.)
In general – and this is something I seem to disagree with many in this community about – I think following your ethics or decision theory through to its honest conclusions tends to make more sense than assuming the status quo is probably close to optimal. There is of course some reflective equilibrium involved here; sometimes I do revise my understanding of the ethical/decision theory.
This is similar to how you might dismiss a proof that 1+1=3 even if you cannot see the error.
To the extent that I assign nonzero probability to mathematically absurd statements (based on precedents like these), I don't think there's very high disutility in acting as if 1+1=2 in a world where it's actually true that 1+1=3. But that could be a failure of my imagination.
It is, however, a somewhat dissatisfying answer: it is not very rigorous, and it is unclear when a conclusion is so absurd as to require outright rejection.
This is basically my response. I think there's some meaningful distinction between good applications of reductio ad absurdum and relatively hollow appeals to "common sense," though, and the dismissal of Pascal's mugging strikes me as more the latter.
For example, you could worry about future weapons technology that could destroy the world and try to explore what this would look like – but you can safely say it is very unlikely to look like your explorations.
I'm not sure I follow how this helps. People who accept giving in to Pascal's mugger don't dispute that the very bad scenario in question is "very unlikely."
This might allow you to avoid the Pascal's mugger and invest appropriate time in more general, more flexible evil-wizard protection.
I think you might be onto something here, but I'd need the details fleshed out because I don't quite understand the claim.