Say you had to choose between two options:
Option 1: A 99% chance that everyone on earth gets tortured for all of time (-100 utils per person) and a 1% chance that a septillion happy people get created (+90 utils pp) for all of time
Option 2: A 100% chance that everyone on earth becomes maximally happy for all of time (+100 utils pp)
Let’s assume the populations in both scenarios remain stable over time (or grow similarly). Expected Value Theory (and classical utilitarianism by extension) then says we should choose Option 1, even though it carries a 99% chance of an s-risk, over a guaranteed everlasting utopia for everyone. (You can construct an analogous scenario with an x-risk instead of an s-risk.) This seems counterintuitive.
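To see why Expected Value Theory delivers this verdict, here is a minimal back-of-the-envelope sketch. It assumes a current population of roughly 8 billion and reads “a septillion” as 10^24; neither figure is specified above, so treat them as illustrative:

```python
# Back-of-the-envelope expected value comparison of the two options.
# Assumptions (not in the original post): a stable population of
# ~8 billion, and "a septillion" read as 10**24 people.

POPULATION = 8e9    # people currently on earth (assumed)
SEPTILLION = 1e24   # happy people created in the lucky branch (assumed reading)

# Option 1: 99% chance everyone gets -100 utils per person,
# 1% chance that 10**24 new people each get +90 utils.
ev_option_1 = 0.99 * (-100 * POPULATION) + 0.01 * (90 * SEPTILLION)

# Option 2: guaranteed +100 utils per person for everyone alive.
ev_option_2 = 1.00 * (100 * POPULATION)

print(f"EV(Option 1) = {ev_option_1:.3g} utils")  # ~9e+23
print(f"EV(Option 2) = {ev_option_2:.3g} utils")  # 8e+11
```

Under these assumptions, Option 1’s expected value exceeds Option 2’s by about twelve orders of magnitude: the tiny chance of a septillion happy people swamps the near-certain torture of everyone alive.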
I call this the wagering calamity objection.
This sounds similar to the “very repugnant conclusion”.

EDIT: This is not the ‘very repugnant conclusion’, since it’s not about inequality within a population, but rather about risk-aversion.