You don’t need explicit infinities to get weird things out of utilitarianism. Strong Longtermism is already an example of how the tiny probability that your action affects a huge number of (people?) dominates the expected value of your actions in the eyes of some prominent EAs.
I agree with you. Weirdness, though, is a far softer “critique” than the clear paradoxes that result from explicit infinities. And high-value, low-probability moral tradeoffs aren’t even all that weird.
We need information in order to have an expected value. We can be utilitarians who deny that sufficient information is available to justify a given high-value, low-probability tradeoff. Some of the critiques of “weird” longtermism lose their force once we clarify either (a) that we’re ~totally uncertain about the valence of the action under consideration relative to the next-best alternative, and hence the moral conundrum is really an epistemic conundrum, or (b) that we actually are very confident about its moral valence and opportunity cost, in which case the weirdness evaporates.
Consider a physicist who realizes there’s a very low but nonzero chance that detonating the first atom bomb will light the atmosphere on fire, yet who also believes that every day it doesn’t get dropped on Japan extends WWII and leads to more deaths on all sides on net. For this physicist, it might still make perfect sense to spend a year testing and checking to resolve that small chance of igniting the atmosphere. I think this is not a “weird” decision from the perspective of most people, whether or not we assume the physicist is objectively correct about the epistemic aspect of the tradeoff.
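To make the structure of that tradeoff concrete, here is a minimal sketch in expected-value terms. Every number in it (the ignition probabilities, the daily casualty rate, the population figure, the one-year testing delay) is an invented placeholder rather than a historical estimate, and the function and variable names are mine; the point is only that the recommended action flips with the physicist’s probability estimate, which is why the conundrum is epistemic rather than moral.

```python
# Illustrative only: the probabilities and casualty figures below are made up
# to show the structure of the physicist's tradeoff, not to model 1945 accurately.

WORLD_POPULATION = 2_300_000_000   # rough mid-1940s figure, assumed for illustration
WAR_DEATHS_PER_DAY = 30_000        # hypothetical casualty rate while the war continues
TESTING_DELAY_DAYS = 365           # the year of extra testing in the example

def expected_deaths(p_ignition: float, delay_days: int) -> float:
    """Expected deaths if the bomb is dropped after `delay_days` of testing,
    assuming testing drives the ignition probability to ~0 while the war
    continues (at a constant casualty rate) until then."""
    deaths_from_delay = WAR_DEATHS_PER_DAY * delay_days
    deaths_from_ignition = p_ignition * WORLD_POPULATION
    return deaths_from_delay + deaths_from_ignition

# Compare: drop immediately with the residual ignition risk vs. test for a year first.
for p in (1e-2, 1e-4, 1e-6):
    drop_now = expected_deaths(p_ignition=p, delay_days=0)
    test_first = expected_deaths(p_ignition=0.0, delay_days=TESTING_DELAY_DAYS)
    better = "test first" if test_first < drop_now else "drop now"
    print(f"p(ignition)={p:g}: drop now EV={drop_now:,.0f}, "
          f"test first EV={test_first:,.0f} -> {better}")
```

With these placeholder numbers, the year of testing is worth it only if the physicist’s credence in ignition is above roughly half a percent; at 10⁻⁴ or 10⁻⁶ the delay costs more expected lives than it saves. Nothing about the moral arithmetic is strange here; everything turns on which probability estimate is warranted.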