The title is bad, e.g. it’s too provocative or clickbaity, overstates the claims, or singles out utilitarianism too much (there are serious problems with other views).
EDIT: I’ve changed the title to “Arguments for utilitarianism are impossibility arguments under unbounded prospects”. Previously, it was “Utilitarianism is irrational or self-undermining”. Kind of long now, but descriptive and less provocative.
My response: I admit it’s provocative and sounds like clickbait, but it literally describes what I’m arguing. Maybe I should water it down, e.g. “Utilitarianism seems irrational or self-undermining” or “Utilitarianism is plausibly irrational or self-undermining”? I guess someone could reject all of the assumed requirements of rationality used here. I’m personally sympathetic to that myself (Stochastic Dominance seems pretty hard to give up, although I think difference-making risk aversion is a plausible reason to give it up), so maybe the title even makes a claim stronger than what I’m confident in.
It’s still a claim that seems plausible enough to me to state outright as-is, though. (EDIT: I also don’t think the self-undermining bit should be controversial, but how much it self-undermines is a matter of degree and is subjective. Maybe “self-undermine” isn’t the right word, because it suggests that utilitarianism is false, rather than just that we’ve weakened the positive arguments for utilitarianism.)
Also, maybe it is unfair to single out utilitarianism in particular.
Recommendation: “A collection of paradoxes dealing with Utilitarianism”. This seems to me to be what you wrote, and it would have had me come to the post with more of a “ooo! Fun philosophy discussion” attitude rather than “well, that’s a very strong claim… oh look at that, all the so-called inconsistencies and irrationalities either deal with weird infinite ethics stuff or are things I can’t understand. Time to be annoyed about how the headline is poorly argued for.” The latter experience is not useful or fun; the former is nice, depending on the day & company.
Thanks for the feedback!
I think your general point can still stand, but I do want to point out that the results here don’t depend on actual infinities (infinite universe, infinitely long lives, infinite value), which is the domain of infinite ethics. We only need infinitely many possible outcomes and unbounded but finite value. My impression is that this is a less exotic/controversial domain (although I think an infinite universe shouldn’t be controversial, and I’d guess our universe is infinite with probability >80%).
Furthermore, impossibility results in infinite ethics are problematic for everyone with impartial intuitions, but the results here seem more problematic for utilitarianism in particular. You can keep Impartiality and Pareto and/or Separability in deterministic + unbounded but finite cases here, but when extending to uncertain cases, you wouldn’t end up with utilitarianism, or you’d undermine utilitarianism in doing so. You can’t extend both Impartiality and Pareto to infinite cases (allowing arbitrary bijections or swapping infinitely many people in Impartiality), and this is a problem for everyone sympathetic to both principles, not just utilitarians.
the results here don’t depend on actual infinities (infinite universe, infinitely long lives, infinite value)

This seems pretty important to me. You can hand-wave away standard infinite ethics by positing that everything is finite with 100% certainty, but you can’t hand-wave away the implications of a finite-everywhere distribution with infinite EV.
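For concreteness, the standard example of such a prospect (a textbook illustration, not something specific to this thread) is a St. Petersburg-style gamble: utility $2^n$ with probability $2^{-n}$ for each $n \ge 1$. Every possible outcome has finite utility, yet

$$\mathbb{E}[U] = \sum_{n=1}^{\infty} 2^{-n} \cdot 2^n = \sum_{n=1}^{\infty} 1 = \infty.$$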
(Just an offhand thought, I wonder if there’s a way to fix infinite-EV distributions by positing that utility is bounded, but that you don’t know what the bound is? My subjective belief is something like, utility is bounded, I don’t know the bound, and the expected value of the upper bound is infinity. If the upper bound is guaranteed finite but with an infinite EV, does that still cause problems?)
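One concrete way to fill in that belief structure (my illustration, not anything claimed above): let the unknown bound be $B$ with $P(B = 4^n) = 2^{-n}$ for $n \ge 1$. Then $B$ is finite with probability 1, but $\mathbb{E}[B] = \sum_{n \ge 1} 2^{-n} \cdot 4^n = \sum_{n \ge 1} 2^n = \infty$.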
I think someone could hand-wave away heavy-tailed distributions, too, but rather than assigning some outcomes 0 probability or refusing to rank them, they’d be assuming that some prospects over valid outcomes aren’t valid or never occur, even though those prospects are perfectly valid measure-theoretically. Or, they might actually just assign 0 probability to outcomes outside those with a bounded range of utility. In the latter case, you could represent them with both a bounded utility function and an unbounded utility function, which agree on the set of outcomes with bounded utility.
You could have moral/normative uncertainty across multiple bounded utility functions. Just make sure you don’t weight them together via maximizing expected choiceworthiness in such a way that the weighted sum of the utility functions is unbounded, because the weighted sum is itself a utility function, and if it’s unbounded, the same arguments in the post will apply to it. You could normalize all the utility functions first, as sketched below. Or, use a completely different approach to normative uncertainty, e.g. a moral parliament. That being said, the other approaches to normative uncertainty also violate Independence and can be money-pumped, AFAIK.
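To make the normalization suggestion concrete, here’s a minimal sketch (my own illustration, with made-up theories, credences and outcome space, not anything from the post): rescale each bounded utility function to [0, 1] before taking the credence-weighted sum, so that the combined function is guaranteed to be bounded as well.

```python
# Minimal sketch: normalize each bounded utility function to [0, 1] before
# taking a credence-weighted sum, so the combined function is also bounded.
# The theories, credences and outcome space below are made up for illustration.

def normalize(u, outcomes):
    """Rescale a bounded utility function to [0, 1] over the given outcomes."""
    values = [u(x) for x in outcomes]
    lo, hi = min(values), max(values)
    return lambda x: (u(x) - lo) / (hi - lo)

outcomes = range(-100, 101)      # a finite stand-in outcome space
u1 = lambda x: min(x, 50)        # hypothetical theory 1: utility capped at 50
u2 = lambda x: -abs(x)           # hypothetical theory 2: utility capped at 0

credences = [0.7, 0.3]           # made-up credences in the two theories
normalized = [normalize(u, outcomes) for u in (u1, u2)]
combined = lambda x: sum(c * un(x) for c, un in zip(credences, normalized))

# The credence-weighted sum stays within [0, 1], so the unboundedness arguments
# don't apply to it.
print(min(combined(x) for x in outcomes), max(combined(x) for x in outcomes))
```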
Fairly related to this is section 6 in Beckstead and Thomas, 2022. https://onlinelibrary.wiley.com/doi/full/10.1111/nous.12462
Hmm, in response, one might claim that if we accept Pareto (in deterministic finite cases), we should accept Ex Ante Pareto + Anteriority (including Goodsell’s version), too, and that if we accept Separability in deterministic finite cases, we should accept it in uncertain and possibly unbounded but finite cases, too. This would be because the arguments for the stronger principles are similar to the arguments for the weaker, more restricted ones. So, there would be little reason to satisfy Pareto and Separability only in bounded and/or deterministic cases.
Impartiality + (Ex Ante Pareto or Separability) doesn’t work in unbounded but finite uncertain cases, and because of this, we should also doubt Impartiality + (Pareto or Separability) in unbounded but finite deterministic cases. And that counts against a lot more than just utilitarianism.
Personally, I would have kept the original title. Titles that are both accurate and clickbaity are the best kind—they get engagement without being deceptive.
I don’t think karma is always a great marker of a post’s quality or appropriateness. See an earlier exchange we had.
Unfortunately, I think clickbait also gets downvotes even if accurate, and that will drop the post down the front page or off it.
I might have gone for “Utilitarianism may be irrational or self-undermining” rather than “Utilitarianism is irrational or self-undermining”.