I’d be happy to get constructive criticism, given the downvotes I was getting soon after posting. I’ll leave some comment replies here for people to agreevote/disagreevote with in case they want to stay anonymous. I also welcome feedback as comments here or private messages.
I’ve removed my own upvotes from these comments so this thread doesn’t start at the top of the comment section. EDIT: Also, keep my comments in this thread at 0 karma if you want to avoid affecting my karma for making so many comments.
I haven’t downvoted it, and I’m sorry you’re getting that response for a thoughtful and in-depth piece of work, but I can offer a couple of criticisms I had that have stopped me upvoting it yet because I don’t feel like I understand it, mixed in with a couple of criticisms where I feel like I did:
Too much work done by citations. Perhaps it’s not possible to extract key arguments, but most philosophy papers IME have their core point in just a couple of paragraphs, which you could quote, summarise or refer to more precisely than a link to the whole paper. Most people on this forum just won’t have the bandwidth to go digging through all the links.
The arguments for infinite prospective utility didn’t hold up for me. A spatially infinite universe doesn’t give us infinite expectation from our action—even if the universe never ends, our light cone will always be finite. Re Oesterheld’s paper, acausal influence seems an extremely controversial notion in which I personally see no reason to believe. Certainly if it’s a choice between rejecting that or scrabbling for some alternative to an intuitive approach that in the real world has always yielded reasonable solutions, I’m happy to count that as a point against Oesterheld.
Relatedly, some parts I felt like you didn’t explain well enough for me to understand your case, eg:
I don’t see the argument in this post for this: ‘So, based on the two theorems, if we assume Stochastic Dominance and Impartiality,[18] then we can’t have Anteriority (unless it’s not worse to add more people to hell) or Separability.’ It seemed like you just attempted to define these things and then asserted this—maybe I missed something in the definition?
‘You are facing a prospect A with infinite expected utility, but finite utility no matter what actually happens. Maybe A is your own future and you value your years of life linearly, and could live arbitrarily but finitely long, and so long under some possibilities that your life expectancy and corresponding expected utility is infinite.’ I don’t see how this makes sense. If all possible outcomes have me living a finite amount of time and generating finite utility per life-year, I don’t see why expectation would be infinite.
Too much emphasis on what you find ‘plausible’. IMO philosophy arguments should just taboo that word.
Hmm, I didn’t expect or intend for people to dig through the links, but it looks like I misjudged what things people would find cruxy for the rest of the arguments but not defended well enough, e.g. your concerns with infinite expected utility.
EDIT: I’ve rewritten the arguments for possibly unbounded impacts.
The arguments for infinite prospective utility didn’t hold up for me. A spatially infinite universe doesn’t give us infinite expectation from our action—even if the universe never ends, our light cone will always be finite.
But can you produce a finite upper bound on our lightcone that you’re 100% confident nothing can pass? (It doesn’t have to be tight.) If not, then you could consider a St Petersburg-like prospect which, for each n, has probability 1/2^n of size (or impact) 2^n, in whatever units you’re using. That’s finite under every possible outcome, but it has an infinite expected value.
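To make the divergence concrete, here’s a minimal Python sketch (the function name and cutoffs are just illustrative). Truncating the prospect at any finite level gives an expected value equal to the number of levels, since each level contributes 1/2^n · 2^n = 1, so the partial sums grow without bound even though every individual outcome is finite:

```python
from fractions import Fraction

def truncated_ev(levels: int) -> Fraction:
    """Expected value of the St Petersburg-like prospect, cut off after
    `levels` levels.  Level n has probability 1/2^n and size 2^n, so
    each level contributes exactly 1 to the expectation."""
    return sum(Fraction(1, 2**n) * 2**n for n in range(1, levels + 1))

# Every outcome is finite, but the expectation grows without bound
# as the cutoff increases:
assert truncated_ev(10) == 10
assert truncated_ev(1000) == 1000
```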
Re Oesterheld’s paper, acausal influence seems an extremely controversial notion in which I personally see no reason to believe.
Section II from Carlsmith, 2021 is one of the best arguments for acausal influence I’m aware of, in case you’re interested in something more convincing. (FWIW, I also thought acausal influence was crazy for a long time, and I didn’t find Newcomb’s problem to be a compelling reason to reject causal decision theory.)
EDIT: I’ve now cut the acausal stuff and just focus on unbounded duration.
It seemed like you just attempted to define these things and then asserted this—maybe I missed something in the definition?
This follows from the theorems I cited, but I didn’t include proofs of the theorems here. The proofs are technical and tricky,[1] and I didn’t want to make my post much longer or spend so much more time on it. Explaining each proof in an intuitive way could probably be a post on its own.
I don’t see how this makes sense. If all possible outcomes have me living a finite amount of time and generating finite utility per life-year, I don’t see why expectation would be infinite.
How long you live could be distributed like a St Petersburg gamble, e.g. for each n, with probability 1/2^n, you could live 2^n years. The expected value of that is infinite, even though you’d definitely only live a finite amount of time.
Too much emphasis on what you find ‘plausible’. IMO philosophy arguments should just taboo that word.
Ya, I got similar feedback on an earlier draft for making it harder to read, and tried to cut some uses of the word, but still left a bunch. I’ll see if I can cut some more.
They work by producing some weird set of prospects, then showing that you can’t order them in a way that satisfies the axioms: applying the axioms one by one either violates one of them or yields a contradiction.
But can you produce a finite upper bound on our lightcone that you’re 100% confident nothing can pass? (It doesn’t have to be tight.)
I think Vasco already made this point elsewhere, but I don’t see why you need certainty about any specific line to have finite expectation. If for the counterfactual payoff x, you think (perhaps after a certain point) xP(x) approaches 0 quickly enough as x tends to infinity, it seems like you get finite expectation without ever having absolute confidence in any boundary (this applies to life expectancy, too).
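For what it’s worth, the rate at which xP(x) falls off is what matters, not merely that it tends to 0. A small sketch (exponents and cutoffs are illustrative) comparing an unnormalised tail P(x) ∝ 1/x^3, whose partial expectations converge, against P(x) ∝ 1/x^2, where xP(x) = 1/x still tends to 0 but the partial expectations grow like the log of the cutoff:

```python
def partial_expectation(p_exponent: float, cutoff: int) -> float:
    """Unnormalised partial expectation sum_{x=1}^{cutoff} x * x**(-p_exponent)
    for a discrete tail with P(x) proportional to x**(-p_exponent)."""
    return sum(x * x ** (-p_exponent) for x in range(1, cutoff + 1))

# P(x) ~ 1/x^3: x*P(x) falls like 1/x^2 and the partial expectations
# converge (to pi^2/6 for this unnormalised tail):
assert partial_expectation(3.0, 10**4) - partial_expectation(3.0, 10**2) < 0.02

# P(x) ~ 1/x^2: x*P(x) tends to 0, but only like 1/x, and the partial
# expectations keep growing (like log of the cutoff):
assert partial_expectation(2.0, 10**4) - partial_expectation(2.0, 10**2) > 4
```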
Section II from Carlsmith, 2021 is one of the best arguments for acausal influence I’m aware of, in case you’re interested in something more convincing. (FWIW, I also thought acausal influence was crazy for a long time, and I didn’t find Newcomb’s problem to be a compelling reason to reject causal decision theory.)
Thanks! I had a look, and it still doesn’t persuade me, for much the same reasons Newcomb’s problem didn’t. In roughly ascending order of importance:
Maybe this is just a technicality, but the claim ‘you are exposed to exactly identical inputs’ seems impossible to realise with perfect precision. The simulator itself must differ in the two cases. So in the same way that the outputs of two instances of a software program, even run on the same computer in the same environment, can theoretically differ for various reasons (at a high enough zoom level they will differ), the two simulations can’t be guaranteed identical (Carlsmith even admits this with ‘absent some kind of computer malfunction’, but then glosses over it). On the one hand, this might be too fine a distinction to matter in practice; on the other, if I’m supposed to believe a wildly counterintuitive proposition instead of a commonsense one that seems to work fine in the real world, based on a supposed logical necessity that turns out not to be logically necessary, I’m going to be very sceptical of the proposition even if I can’t find a stronger reason to reject it.
The thought experiment gives no reason why the AI system should actually believe it’s in the scenario described, and that seems like a crucial element in its decision process. If in the real world, someone put me in a room with a chalkboard and told me this is what was happening, no matter what evidence they showed, I would have some element of doubt, both of their ability (cf point 1) but more importantly their motivations. If I discovered that the world was so bizarre as in this scenario, it would be at best a coinflip for me that I should take them at face value.
It seems contradictory to frame decision theory as applying to ‘a deterministic AI system’ whose clones ‘will make the same choice, as a matter of logical necessity’. There’s a whole free will debate lurking underneath any decision theoretic discussion involving recognisable agents that I don’t particularly want to get into—but if you’re taking away all agency from the ‘agent’, it’s hard to see what it means to advocate it adopting a particular decision theory. At that point the AI might as well be a rock, and I don’t feel like anyone is concerned about which decision theory rocks ‘should’ adopt.
This follows from the theorems I cited, but I didn’t include proofs of the theorems here. The proofs are technical and tricky,[1] and I didn’t want to make my post much longer or spend so much more time on it. Explaining each proof in an intuitive way could probably be a post on its own.
I would be less interested to see a reconstruction of a proof of the theorems and more interested to see them stated formally and a proof of the claim that it follows from them.
On Carlsmith’s example, we can just make it a logical necessity by assuming more. And, as you acknowledge, some distinctions can be too fine to matter in practice. Maybe you’re only 5% sure your copy exists at all and that the conditions are right for you to get $1 million from your copy sending it.
5%*$1 million = $50,000 > $1,000, so you still make more in expectation from sending a million dollars. You break even in expected money if your decision to send $1 million increases your copy’s probability of sending $1 million by 1/1,000.
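The arithmetic can be spot-checked exactly; the variable names below are mine, but the figures are the ones from this example:

```python
from fractions import Fraction

prize = 1_000_000            # what you receive if your copy sends
cost = 1_000                 # what sending costs you
credence = Fraction(5, 100)  # 5%: copy exists and conditions are right

# 5% * $1 million = $50,000 > $1,000, so sending still wins in
# expectation (setting aside any correlation with your own choice):
expected_gain = credence * prize
assert expected_gain == 50_000 and expected_gain > cost

# Break-even: sending pays iff it raises your copy's probability of
# sending by at least cost/prize = 1/1,000.
break_even_delta_p = Fraction(cost, prize)
assert break_even_delta_p == Fraction(1, 1_000)
```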
I do find it confusing to think about decision-making under determinism, but I think 3 proves too much. I don’t think quantum indeterminacy or randomness saves free will or agency if it weren’t already saved, and we don’t seem to have any other options, assuming physicalism and our current understanding of physics.
I think Vasco already made this point elsewhere, but I don’t see why you need certainty about any specific line to have finite expectation. If for the counterfactual payoff x, you think (perhaps after a certain point) xP(x) approaches 0 quickly enough as x tends to infinity, it seems like you get finite expectation without ever having absolute confidence in any boundary (this applies to life expectancy, too).
Ya, I agree you don’t need certainty about the bound, but now you need certainty about the distribution not being heavy-tailed at all. Suppose your best guess is that it looks like some distribution X, with finite expected value. Now, I suggest that it might actually be Y, which is heavy-tailed (has infinite expected value). If you assign any nonzero probability to that being right, e.g. switch to pY+(1−p)X for some p>0, then your new distribution is heavy-tailed, too. In general, if you think there’s some chance you’d come to believe it’s heavy-tailed, then you should believe now that it’s heavy-tailed, because a probabilistic mixture with a heavy-tailed distribution is heavy-tailed. Or, if you think there’s some chance you’d come to believe there’s some chance it’s heavy-tailed, then you should believe now that it’s heavy-tailed.
(Vasco’s claim was stronger: the difference is exactly 0 past some point.)
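The mixture step above can be checked numerically, since E[pY+(1−p)X] = pE[Y]+(1−p)E[X]. A minimal sketch, where X is assumed to be a point mass at 1 and the 1% weight is illustrative:

```python
def partial_ev_mixture(p: float, levels: int) -> float:
    """Partial expected value of the mixture p*Y + (1-p)*X, where X is a
    point mass at 1 (finite expectation) and Y is a St Petersburg gamble
    (payoff 2^n with probability 1/2^n).  Each level of Y contributes
    exactly p to the mixture's expectation, so for any p > 0 the partial
    sums grow without bound as `levels` increases."""
    ev_y_partial = sum((1 / 2**n) * 2**n for n in range(1, levels + 1))
    return p * ev_y_partial + (1 - p) * 1.0

# Even a 1% credence in the heavy-tailed hypothesis makes the overall
# expectation diverge with the cutoff:
assert partial_ev_mixture(0.01, 1000) > partial_ev_mixture(0.01, 100)
```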
I would be less interested to see a reconstruction of a proof of the theorems and more interested to see them stated formally and a proof of the claim that it follows from them.
Hmm, I might be misunderstanding.
I already have formal statements of the theorems in the post:
Stochastic Dominance, Anteriority and Impartiality are jointly inconsistent.
Stochastic Dominance, Separability and Impartiality are jointly inconsistent.
All of those terms are defined in the section Anti-utilitarian theorems. I guess I defined Impartiality a bit informally and might have hidden some background assumptions (preorder, so reflexivity + transitivity, and the set of prospects is every probability distribution over outcomes in the set of outcomes), but the rest were formally defined.
Then, from 1, assuming Stochastic Dominance and Impartiality, Anteriority must be false. From 2, assuming Stochastic Dominance and Impartiality, Separability must be false. Therefore assuming Stochastic Dominance and Impartiality, Anteriority and Separability must both be false.
The title is bad, e.g. too provocative, clickbaity, overstates the claims or singles out utilitarianism too much (there are serious problems with other views).
EDIT: I’ve changed the title to “Arguments for utilitarianism are impossibility arguments under unbounded prospects”. Previously, it was “Utilitarianism is irrational or self-undermining”. Kind of long now, but descriptive and less provocative.
My response: I admit it’s provocative and sounds like clickbait, but it literally describes what I’m arguing. Maybe I should water it down, e.g. “Utilitarianism seems irrational or self-undermining” or “Utilitarianism is plausibly irrational or self-undermining”? I guess someone could reject all of the assumed requirements of rationality used here. I’m personally sympathetic to that myself (although Stochastic Dominance seems pretty hard to give up, but I think difference-making risk aversion is a plausible reason to give it up), so maybe the title even makes a claim stronger than what I’m confident in.
It’s still a claim that seems plausible enough to me to state outright as-is, though. (EDIT: I also don’t think the self-undermining bit should be controversial, but how much it would self-undermine is a matter of degree and subjective. Maybe “self-undermine” isn’t the right word, because that suggests that utilitarianism is false, not just that we’ve weakened positive arguments for utilitarianism).
Also, maybe it is unfair to single out utilitarianism in particular.
Recommendation: A collection of paradoxes dealing with Utilitarianism. This seems to me to be what you wrote, and would have had me come to the post with more of a “ooo! Fun philosophy discussion” rather than “well, that’s a very strong claim… oh look at that, all the so-called inconsistencies and irrationalities either deal with weird infinite ethics stuff or are things I can’t understand. Time to be annoyed about how the headline is poorly argued for.” The latter experience is not useful or fun; the former is nice, depending on the day & company.
I think your general point can still stand, but I do want to point out that the results here don’t depend on actual infinities (infinite universe, infinitely long lives, infinite value), which is the domain of infinite ethics. We only need infinitely many possible outcomes and unbounded but finite value. My impression is that this is a less exotic/controversial domain (although I think an infinite universe shouldn’t be controversial, and I’d guess our universe is infinite with probability >80%).
Furthermore, impossibility results in infinite ethics are problematic for everyone with impartial intuitions, but the results here seem more problematic for utilitarianism in particular. You can keep Impartiality and Pareto and/or Separability in deterministic + unbounded but finite cases here, but when extending to uncertain cases, you wouldn’t end up with utilitarianism, or you’d undermine utilitarianism in doing so. You can’t extend both Impartiality and Pareto to infinite cases (allowing arbitrary bijections or swapping infinitely many people in Impartiality), and this is a problem for everyone sympathetic to both principles, not just utilitarians.
the results here don’t depend on actual infinities (infinite universe, infinitely long lives, infinite value)
This seems pretty important to me. You can handwave away standard infinite ethics by positing that everything is finite with 100% certainty, but you can’t handwave away the implications of a finite-everywhere distribution with infinite EV.
(Just an offhand thought, I wonder if there’s a way to fix infinite-EV distributions by positing that utility is bounded, but that you don’t know what the bound is? My subjective belief is something like, utility is bounded, I don’t know the bound, and the expected value of the upper bound is infinity. If the upper bound is guaranteed finite but with an infinite EV, does that still cause problems?)
I think someone could hand-wave away heavy-tailed distributions, too, but rather than assigning some outcomes 0 probability or refusing to rank them, they’re assuming some prospects over valid outcomes aren’t valid or never occur, even though they’re perfectly valid measure-theoretically. Or, they might actually just assign 0 probability to all outcomes outside a set on which utility is bounded. In the latter case, you could represent them with both a bounded utility function and an unbounded one that agree on that bounded-utility set of outcomes.
You could have moral/normative uncertainty across multiple bounded utility functions. Just make sure you don’t weigh them together via maximizing expected choiceworthiness in such a way that the weighted sum of utility functions is unbounded, because the weighted sum is a utility function. If the weighted sum is unbounded, then the same arguments in the post will apply to it. You could normalize all the utility functions first. Or, use a completely different approach to normative uncertainty, e.g. a moral parliament. That being said, the other approaches to normative uncertainty also violate Independence and can be money pumped, AFAIK.
Hmm, in response, one might claim that if we accept Pareto (in deterministic finite cases), we should accept Ex Ante Pareto + Anteriority (including Goodsell’s version), too, and if we accept Separability in deterministic finite cases, we should accept it in uncertain and possibly unbounded but finite cases, too. This would be because the arguments for the stronger principles are similar to the arguments for the weaker ones in more restricted settings. So, there would be little reason to satisfy Pareto and Separability only in bounded and/or deterministic cases.
Impartiality + (Ex Ante Pareto or Separability) doesn’t work in unbounded but finite uncertain cases, but because of this, we should also doubt Impartiality + (Pareto or Separability) in unbounded but finite deterministic cases. And that counts against a lot more than just utilitarianism.
Personally, I would have kept the original title. Titles that are both accurate and clickbaity are the best kind—they get engagement without being deceptive.
I don’t think karma is always a great marker of a post’s quality or appropriateness. See an earlier exchange we had.
My writing is unclear. Some things could have been better explained or explained in more detail (I got similar feedback on LW here). Or, my sentence structure is bad/hard to follow.
I think this subject is very important and underrated, so I’m glad you wrote the post, and you raised some points that I wasn’t aware of, and I would like to see people write more posts like this one. The post didn’t do as much for me as it could have because I found two of its three main arguments hard to understand:
For your first argument (“Unbounded utility functions are irrational”), the post spends several paragraphs setting up a specific function that I could have easily constructed myself (for me it’s pretty obvious that there exist finite utility functions with infinite EV), and then ends by saying utilitarianism “lead[s] to violations of generalizations of the Independence axiom and the Sure-Thing Principle”, which I take to be the central argument, but I don’t know what the Sure-Thing Principle is. I think I know what Independence is, but I don’t know what you mean by “generalizations of Independence”. So it feels like I still have no idea what your actual argument is.
I had no difficulty following your money pump argument.
For the third argument, the post claims that some axioms rule out expectational total utilitarianism, but the axioms aren’t defined and I don’t know what they mean, and I don’t know how they rule out expectational total utilitarianism. (I tried to look at the cited paper, but it’s not publicly available and it doesn’t look like it’s on Sci-Hub either.)
I can see how naming them without defining them would throw people off. In my view, it’s acting seemingly irrationally, like getting money pumped, getting Dutch booked or paying to avoid information, that matters, not satisfying Independence or the STP. If you don’t care about this apparently irrational behaviour, then you wouldn’t really have any independent reason to accept Independence or the STP, except maybe that they seem directly intuitive. If I introduced them, that could throw other people off or otherwise take up much more space in an already long post to explain with concrete examples. But footnotes probably would have been good.
Good to hear!
Which argument do you mean? I defined and motivated the axioms for the two impossibility theorems with SD and Impartiality I cite, but I did that after stating the theorems, in the Anti-utilitarian theorems section. (Maybe I should have linked the section in the summary and outline?)
Some of the (main) arguments are wrong/bad, or the main title claim is wrong.
EDIT: I’ve changed the title to “Arguments for utilitarianism are impossibility arguments under unbounded prospects”. Previously, it was “Utilitarianism is irrational or self-undermining”. Kind of long now, but descriptive and less provocative.
Thanks for the feedback and criticism!
The post is too long.
The title is bad, e.g. too provocative, clickbaity, overstates the claims or singles out utilitarianism too much (there are serious problems with other views).
EDIT: I’ve changed the title to “Arguments for utilitarianism are impossibility arguments under unbounded prospects”. Previously, it was “Utilitarianism is irrational or self-undermining”. Kind of long now, but descriptive and less provocative.
My response: I admit it’s provocative and sounds like clickbait, but it literally describes what I’m arguing. Maybe I should water it down, e.g. “Utilitarianism seems irrational or self-undermining” or “Utilitarianism is plausibly irrational or self-undermining”? I guess someone could reject all of the assumed requirements of rationality used here. I’m personally sympathetic to that myself (Stochastic Dominance seems pretty hard to give up, although I think difference-making risk aversion is a plausible reason to give it up), so maybe the title even makes a claim stronger than what I’m confident in.
It’s still a claim that seems plausible enough to me to state outright as-is, though. (EDIT: I also don’t think the self-undermining bit should be controversial, but how much it would self-undermine is a matter of degree and subjective. Maybe “self-undermine” isn’t the right word, because that suggests that utilitarianism is false, not just that we’ve weakened positive arguments for utilitarianism).
Also, maybe it is unfair to single out utilitarianism in particular.
Recommendation: “A collection of paradoxes dealing with utilitarianism”. This seems to me to be what you wrote, and would have had me come to the post with more of a “ooo! Fun philosophy discussion” rather than “well, that’s a very strong claim… oh look at that, all the so-called inconsistencies and irrationalities either deal with weird infinite ethics stuff or are things I can’t understand. Time to be annoyed about how the headline is poorly argued for.” The latter experience is not useful or fun; the former is nice, depending on the day & company.
Thanks for the feedback!
I think your general point can still stand, but I do want to point out that the results here don’t depend on actual infinities (infinite universe, infinitely long lives, infinite value), which is the domain of infinite ethics. We only need infinitely many possible outcomes and unbounded but finite value. My impression is that this is a less exotic/controversial domain (although I think an infinite universe shouldn’t be controversial, and I’d guess our universe is infinite with probability >80%).
Furthermore, impossibility results in infinite ethics are problematic for everyone with impartial intuitions, but the results here seem more problematic for utilitarianism in particular. You can keep Impartiality and Pareto and/or Separability in deterministic + unbounded but finite cases here, but when extending to uncertain cases, you wouldn’t end up with utilitarianism, or you’d undermine utilitarianism in doing so. You can’t extend both Impartiality and Pareto to infinite cases (allowing arbitrary bijections or swapping infinitely many people in Impartiality), and this is a problem for everyone sympathetic to both principles, not just utilitarians.
This seems pretty important to me. You can handwave away standard infinite ethics by positing that everything is finite with 100% certainty, but you can’t handwave away the implications of a finite-everywhere distribution with infinite EV.
(Just an offhand thought, I wonder if there’s a way to fix infinite-EV distributions by positing that utility is bounded, but that you don’t know what the bound is? My subjective belief is something like, utility is bounded, I don’t know the bound, and the expected value of the upper bound is infinity. If the upper bound is guaranteed finite but with an infinite EV, does that still cause problems?)
I think someone could hand-wave away heavy-tailed distributions, too, but rather than assigning some outcomes 0 probability or refusing to rank them, they’re assuming some prospects over valid outcomes aren’t valid or never occur, even though they’re perfectly valid measure-theoretically. Or, they might actually just assign 0 probability to outcomes outside those with a bounded range of utility. In the latter case, you could represent them with either a bounded utility function or an unbounded one, since the two agree on the set of outcomes with bounded utility.
You could have moral/normative uncertainty across multiple bounded utility functions. Just make sure you don’t weigh them together via maximizing expected choiceworthiness in such a way that the weighted sum of utility functions is unbounded, because the weighted sum is a utility function. If the weighted sum is unbounded, then the same arguments in the post will apply to it. You could normalize all the utility functions first. Or, use a completely different approach to normative uncertainty, e.g. a moral parliament. That being said, the other approaches to normative uncertainty also violate Independence and can be money pumped, AFAIK.
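A minimal sketch of the normalization point, with made-up utility functions (u1, u2, their clipping bounds and the 50/50 weights are all illustrative assumptions, not anything from the post):

```python
def normalize(u, lo, hi):
    """Rescale a bounded utility function u with known range [lo, hi] to [0, 1]."""
    return lambda x: (u(x) - lo) / (hi - lo)

# Two hypothetical bounded utility functions with different ranges.
u1 = lambda x: max(-10.0, min(10.0, x))        # bounded in [-10, 10]
u2 = lambda x: max(0.0, min(1.0, x / 100.0))   # bounded in [0, 1]

v1 = normalize(u1, -10.0, 10.0)
v2 = normalize(u2, 0.0, 1.0)

def choiceworthiness(x, w1=0.5, w2=0.5):
    """Weighted sum of normalized utilities. Because each v_i lies in
    [0, 1] and the weights sum to 1, the weighted sum is itself a
    bounded utility function, so the post's arguments against
    unbounded utility don't apply to it."""
    return w1 * v1(x) + w2 * v2(x)
```

The key point is just that a finite weighted sum of bounded functions (with finite weights) is bounded, whereas letting the weighted sum be unbounded reintroduces the original problem.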
Fairly related to this is section 6 in Beckstead and Thomas, 2022. https://onlinelibrary.wiley.com/doi/full/10.1111/nous.12462
Hmm, in response, one might claim that if we accept Pareto (in deterministic finite cases), we should accept Ex Ante Pareto + Anteriority (including Goodsell’s version), too, and if we accept Separability in deterministic finite cases, we should accept it in uncertain and possibly unbounded but finite cases, too. This would be because the arguments for the stronger principles are similar to the arguments for the weaker principles restricted to deterministic or bounded settings. So, there would be little reason to satisfy Pareto and Separability only in bounded and/or deterministic cases.
Impartiality + (Ex Ante Pareto or Separability) doesn’t work in unbounded but finite uncertain cases, but because of this, we should also doubt Impartiality + (Pareto or Separability) in unbounded but finite deterministic cases. And that counts against a lot more than just utilitarianism.
Personally, I would have kept the original title. Titles that are both accurate and clickbaity are the best kind—they get engagement without being deceptive.
I don’t think karma is always a great marker of a post’s quality or appropriateness. See an earlier exchange we had.
Unfortunately, I think clickbait also gets downvotes even if accurate, and that will drop the post down the front page or off it.
I might have gone for “Utilitarianism may be irrational or self-undermining” rather than “Utilitarianism is irrational or self-undermining”.
The post is misleading, because it singles out utilitarianism too much without pointing out serious problems with other views.
This post isn’t useful.
As a special case: these kinds of results have already been discussed enough in the community.
My writing is unclear. Some things could have been better explained or explained in more detail (I got similar feedback on LW here). Or, my sentence structure is bad/hard to follow.
I think this subject is very important and underrated, so I’m glad you wrote the post, and you raised some points that I wasn’t aware of, and I would like to see people write more posts like this one. The post didn’t do as much for me as it could have because I found two of its three main arguments hard to understand:
For your first argument (“Unbounded utility functions are irrational”), the post spends several paragraphs setting up a specific function that I could have easily constructed myself (for me it’s pretty obvious that there exist finite utility functions with infinite EV), and then ends by saying utilitarianism “lead[s] to violations of generalizations of the Independence axiom and the Sure-Thing Principle”, which I take to be the central argument, but I don’t know what the Sure-Thing Principle is. I think I know what Independence is, but I don’t know what you mean by “generalizations of Independence”. So it feels like I still have no idea what your actual argument is.
I had no difficulty following your money pump argument.
For the third argument, the post claims that some axioms rule out expectational total utilitarianism, but the axioms aren’t defined and I don’t know what they mean, and I don’t know how they rule out expectational total utilitarianism. (I tried to look at the cited paper, but it’s not publicly available and it doesn’t look like it’s on Sci-Hub either.)
Thanks, this is helpful!
To respond to the points:
I can see how naming them without defining them would throw people off. In my view, it’s the seemingly irrational behaviour, like getting money pumped, getting Dutch booked or paying to avoid information, that matters, not satisfying Independence or the STP as such. If you don’t care about this apparently irrational behaviour, then you wouldn’t really have any independent reason to accept Independence or the STP, except maybe that they seem directly intuitive. Introducing them could throw other people off, or it would take up much more space in an already long post to explain them with concrete examples. But footnotes probably would have been good.
Good to hear!
Which argument do you mean? I defined and motivated the axioms for the two impossibility theorems with SD and Impartiality I cite, but I did that after stating the theorems, in the Anti-utilitarian theorems section. (Maybe I should have linked the section in the summary and outline?)
Some of the (main) arguments are wrong/bad, or the main title claim is wrong.