I'd be happy to get constructive criticism, given the downvotes I was getting soon after posting. I'll leave some comment replies here for people to agreevote/disagreevote in case they want to stay anonymous. I also welcome feedback as comments here or private messages.
I've removed my own upvotes from these comments so this thread doesn't start at the top of the comment section. EDIT: Also, keep my comments in this thread at 0 karma if you want to avoid affecting my karma for making so many comments.
I haven't downvoted it, and I'm sorry you're getting that response for a thoughtful and in-depth piece of work, but I can offer a couple of criticisms that have stopped me from upvoting it yet because I don't feel like I understand it, mixed in with a couple of criticisms where I feel like I did:
Too much work done by citations. Perhaps it's not possible to extract key arguments, but most philosophy papers IME have their core point in just a couple of paragraphs, which you could quote, summarise or refer to more precisely than a link to the whole paper. Most people on this forum just won't have the bandwidth to go digging through all the links.
The arguments for infinite prospective utility didn't hold up for me. A spatially infinite universe doesn't give us infinite expectation from our actions: even if the universe never ends, our light cone will always be finite. Re Oesterheld's paper, acausal influence seems an extremely controversial notion in which I personally see no reason to believe. Certainly if it's a choice between rejecting that or scrabbling for some alternative to an intuitive approach that has always yielded reasonable solutions in the real world, I'm happy to count that as a point against Oesterheld.
Relatedly, there were some parts I felt you didn't explain well enough for me to understand your case, e.g.:
I don't see the argument in this post for this: "So, based on the two theorems, if we assume Stochastic Dominance and Impartiality,[18] then we can't have Anteriority (unless it's not worse to add more people to hell) or Separability." It seemed like you just attempted to define these things and then asserted this; maybe I missed something in the definitions?
"You are facing a prospect A with infinite expected utility, but finite utility no matter what actually happens. Maybe A is your own future and you value your years of life linearly, and could live arbitrarily but finitely long, and so long under some possibilities that your life expectancy and corresponding expected utility is infinite." I don't see how this makes sense. If all possible outcomes have me living a finite amount of time and generating finite utility per life-year, I don't see why the expectation would be infinite.
Too much emphasis on what you find "plausible". IMO philosophy arguments should just taboo that word.
Thanks for the feedback and criticism!
Hmm, I didn't expect or intend for people to dig through the links, but it looks like I misjudged which things people would find cruxy for the rest of the arguments but not defended well enough, e.g. your concerns with infinite expected utility.
EDIT: I've rewritten the arguments for possibly unbounded impacts.
But can you produce a finite upper bound on our light cone that you're 100% confident nothing can pass? (It doesn't have to be tight.) If not, then you could consider a St Petersburg-like prospect which, for each n, has probability 1/2^n of size (or impact) 2^n, in whatever units you're using. That's finite under every possible outcome, but it has infinite expected value.
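To make that concrete, here's a minimal numerical sketch (my own illustration, not something from the post): if you truncate the prospect at its first max_n outcomes, the truncated expected value grows without bound, even though every individual outcome is finite.

```python
from fractions import Fraction

def truncated_ev(max_n):
    """Expected value of the prospect counting only the outcomes 2^1, ..., 2^max_n,
    where outcome 2^n has probability 1/2^n."""
    return sum(Fraction(1, 2**n) * 2**n for n in range(1, max_n + 1))

for max_n in (10, 100, 1000):
    # Each term contributes exactly 1, so the truncated EV equals max_n:
    # it diverges as max_n grows, with no infinite outcome anywhere.
    print(max_n, float(truncated_ev(max_n)))
```

The leftover probability mass on outcomes above 2^max_n can only add to the expectation, so the full prospect's expected value exceeds max_n for every max_n.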
Section II of Carlsmith, 2021 is one of the best arguments for acausal influence I'm aware of, in case you're interested in something more convincing. (FWIW, I also thought acausal influence was crazy for a long time, and I didn't find Newcomb's problem to be a compelling reason to reject causal decision theory.)
EDIT: I've now cut the acausal stuff and just focus on unbounded duration.
This follows from the theorems I cited, but I didn't include proofs of the theorems here. The proofs are technical and tricky,[1] and I didn't want to make my post much longer or spend so much more time on it. Explaining each proof in an intuitive way could probably be a post of its own.
How long you live could be distributed like a St Petersburg gamble, e.g. for each n, with probability 1/2^n, you could live 2^n years. The expected value of that is infinite, even though you'd definitely only live a finite amount of time.
Ya, I got similar feedback on an earlier draft for making it harder to read, and tried to cut some uses of the word, but still left a bunch. I'll see if I can cut some more.
The proofs work by producing some weird set of prospects and then showing that you can't order them in a way that satisfies the axioms: applying the axioms one by one eventually violates one of them or yields a contradiction.
I think Vasco already made this point elsewhere, but I don't see why you need certainty about any specific line to have finite expectation. If, for the counterfactual payoff x, you think (perhaps after a certain point) xP(x) approaches 0 as x tends to infinity, it seems like you get finite expectation without ever having absolute confidence in any boundary (this applies to life expectancy, too).
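For what it's worth, here's a toy example of the kind of distribution I have in mind (my own illustration): unbounded support, so no finite threshold is certain not to be exceeded, yet the tail shrinks fast enough for a finite mean.

```latex
% Take P(X = 2^n) = 3 \cdot 4^{-n} for n = 1, 2, 3, \dots (these probabilities sum to 1). Then
\mathbb{E}[X] \;=\; \sum_{n=1}^{\infty} 2^{n} \cdot 3 \cdot 4^{-n}
          \;=\; 3 \sum_{n=1}^{\infty} 2^{-n} \;=\; 3,
% even though every finite bound is exceeded with some positive probability.
```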
Thanks! I had a look, and it still doesn't persuade me, for much the same reasons Newcomb's problem didn't. In roughly ascending order of importance:
Maybe this is just a technicality, but the claim "you are exposed to exactly identical inputs" seems impossible to realise with perfect precision. The simulator itself must differ in the two cases. So in the same way that the outputs of two instances of a software program, even run on the same computer in the same environment, can theoretically differ for various reasons (at a high enough zoom level they will differ), the two simulations can't be guaranteed to be identical (Carlsmith even admits this with "absent some kind of computer malfunction", but just glosses over it). On the one hand, this might be too fine a distinction to matter in practice; on the other, if I'm supposed to believe a wildly counterintuitive proposition instead of a commonsense one that seems to work fine in the real world, based on a supposed logical necessity that turns out not to be logically necessary, I'm going to be very sceptical of the proposition even if I can't find a stronger reason to reject it.
The thought experiment gives no reason why the AI system should actually believe it's in the scenario described, and that seems like a crucial element in its decision process. If, in the real world, someone put me in a room with a chalkboard and told me this is what was happening, no matter what evidence they showed, I would have some element of doubt, both about their ability (cf. point 1) and, more importantly, about their motivations. If I discovered that the world was as bizarre as in this scenario, it would be at best a coin flip for me whether I should take them at face value.
It seems contradictory to frame decision theory as applying to "a deterministic AI system" whose clones "will make the same choice, as a matter of logical necessity". There's a whole free will debate lurking underneath any decision-theoretic discussion involving recognisable agents that I don't particularly want to get into, but if you're taking away all agency from the "agent", it's hard to see what it means to advocate its adopting a particular decision theory. At that point the AI might as well be a rock, and I don't feel like anyone is concerned about which decision theory rocks "should" adopt.
I would be less interested to see a reconstruction of a proof of the theorems, and more interested to see them stated formally along with a proof that the claim follows from them.
On Carlsmith's example, we can just make it a logical necessity by assuming more. And, as you acknowledge, some distinctions can be too fine. Maybe you're only 5% sure your copy exists at all and that the conditions are right for you to get $1 million from your copy sending it.
5% * $1 million = $50,000 > $1,000, so you still make more in expectation from sending the million dollars. You break even in expected money if your decision to send $1 million increases your copy's probability of sending $1 million by 1/1,000.
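Spelling out the arithmetic I have in mind (treat the setup as assumed for illustration: sending costs you the $1,000 you could otherwise keep, and you receive $1 million if your copy sends):

```python
def expected_gain(prob_shift, cost=1_000, transfer=1_000_000):
    # prob_shift: how much your choosing to send raises the probability
    # that your copy sends you the $1 million (assumed setup, for illustration).
    return prob_shift * transfer - cost

print(expected_gain(0.05))       # 5% shift: 50,000 - 1,000 = 49,000 > 0
print(expected_gain(1 / 1_000))  # break-even: 1,000 - 1,000 = 0
```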
I do find it confusing to think about decision-making under determinism, but I think point 3 proves too much. I don't think quantum indeterminacy or randomness saves free will or agency if it weren't already saved, and we don't seem to have any other options, assuming physicalism and our current understanding of physics.
Ya, I agree you don't need certainty about the bound, but now you need certainty that the distribution isn't heavy-tailed at all. Suppose your best guess is that it looks like some distribution X, with finite expected value. Now, I suggest that it might actually be Y, which is heavy-tailed (has infinite expected value). If you assign any nonzero probability to that being right, e.g. you switch to pY + (1-p)X for some p > 0, then your new distribution is heavy-tailed, too. In general, if you think there's some chance you'd come to believe it's heavy-tailed, then you should believe now that it's heavy-tailed, because a probabilistic mixture with a heavy-tailed distribution is heavy-tailed. Or, if you think there's some chance you'd come to believe there's some chance it's heavy-tailed, then you should believe now that it's heavy-tailed.
(Vasco's claim was stronger: the difference is exactly 0 past some point.)
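The step about mixtures is just linearity of expectation, sketched below in the notation above (p is the weight on the heavy-tailed component):

```latex
% If \mathbb{E}[Y] = \infty, \mathbb{E}[X] < \infty and 0 < p \le 1, the mixture satisfies
\mathbb{E}\!\left[\, pY + (1-p)X \,\right] \;=\; p\,\mathbb{E}[Y] + (1-p)\,\mathbb{E}[X] \;=\; \infty .
```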
Hmm, I might be misunderstanding.
I already have formal statements of the theorems in the post:
1. Stochastic Dominance, Anteriority and Impartiality are jointly inconsistent.
2. Stochastic Dominance, Separability and Impartiality are jointly inconsistent.
All of those terms are defined in the section Anti-utilitarian theorems. I guess I defined Impartiality a bit informally and might have hidden some background assumptions (a preorder, so reflexivity + transitivity, and the set of prospects being every probability distribution over the set of outcomes), but the rest were formally defined.
Then, from 1, assuming Stochastic Dominance and Impartiality, Anteriority must be false. From 2, assuming Stochastic Dominance and Impartiality, Separability must be false. Therefore, assuming Stochastic Dominance and Impartiality, Anteriority and Separability must both be false.
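In propositional form, with SD, Imp, Ant and Sep abbreviating the four principles, that step is just:

```latex
% Theorem 1: \neg(\mathrm{SD} \land \mathrm{Ant} \land \mathrm{Imp}).
% Theorem 2: \neg(\mathrm{SD} \land \mathrm{Sep} \land \mathrm{Imp}).
% Assuming SD and Imp, Ant would contradict Theorem 1 and Sep would contradict Theorem 2, so:
(\mathrm{SD} \land \mathrm{Imp}) \;\Rightarrow\; (\neg \mathrm{Ant} \land \neg \mathrm{Sep}).
```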
The post is too long.
The title is bad, e.g. too provocative, clickbaity, overstates the claims or singles out utilitarianism too much (there are serious problems with other views).
EDIT: I've changed the title to "Arguments for utilitarianism are impossibility arguments under unbounded prospects". Previously, it was "Utilitarianism is irrational or self-undermining". Kind of long now, but descriptive and less provocative.
My response: I admit it's provocative and sounds like clickbait, but it literally describes what I'm arguing. Maybe I should water it down, e.g. "Utilitarianism seems irrational or self-undermining" or "Utilitarianism is plausibly irrational or self-undermining"? I guess someone could reject all of the assumed requirements of rationality used here. I'm personally sympathetic to that myself (Stochastic Dominance seems pretty hard to give up, although I think difference-making risk aversion is a plausible reason to give it up), so maybe the title even makes a stronger claim than what I'm confident in.
It's still a claim that seems plausible enough to me to state outright as-is, though. (EDIT: I also don't think the self-undermining bit should be controversial, but how much it would self-undermine is a matter of degree and subjective. Maybe "self-undermine" isn't the right word, because it suggests that utilitarianism is false, rather than just that we've weakened the positive arguments for utilitarianism.)
Also, maybe it is unfair to single out utilitarianism in particular.
Recommendation: A collection of paradoxes dealing with Utilitarianism. This seems to me to be what you wrote, and would have had me come to the post with more of a "ooo! Fun philosophy discussion" attitude rather than "well, that's a very strong claim... oh look at that, all the so-called inconsistencies and irrationalities either deal with weird infinite ethics stuff or are things I can't understand. Time to be annoyed about how the headline is poorly argued for." The latter experience is not useful or fun; the former is nice depending on the day & company.
Thanks for the feedback!
I think your general point can still stand, but I do want to point out that the results here don't depend on actual infinities (an infinite universe, infinitely long lives, infinite value), which is the domain of infinite ethics. We only need infinitely many possible outcomes and unbounded but finite value. My impression is that this is a less exotic/controversial domain (although I think an infinite universe shouldn't be controversial, and I'd guess our universe is infinite with probability >80%).
Furthermore, impossibility results in infinite ethics are problematic for everyone with impartial intuitions, but the results here seem more problematic for utilitarianism in particular. You can keep Impartiality and Pareto and/or Separability in deterministic + unbounded but finite cases here, but when extending to uncertain cases, you wouldn't end up with utilitarianism, or you'd undermine utilitarianism in doing so. You can't extend both Impartiality and Pareto to infinite cases (allowing arbitrary bijections or swapping infinitely many people in Impartiality), and this is a problem for everyone sympathetic to both principles, not just utilitarians.
This seems pretty important to me. You can handwave away standard infinite ethics by positing that everything is finite with 100% certainty, but you can't handwave away the implications of a finite-everywhere distribution with infinite EV.
(Just an offhand thought: I wonder if there's a way to fix infinite-EV distributions by positing that utility is bounded, but that you don't know what the bound is? My subjective belief is something like: utility is bounded, I don't know the bound, and the expected value of the upper bound is infinite. If the upper bound is guaranteed finite but has infinite EV, does that still cause problems?)
I think someone could hand-wave away heavy-tailed distributions, too, but rather than assigning some outcomes 0 probability or refusing to rank them, they'd be assuming that some prospects over valid outcomes aren't valid or never occur, even though they're perfectly valid measure-theoretically. Or, they might actually just assign 0 probability to outcomes outside those with a bounded range of utility. In the latter case, you could represent them with both a bounded utility function and an unbounded utility function that agree on the bounded-utility set of outcomes.
You could have moral/normative uncertainty across multiple bounded utility functions. Just make sure you don't weigh them together via maximizing expected choiceworthiness in such a way that the weighted sum of the utility functions is unbounded, because the weighted sum is itself a utility function, and if it's unbounded, the same arguments in the post will apply to it (see the sketch below). You could normalize all the utility functions first. Or, use a completely different approach to normative uncertainty, e.g. a moral parliament. That being said, the other approaches to normative uncertainty also violate Independence and can be money pumped, AFAIK.
Fairly related to this is section 6 in Beckstead and Thomas, 2022: https://onlinelibrary.wiley.com/doi/full/10.1111/nous.12462
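As a toy illustration of that failure mode (my own construction, not something from the post): infinitely many bounded utility functions with summable weights whose weighted sum is still unbounded.

```python
# Hypothetical example: u_k(x) = min(x, 4**k) is bounded (by 4**k) for each k,
# and the weights 2**-k sum to (roughly) 1, yet the weighted sum is unbounded in x.
def weighted_sum(x, n_terms=60):
    return sum(2.0**-k * min(x, 4.0**k) for k in range(1, n_terms + 1))

for x in (1e2, 1e4, 1e8):
    # Grows roughly like sqrt(x), so it's unbounded even though each u_k is bounded.
    print(x, weighted_sum(x))
```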
Hmm, in response, one might claim that if we accept Pareto (in deterministic finite cases), we should accept Ex Ante Pareto + Anteriority (including Goodsell's version), too, and that if we accept Separability in deterministic finite cases, we should accept it in uncertain and possibly unbounded but finite cases, too. This would be because the arguments for the stronger principles are similar to the arguments for the weaker, more restricted ones. So there would be little reason to satisfy Pareto and Separability only in bounded and/or deterministic cases.
Impartiality + (Ex Ante Pareto or Separability) doesn't work in unbounded but finite uncertain cases, but because of this, we should also doubt Impartiality + (Pareto or Separability) in unbounded but finite deterministic cases. And that counts against a lot more than just utilitarianism.
Personally, I would have kept the original title. Titles that are both accurate and clickbaity are the best kind: they get engagement without being deceptive.
I don't think karma is always a great marker of a post's quality or appropriateness. See an earlier exchange we had.
Unfortunately, I think clickbait also gets downvotes even if accurate, and that will drop the post down the front page or off it.
I might have gone for "Utilitarianism may be irrational or self-undermining" rather than "Utilitarianism is irrational or self-undermining".
The post is misleading, because it singles out utilitarianism too much without pointing out serious problems with other views.
This post isn't useful.
As a special case: these kinds of results have already been discussed enough in the community.
My writing is unclear. Some things could have been better explained or explained in more detail (I got similar feedback on LW here). Or my sentence structure is bad/hard to follow.
I think this subject is very important and underrated, so I'm glad you wrote the post; you raised some points that I wasn't aware of, and I would like to see people write more posts like this one. The post didn't do as much for me as it could have because I found two of its three main arguments hard to understand:
For your first argument ("Unbounded utility functions are irrational"), the post spends several paragraphs setting up a specific function that I could have easily constructed myself (for me it's pretty obvious that there exist finite utility functions with infinite EV), and then ends by saying utilitarianism "lead[s] to violations of generalizations of the Independence axiom and the Sure-Thing Principle", which I take to be the central argument, but I don't know what the Sure-Thing Principle is. I think I know what Independence is, but I don't know what you mean by "generalizations of Independence". So it feels like I still have no idea what your actual argument is.
I had no difficulty following your money pump argument.
For the third argument, the post claims that some axioms rule out expectational total utilitarianism, but the axioms aren't defined, I don't know what they mean, and I don't know how they rule out expectational total utilitarianism. (I tried to look at the cited paper, but it's not publicly available and it doesn't look like it's on Sci-Hub either.)
Thanks, this is helpful!
To respond to the points:
I can see how naming them without defining them would throw people off. In my view, it's the seemingly irrational behaviour, like getting money pumped, getting Dutch booked or paying to avoid information, that matters, not satisfying Independence or the STP. If you don't care about this apparently irrational behaviour, then you wouldn't really have any independent reason to accept Independence or the STP, except maybe that they seem directly intuitive. If I had introduced them, that could have thrown other people off or otherwise taken up much more space in an already long post to explain with concrete examples. But footnotes probably would have been good.
Good to hear!
Which argument do you mean? I defined and motivated the axioms for the two impossibility theorems with SD and Impartiality that I cite, but I did that after stating the theorems, in the Anti-utilitarian theorems section. (Maybe I should have linked that section in the summary and outline?)
Some of the (main) arguments are wrong/bad, or the main title claim is wrong.
EDIT: I've changed the title to "Arguments for utilitarianism are impossibility arguments under unbounded prospects". Previously, it was "Utilitarianism is irrational or self-undermining". Kind of long now, but descriptive and less provocative.