Thank you for sharing this post—it’s well written, well structured, relevant and concise. (And I agree with the conclusion, which I’m sure makes me like it more!)
Glad you enjoyed it!
Thanks for the post!
I’m particularly interested in the third objection you present—that the value of “lives barely worth living” may be underrated.
I wonder to what extent the intuition that world Z is bad compared to A is influenced by framing effects. For instance, “lives that are net positive, but not by much”, or something similar, sounds much more valuable to me than “lives barely worth living”, although the two mean the same thing in population ethics (as I understand it).
I’m also sympathetic to the claim that one’s response to world Z may be affected by one’s perception of the goodness of ordinary (human) life. Perhaps Buddhists, who are convinced that ordinary life is pervaded by suffering, view any life that is net positive as remarkably good.
Do you know if there is any psychological literature on either of these hypotheses? I’d be interested in researching both.
However, even if we could show that the repugnance of the repugnant conclusion is influenced in these ways, or even rendered unreliable, I doubt the same would be true for the “very repugnant conclusion”:
for any world A with billions of happy people living wonderful lives, there is a world Z+ containing both a vast number of mildly satisfied lizards and billions of suffering people, such that Z+ is better than A.
(Credit to Joe Carlsmith, who mentioned this on some podcast.)
You raised some interesting points!
It seems plausible that a framing effect could be at play here, and that different people would draw the line between a life that’s worth living and one that’s not at different points. I don’t know of any literature on this, but maybe take a look at the Happier Lives Institute’s work.
And I’ll need to think more seriously about the very repugnant conclusion. That’s a tough one!
Instead of rejecting any of the Benign Addition Principle, Non-anti-egalitarianism, and Transitivity, you can reject the Independence of Irrelevant Alternatives (IIA). I think this is more plausible than rejecting Transitivity, and pretty plausible generally, although many may disagree. See my comment here for an illustration: https://forum.effectivealtruism.org/posts/DCZhan8phEMRHuewk/person-affecting-intuitions-can-often-be-money-pumped?commentId=ZadcAxa2oBo3zQLuQ
I didn’t think of that!
I’m curious about why you find rejecting IIA generally plausible.
I think it’s plausible that some interests matter in relative terms between possible outcomes, rather than only in terms that can be described absolutely. I think it can be the case that it’s neither better nor worse in itself to have a specific preference at all, no matter how satisfied or frustrated, even though it’s better for that preference to be more satisfied between two outcomes in both of which it exists. Take a child’s dream of going to the moon, a particular person’s wish to be able to walk when they can’t, or the wish to be with a loved one (e.g. grief over a loss). I don’t think taking away a frustrated preference makes someone better off in itself, except for certain kinds of preferences. I don’t think adding a (satisfied) preference is ever good in itself.
Part of the reason might be that there’s no natural unique 0 or neutral point, i.e. a single degree of preference satisfaction/frustration where we should be indifferent about having that preference at all. Or, at least, you can imagine degrees between perfectly satisfied and perfectly frustrated, but no natural way to set some partial satisfaction/frustration states on either side of 0.
Other common intuitions may violate IIA. We might say you’re not obligated to make a great sacrifice for others, but if you are going to, it could be obligatory to do the most good with the same level of sacrifice (see this example and the discussion in that thread). Similarly for having a child: you have no obligation to have one at all, and it may be permissible to have a child as long as they at least have a good life, but if you do have a child, and you could easily guarantee a much better life for them than just good, you may be obligated to do so. Frick discusses these as “conditional reasons”.
I guess these reasons could apply similarly to transitivity. An important issue with intransitivity is that it’s not clear what act to choose if each available option is beaten by another, but intransitive views can be turned into transitive views that violate IIA through voting methods, especially beatpath/Schulze, like in this paper.
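To make the beatpath/Schulze idea concrete, here is a minimal, generic sketch (not the specific construction in the linked paper; the options and pairwise margins below are made-up numbers). Given pairwise comparisons that form a cycle, the method computes strongest-path strengths and returns a transitive ranking:

```python
def schulze_ranking(options, d):
    """Beatpath/Schulze: turn pairwise margins d[x][y] (possibly cyclic) into a transitive ranking."""
    # Start with the direct pairwise wins as path strengths.
    p = {x: {y: (d[x][y] if d[x][y] > d[y][x] else 0) for y in options} for x in options}
    # Floyd-Warshall-style pass: a path is only as strong as its weakest link.
    for k in options:
        for i in options:
            for j in options:
                if i != j and i != k and j != k:
                    p[i][j] = max(p[i][j], min(p[i][k], p[k][j]))
    # x ranks above y iff x's strongest path to y beats y's strongest path back to x.
    wins = {x: sum(1 for y in options if x != y and p[x][y] > p[y][x]) for x in options}
    return sorted(options, key=lambda x: -wins[x])

# Hypothetical cycle: A beats B by 3, B beats C by 2, C beats A by 1.
worlds = ["A", "B", "C"]
d = {x: {y: 0 for y in worlds} for x in worlds}
d["A"]["B"], d["B"]["C"], d["C"]["A"] = 3, 2, 1
print(schulze_ranking(worlds, d))  # ['A', 'B', 'C'] -- transitive, despite the cyclic input
```

The resulting ranking violates IIA in the sense that adding or removing an option can change how the remaining options are ordered, but for any fixed menu of options it always yields a transitive ordering to act on.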
Perhaps I am thinking about this all wrong, but isn’t it the case that whether or not Z is better than A, most people would prefer a “ZA” world (Z’s population AND A’s happiness) to Z?
Therefore, the repugnant conclusion is only a problem if there is, in fact, a tradeoff between population size and happiness. However, this does not appear to be the case in a non-Malthusian world.
For instance, it seems pretty clear that we live in a “ZA” world compared to the world of only 300 years ago: population, life expectancy, and dignity have all improved dramatically at the same time.
There is also no clear reason to believe that a ZA world relative to today’s world is impossible.
If, instead of the possible, we turn to the likely, the trend appears to be that population ultimately stabilizes. As such, the real-world task within our lifespans should be increasing the happiness of a mostly stable population, which is about as far as you can get from a repugnant-conclusion-style dilemma.
Yes, and the total view would bring you to that conclusion as well!
ZA > Z > A
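As a rough illustration of why the total view ranks them this way, here is a toy calculation (the population sizes and welfare levels are made-up numbers; only the ordering matters):

```python
# Total welfare = population size * average welfare (toy numbers for illustration).
def total_welfare(population, avg_welfare):
    return population * avg_welfare

A  = total_welfare(5 * 10**9,  100)   # billions of people with wonderful lives
Z  = total_welfare(10**13,       1)   # a vast population with lives barely worth living
ZA = total_welfare(10**13,     100)   # Z's population at A's level of happiness

print(ZA > Z > A)  # True
```

Any additive view gives the same ordering so long as Z’s population is large enough for its total to exceed A’s.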
All ethical arguments are based on intuition, and here one intuition is doing a lot of work: “we tend to underestimate the quality of lives barely worth living”. To me this is the important crux, because the rest of the argument is well-trodden. Yes, moral philosophy is hard and there are no obvious unproblematic answers, and yes, small numbers add up. Tännsjö, Zapffe, Metzinger, and Benatar play this weird trick where they introspectively set an arbitrary line separating net-negative from net-positive experience, extrapolate it to the rest of humanity, and on that basis argue that most people spend most of their time on the wrong side of it. More standard intuitions point in the opposite direction: for not-super-depressed people, things can and do get really bad before not-existing starts to outshine existing! Admittedly “not-super-depressed people” is a huge qualifier, but worldwide, the number of people whose lives look terrible from our affluent Western perspective yet who still want to exist swamps the number of the (even idly) suicidally depressed. It’s very implausible to me that I exist right above this line of neutrality when 1) most people have much worse lives than me and 2) they generally like living.
And whenever I see this argument that liking life is just a cognitive bias I imagine this conversation:
A: How are you?
B: Fine, how are–
A: Actually your life sucks.