It implies that there wouldn’t be anything wrong with immediately killing everyone reading this, their families, and everyone else, since this supposedly wouldn’t be destroying anything positive.
That’s not how many people with the views Magnus described would interpret their views.
For instance, let’s take my article on tranquilism, which Magnus cites. It says this in the introduction:
Tranquilism is not meant as a standalone moral theory, but as a way to think about well-being and the value of different experiences. Tranquilism can then serve as a building block for more complex moral views where things other than experiences also matter morally.
Further in the text, it contains the following passage:
As a theory limited to the evaluation of experienced well-being, tranquilism is compatible with pluralistic moral views where things other than experiences – for instance the accomplishment of preferences and life goals – can be (dis)valuable too.
And at the end in the summary:
Tranquilism is not committed to the view that cravings are all that matter. Our motivation is multifaceted, and next to impulsive motivation through cravings, we are also motivated by desires to achieve certain goals. People do not solely live for the sake of their personal well-being; we may also (or even exclusively) hold other goals, including goals about the welfare of others or goals about the state of the world. Thinking about morality in terms of goals (or “ends”) has inspired rationality-based accounts of cooperation such as Kantianism (arguably), contractualism, or coordinated decision-making between different value systems.
Furthermore, if one chooses to regard the achievement of preferences or goals as valuable in itself, this can inspire moral axiologies such as preference-based consequentialism or coherent extrapolation, either as a complement or an alternative to one’s theory of well-being. (And of course, one’s goals may include many other components, including non-moral or non-altruistic ones.)
I generally think EAs are too fond of single-minded conceptions of morality. I see ethics as being largely about people’s interests/goals. On that perspective, it would be preposterous to kill people against their will to prevent future suffering.
That said, people’s “goals” are often under-defined, and population ethics as a whole is under-defined (it isn’t fixed how many people there will be or what types of goals new people will have), so there’s also room for an experience-focused “axiology” like tranquilism to deal with cases that are under-defined according to goal-focused morality.
I think there’s a bit of confusion around the conclusion “there’s nothing with intrinsic value.” You seem to be assuming that the people who come to this conclusion completely share your framework for how to think about population ethics and then conclude that where you see “intrinsic value,” there’s nothing in its place. So you interpret them as thinking that killing people is okay (edit: or would be okay absent considerations around cooperation or perhaps moral uncertainty). However, when I argue that “nothing has intrinsic value,” I mostly mean “this way of thinking is a bit confused and we should think about population ethics in an entirely different way.” (Specifically, things can be conditionally valuable if they’re grounded in people’s interests/goals, but they aren’t “intrinsically valuable” in the sense that it’s morally pressing to bring them about regardless of circumstances.)
Thanks for the thoughtful reply. You’re right, you can avoid the implications I mentioned by adopting a preference/goal-focused framework. (I’ve edited my original comment to flag this; thanks for helping me recognize it.) That does resolve some problems, but I think it also breaks most of the original post’s arguments, since they weren’t made in (and don’t easily fit into) a preference-focused framework. For example:
The post argues that making happy people isn’t good and making miserable people is bad, because creating happiness isn’t good and creating suffering is bad. But it’s unclear how this argument can be translated into a preference-focused framework.
Could it be that “satisfying preferences isn’t good, and frustrating preferences is bad”? That doesn’t make sense to me; there doesn’t seem to be a meaningful distinction between satisfying a preference and keeping it from being frustrated.
Could it be that “satisfying positive preferences isn’t good, and satisfying negative preferences is good”? But that seems hard to maintain, since whether we call some preference positive or negative seems pretty arbitrary (e.g., do I have a positive preference to eat or a negative preference to not be hungry? Is there a meaningful difference?).
The second section of the original post emphasizes extreme suffering and how it might not be outweighable. But what does this mean in a preference-focused context? Extreme preference frustration? I suspect, for many, that doesn’t have the intuitive horribleness that extreme suffering does.
The third section of the post focuses on surveys that ask questions about happiness and suffering, so we can’t easily generalize these results to a preference-focused framework.
(I also agree—as I tried to note in my original comment’s first bullet point—that pluralistic or “all-things-considered” views avoid the implications I mentioned. But I think ethical views should be partly judged based on the implications they have on their own. The original post also seems to assume this, since it highlights the implications total utilitarianism has on its own rather than as a part of some broader pluralistic framework.)
That does resolve some problems, but I think it also breaks most of the original post’s arguments, since they weren’t made in (and don’t easily fit into) a preference-focused framework.
My impression of the OP’s primary point was that asymmetric views are under-discussed. Many asymmetric views are preference-based and this is mentioned in the OP (e.g., the link to Anti-frustrationism or mention of Benatar).
Of the experience-based asymmetric views discussed in the OP, my posts on tranquilism and suffering-focused ethics mention value pluralism and the idea that things other than experiences (i.e., mostly preferences) could also be valuable. Given these explicit mentions, it seems false to claim that “these views don’t easily fit into a preference-focused framework.”
Probably similarly, the OP links to posts by Teo Ajantaival, which I’ve only skimmed, but they contain a lengthy and nuanced-seeming discussion of why minimalist axiologies, properly construed, don’t have the implications you ascribed to them.
The NU FAQ is a bit more single-minded in its style/approach, but on the question “Does negative utilitarianism solve ethics?” it says “ethics is nothing that can be ‘solved.’” This at least tones down the fanaticism a bit and opens up options to incorporate other principles or perspectives. (Also, it contains an entire section on NIPU – negative idealized preference utilitarianism. So, that may count as another preference-based view alluded to in the OP, since the NU FAQ doesn’t say whether it finds N(H)U or NIPU “more convincing.”)
The post argues that making happy people isn’t good and making miserable people is bad, because creating happiness isn’t good and creating suffering is bad. But it’s unclear how this argument can be translated into a preference-focused framework.
I’m not sure why you think the argument would have to be translated into a preference-focused framework. In my previous comment I wanted to say the following:
(1) The OP mentions that asymmetric positions are underappreciated and cites some examples, including Anti-Frustrationism, which is (already) a preference-based view.
(2) While the OP does discuss experience-focused views that say nothing is of intrinsic value, those views are compatible with a pluralistic conception of “ethics/morality” where preferences could matter too. Therefore, killing people against their will to reduce suffering isn’t a clear implication of the views.
Neither (1) nor (2) requires translating a specific argument from experiences to preferences. (That said, I think it’s actually easier to argue for an asymmetry in preference contexts. The notion that acquiring a new preference and then fulfilling it is a good in itself seems counterintuitive. Relatedly, the tranquilist conception of suffering is more like a momentary preference than an ‘experience,’ and this shift IMO made it easier to justify the asymmetry.)
Could it be that “satisfying preferences isn’t good, and frustrating preferences is bad”?
Why do you want to pack the argument into the framing “What is good and what is bad?” I feel like this approach of talking about what’s good or bad is an artificially limited way to do population ethics. When something is good, does that mean we have to create as much of it as possible? That’s a weird framework! At the very least, I want to emphasize that this is far from the only way to think about what matters.
In my post Dismantling Hedonism-inspired Moral Realism, I wrote the following:
Pleasure’s “goodness” is under-defined
I concede that there’s a sense in which “pleasure is good” and “suffering is bad.” However, I don’t think that brings us to hedonist axiology, or any comprehensively specified axiology for that matter.
Behind the statement “pleasure is good,” there’s an under-defined and uncontroversial claim and a specific but controversial one. Only the under-defined and uncontroversial claim is correct.
Under-defined and uncontroversial claim: All else equal, pleasure is always unobjectionable and often something we come to desire.
Specific and controversial claim: All else equal, we should pursue pleasure with an optimizing mindset.
This claim is meant to capture things like:
that, all else equal, it would be a mistake not to value all pleasures
that no mental states without pleasure are in themselves desirable
that, all else equal, more pleasure is always better than less pleasure
According to moral realist proponents of hedonist axiology, we can establish, via introspection, that pleasure is good in the second, “specific and controversial” sense. However, I don’t see how that’s possible from mere introspection!
Unlike the under-defined and uncontroversial claim, the specific and controversial claim not only concerns what pleasure feels like, but also how we are to behave toward pleasure in all contexts of life. To make that claim, we have to go far beyond introspecting about pleasure’s nature.
Introspection fundamentally can’t account for false consensus effects (“typical mind fallacy”). My error theory is that moral realist proponents of hedonist axiology tend to reify intuitions they have about pleasure as intrinsic components of pleasure.
Even if it seems obvious to a person that the way pleasure feels automatically warrants the pursuit of such pleasures (at some proportionate effort cost), the fact that other people don’t always see things that way should give them pause. Many critics of hedonist axiology are philosophically sophisticated reasoners (consider, for example, that hedonism is not especially popular in academic philosophy), so it would be uncharitable to shrug off this disagreement. For instance, it would be uncharitable and unconvincing to say that the non-hedonists are (e.g.) chronically anhedonic or confused about the difference between instrumental and intrinsic goods. To maintain that hedonist axiology is the foundation for objective morality, one would need a more convincing error theory.
I suspect that many proponents of hedonist axiology indeed don’t just “introspect on the nature of pleasure.” Instead, I get the impression that they rely on an additional consideration, a hidden background assumption that does most of the heavy lifting. I think that background assumption has them put the cart before the horse.
In the quoted passages above, I argued that the way hedonists think of “pleasure is good” smuggles in unwarranted connotations. Similarly and more generally, I think the concept “x is good,” the way you and others use it for framing discussions on population ethics, bakes in an optimizing mindset around “good things ought to be promoted.” This should be labelled as an assumption we can question, rather than as the default for how to hold any discussion on population ethics. It really isn’t the only way to do moral philosophy. (In addition, I personally find it counterintuitive.)
(I make similar points in my recent post on a framework proposal for population ethics, which I’ve linked to in previous comments here.)
(I also agree—as I tried to note in my original comment’s first bullet point—that pluralistic or “all-things-considered” views avoid the implications I mentioned. But I think ethical views should be partly judged based on the implications they have on their own. The original post also seems to assume this, since it highlights the implications total utilitarianism has on its own rather than as a part of some broader pluralistic framework.)
Okay, that helps me understand where you’re coming from. I feel like “ethical views should be partly judged based on the implications they have on their own” is downstream of the question of pluralism vs. single-minded theory. In other words, when you evaluate a particular view, it already has to be clear what scope it has. Are we evaluating the view as “the solution to everything in ethics,” or as “a theory about the value of experiences that doesn’t necessarily say that experiences are all that matters”? If the view is presented as the latter (which, again, is explicitly the case for at least two articles the OP cited), then that’s what it should be evaluated as. Views should be evaluated on exactly the scope that they aspire to have.
Overall, I get the impression that you approach population ethics with an artificially narrow lens about what sort of features views “should” have and this seems to lead to a bunch of interrelated misunderstandings about how some others think about their views. I think this applies to probably >50% of the views the OP discussed rather than just edge cases. That said, your criticisms apply to some particular proponents of suffering-focused ethics and some texts.
Of the experience-based asymmetric views discussed in the OP, my posts on tranquilism and suffering-focused ethics mention value pluralism and the idea that things other than experiences (i.e., mostly preferences) could also be valuable. Given these explicit mentions, it seems false to claim that “these views don’t easily fit into a preference-focused framework.” [...] I’m not sure why you think [a certain] argument would have to be translated into a preference-focused framework.
I think this misunderstands the point I was making. I meant to highlight how, if you’re adopting a pluralistic view, then to defend a strong population asymmetry (the view emphasized in the post’s title), you need reasons why none of the components of your pluralistic view value making happy people.* This gets harder the more pluralistic you are, especially if you can’t easily generalize hedonic arguments to other values. As you suggest, you can get the needed reasons by introducing additional assumptions/frameworks, like rejecting the principle that it’s better for there to be more good things. But I wouldn’t call that an “easy fit”; that’s substantial additional argument, sometimes involving arguing against views that many readers of this forum find axiomatically appealing (like that it’s better for there to be more good things).
(* Technically you don’t need reasons why none of the views consider the making of happy people valuable, just reasons why overall they don’t. Still, I’d guess those two claims are roughly equivalent, since I’m not aware of any prominent views which hold the creation of purely happy people to be actively bad.)
Besides that, I think at this point we’re largely in agreement on the main points we’ve been discussing?
I’ve mainly meant to argue that some of the ethical frameworks that the original post draws on and emphasizes, in arguing for a population asymmetry, have implications that many find very counterintuitive. You seem to agree.
If I’ve understood, you’ve mainly been arguing that there are many other views (including some that the original post draws on) which support a population asymmetry while avoiding certain counterintuitive implications. I agree.
Your most recent comment seems to frame several arguments for this point as arguments against the first bullet point above, but I don’t think they’re actually arguments against it, since the views you’re defending aren’t the ones my most-discussed criticism applies to (though that does limit the applicability of the criticism).
I think this misunderstands the point I was making. I meant to highlight how, if you’re adopting a pluralistic view, then to defend a strong population asymmetry (the view emphasized in the post’s title), you need reasons why none of the components of your pluralistic view value making happy people.
Thanks for elaborating! I agree I misunderstood your point here.
(I think preference-based views fit neatly into the asymmetry. For instance, Peter Singer, arguably the most prominent exponent of preference utilitarianism at the time, initially weakly defended an asymmetric view in Practical Ethics. He only changed his view on population ethics once he became a hedonist. I’m not aware of any text that explicitly defends preference-based totalism. By contrast, there are several texts defending asymmetric preference-based views: Benatar, Fehige, Frick, and the younger Singer.)
as you suggest, you can get the needed reasons by introducing additional assumptions/frameworks, like rejecting the principle that it’s better for there to be more good things.
Or that “(intrinsically) good things” don’t have to be a fixed component in our “ontology” (in how we conceptualize the philosophical option space). Or, relatedly, that the formula “maximize goods minus bads” isn’t the only way to approach (population) ethics. Not because it’s conceptually obvious that specific states of the world aren’t worthy of taking serious effort (and even risks, if necessary) to bring about. Instead, because it’s questionable to assume that “good states” are intrinsically good, that we should bring them about regardless of circumstances, independently of people’s interests/goals.
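(To make the contrast concrete: the framing I’m questioning can be written down as an optimization formula, and the alternative as a different one. The notation below is my own gloss – outcomes A, per-person goods g_i and bads b_i – not something from the OP or this thread:)

\[
\text{“maximize goods minus bads”:}\quad \max_A \sum_i \big(g_i(A) - b_i(A)\big)
\qquad \text{vs.} \qquad
\text{“fix what’s actively problematic”:}\quad \min_A \sum_i b_i(A)
\]

The point of contention is whether the left-hand framing is mandatory for doing population ethics at all, not what either formula outputs in a given case.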
Besides that, I think at this point we’re largely in agreement on the main points we’ve been discussing?
I agree that we’re mainly in agreement. To summarize the thread, I think we’ve kept discussing because we both felt the other party was presenting a slightly unfair summary of how many views a specific criticism does or doesn’t apply to (or applies to “easily” vs. “only with some additional, non-obvious assumptions”).
I still feel a bit like that now, so I want to flag that out of all the citations from the OP, the NU FAQ is really the only one where it’s straightforward to say that one of the two views within the text – NHU but not NIPU – implies that it would (on some level, before other caveats) be good to kill people against their will (as you claimed in your original comment).
From further discussion, I then gathered that you probably meant that specific arguments from the OP could straightforwardly imply that it’s good to kill people. I see the connection there. Still, two points I tried to make that speak against this interpretation:
(1) People who buy into these arguments mostly don’t think their views imply killing people.
(2) To judge what an argument “in isolation” implies, we need some framework for (population) ethics. The framework that totalists in EA rely on is question-begging and often not shared by proponents of the asymmetry.
Fair points!
I think preference-based views fit neatly into the asymmetry.
Here I’m moving on from the original topic, but if you’re interested in following this tangent—I’m not quite getting how preference-based views (specifically, person-affecting preference utilitarianism) maintain the asymmetry while avoiding (a slightly/somewhat weaker version of) “killing happy people is good.”
Under “pure” person-affecting preference utilitarianism (ignoring broader pluralistic views of which this view is just one component, and also ignoring instrumental justifications), clearly one reason why it’s bad to kill people is that this would frustrate some of their preferences. Under this view, is another (pro tanto) reason why it’s bad to kill (not-entirely-satisfied) people that their satisfaction/fulfillment is worth preserving (i.e. is good in a way that outweighs associated frustration)?
My intuition is that one answer to the above question breaks the asymmetry, while the other revives some very counterintuitive implications.
If we answer “Yes,” then, through that answer, we’ve accepted a concept of “actively good things” into our ethics, rejecting the view that ethics is just about fixing states of affairs that are actively problematic. Now we’re back in (or much closer to?) a framework of “maximize goods minus bads” / “there are intrinsically good things,” which seems to (severely) undermine the asymmetry.
If we answer “No,” on the grounds that fulfillment can’t outweigh frustration, this would seem to imply that one should kill people, whenever their being killed would frustrate them less than their continued living. Problematically, that seems like it would probably apply to many people, including many pretty happy people.
After all, suppose someone is fairly happy (though not entirely, constantly fulfilled), is quite myopic, and only has a moderate intrinsic preference against being killed. Then, the preference utilitarianism we’re considering seems to endorse killing them (since killing them would “only” frustrate their preferences for a short while, while continued living would leave them with decades of frustration, amid their general happiness).
There seem to be additional bizarre implications, like “if someone suddenly gets an unrealizable preference, even if they mistakenly think it’s being satisfied and are happy about that, this gives one stronger reasons to kill them.” (Since killing them means the preference won’t go unsatisfied as long.)
(I’m assuming that frustration matters (roughly) in proportion to its duration, since e.g. long-lasting suffering seems especially bad.)
(Of course, hedonic utilitarianism also endorses some non-instrumental killing, but only under what seem to be much more restrictive conditions—never killing happy people.)
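(To spell out the arithmetic this thought experiment turns on, here’s a toy sketch. All numbers and parameter names are entirely made up for illustration; they don’t represent anyone’s considered view.)

```python
# Toy comparison of total preference frustration under the "No" answer,
# i.e., minimize frustration, with no positive credit for fulfillment.
# All units and numbers are hypothetical.

YEARS_REMAINING = 40           # decades of continued life
BACKGROUND_FRUSTRATION = 0.1   # mild ongoing frustration per year (a fairly happy person)
ANTI_KILLING_WEIGHT = 2.0      # a merely moderate intrinsic preference against being killed

def frustration_if_killed() -> float:
    # Killing frustrates the anti-killing preference once, briefly.
    return ANTI_KILLING_WEIGHT

def frustration_if_alive() -> float:
    # Frustration accumulates (roughly) in proportion to duration.
    return BACKGROUND_FRUSTRATION * YEARS_REMAINING

print(frustration_if_killed())  # 2.0
print(frustration_if_alive())   # 4.0 -> minimizing frustration favors killing here
```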
Under this view, is another (pro tanto) reason why it’s bad to kill (not-entirely-satisfied) people that their satisfaction/fulfillment is worth preserving (i.e. is good in a way that outweighs associated frustration)?
I would answer “No.”
If we answer “No,” on the grounds that fulfillment can’t outweigh frustration, this would seem to imply that one should kill people, whenever their being killed would frustrate them less than their continued living. Problematically, that seems like it would probably apply to many people, including many pretty happy people.
The preference against being killed is as strong as the happy person wants it to be. If they have a strong preference against being killed, then the preference frustration from being killed would be a lot worse than the preference frustration from an unhappy decade or two – it depends on how the person herself would want to make these choices.
I haven’t worked this out as a formal theory but here are some thoughts on how I’d think about “preferences.”
(The post I linked to primarily focuses on cases where people have well-specified preferences/goals. Many people will have under-defined preferences and preference utilitarians would also want to have a way to deal with these cases. One way to deal with under-defined preferences could be “fill in the gaps with what’s good on our experience-focused account of what matters.”)
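(Connecting this back to the toy sketch above: on this reply, the decisive parameter is the person’s own weighting of the anti-killing preference, which she can set high. Same made-up units and hypothetical names as before.)

```python
# Continuation of the hypothetical sketch: a strong, self-chosen preference
# against being killed flips the earlier comparison.

YEARS_REMAINING = 40
BACKGROUND_FRUSTRATION = 0.1    # mild ongoing frustration per year
ANTI_KILLING_WEIGHT = 10.0      # the person strongly wants not to be killed

frustration_if_killed = ANTI_KILLING_WEIGHT                      # 10.0
frustration_if_alive = BACKGROUND_FRUSTRATION * YEARS_REMAINING  # 4.0

# Minimizing frustration now favors letting the person live.
assert frustration_if_alive < frustration_if_killed
```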