The Procreative Asymmetry is very widely held, and much discussed, by philosophers who work on population ethics (and seemingly very common in the general population). If anything, it’s the default view, rather than a niche position (except among EA philosophers). If you do a quick search for it on philpapers.org there’s quite a lot there.
You might think the Asymmetry is deeply mistaken, but describing it as a ‘niche position’ is much like calling non-consequentialism a ‘niche position’.
The Asymmetry is certainly widely discussed by academic philosophers, as shown by e.g. the philpapers search you link to. I also agree that it seems off to characterize it as a “niche view”.
I’m not sure, however, whether it is widely endorsed or even widely defended. Are you aware of any surveys or other kinds of evidence that would speak to that more directly than the fact that there are a lot of papers on the subject (which I think primarily shows that it’s an attractive topic to write about by the standards of academic philosophy)?
I’d be pretty interested in understanding the actual distribution of views among professional philosophers, with the caveat that I don’t think this is necessarily that much evidence for what view on population ethics should ultimately guide our actions. The caveat is roughly that I think the incentives of academic philosophy don’t strongly favor beliefs it would be overall good to act on, as opposed to views one can publish well about (of course there are things pushing in the other direction as well, e.g. these are people who’ve thought about it a lot and use criteria for criticizing and refining views that are more widely endorsed, so it is certainly some evidence, hence my interest).
FWIW my own impression is closer to:
The Asymmetry is widely held to be an intuitive desideratum for theories of population ethics.
As usual (cf. the founding impetus of ‘experimental philosophy’), philosophers don’t usually check whether the intuition is in fact widely held, and recent empirical work casts some doubt on that.
As usual, there are also at least some philosophers trying to ‘explain away’ the intuition (e.g. in this case Chappell 2017).
However, it turns out that it is hard to find a theory of population ethics that rationalizes the Asymmetry without having other problems. My sense is that this assessment – in part due to prominent impossibility theorems – is widely shared, and that there is likely no single widely held specific view that implies the Asymmetry.
This is basically the kind of situation that tends to spawn an ‘industry’ in academic philosophy, in which people come up with increasingly complex views that avoid known problems with previous views, other people point out new problems, and so on. And this is precisely what happened.
Overall, it is pretty hard to tell from this how many philosophers ‘actually believe’ the Asymmetry, in part because many participants in the conversation may not think of themselves as having any settled beliefs on the matter and in part because the whole language game seems to often involve “beliefs” that are at best pretty compartmentalized (e.g. don’t explain an agent’s actions in the world at large) and at worst not central examples of belief at all (perhaps more similar to how an actor relates to the beliefs of a character while enacting a play).
I think in many ways, the Asymmetry is like the view that there is some kind of principled difference between ideas and matter or that humans have free will of some sort – a perhaps widely held intuition, and certainly a fertile ground for long debates between philosophers, from which, however, it is hard to draw any clear conclusion if you are an agent who (unlike the debating philosophers) faces a high-stakes, real-world action depending on the matter. (It’s also different in some ways, e.g. it seems easier to agree on a precise statement of the Asymmetry than for some of these other issues.)
Curious how well this impression matches yours? I could imagine that the impression one gets (like me) primarily from reading the literature may be somewhat different from e.g. the vibe at conferences.
I agree with the ‘spawned an industry’ point and how that makes it difficult to assess how widespread various views really are.
As usual (cf. the founding impetus of ‘experimental philosophy’), philosophers don’t usually check whether the intuition is in fact widely held, and recent empirical work casts some doubt on that.
Magnus in the OP discusses the paper you link to in the quoted passage and points out that it also contains findings we can interpret in support of a (weak) asymmetry of some kind. Also, David (the David who’s a co-author of the paper) told me recently that he thinks these types of surveys are not worth updating on by much [edit: but “casts some doubt on” is still accurate if we previously believed people would have clear answers that favor the asymmetry] because the subjects often interpret things in all kinds of ways or don’t seem to have consistent views across multiple answers. (The publication itself mentions in the “Supplementary Materials” that framing effects play a huge role.)
Thank you, that’s interesting and I hadn’t seen this.
(I now wrote a comment elaborating on some of these inconsistencies here.)
This impression strikes me as basically spot on. It would have been more accurate for me to say that the Asymmetry is “widely held to be an intuitive desideratum for theories of population ethics”. It does have its defenders, though, e.g. Frick, Roberts, Bader. I agree that there does not seem to be any theory that rationalises this intuition without having other problems (but this is merely a specific instance of the general case that there seems to be no theory of population ethics that retains all our intuitions—hence Arrhenius’ famous impossibility result).
I’m not aware of any surveys of philosophers on their views on population ethics. AFAICT, the number of professional philosophers who are experts in population ethics—depending on how one wants to define those terms—could probably fit into one lecture room.
and seemingly very common in the general population
So consider the wording in the post:
bringing a miserable life into the world has negative value while bringing a happy life into the world does not have positive value — except potentially through its instrumental effects and positive roles
If we do a survey of 100 Americans on Positly, with that exact wording, what percentage of randomly chosen people do you think would agree? I happen to respect Positly, but I am open to other survey methodologies.
I was intuitively thinking 5% tops, but the fact that you disagree strongly takes me aback a little bit.
Note that I think you were mostly thinking about philosophers, whereas I was mostly thinking about the general population.
I’m surprised you’d have such a low threshold—I would have thought noise, misreading the question, trolling, misclicks etc. alone would push above that level.
You can imagine survey designs which would filter trolls &c, but you’re right, I should have been slightly higher based on that.
It might also be worth distinguishing stronger and weaker asymmetries in population ethics. Caviola et al.’s main study indicates that laypeople on average endorse at least a weak axiological asymmetry (which becomes increasingly strong as the populations under consideration become larger), and the pilot study suggests that people in certain situations (e.g. when considering foreign worlds) tend to endorse a rather strong one, cf. the 100-to-1 ratio.
Makes sense.
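To make the weak-versus-strong distinction above concrete, here is a minimal editorial sketch (the function, weights, and numbers are illustrative assumptions, not a model taken from Caviola et al.): an asymmetry can be expressed as a weighting factor k on added miserable lives, with the standard Asymmetry as the special case where added happy lives get zero weight.

```python
# Editorial sketch, not Caviola et al.'s model: an axiological asymmetry
# expressed as weights in a simple additive value function.
def world_value(n_happy: int, n_miserable: int, k: float = 1.0, w_happy: float = 1.0) -> float:
    """k = 1, w_happy = 1: full symmetry; k > 1: a weak asymmetry;
    w_happy = 0: the standard Asymmetry (added happy lives count for nothing)."""
    return w_happy * n_happy - k * n_miserable

print(world_value(100, 1))               # symmetric weighting: 99, clearly positive
print(world_value(100, 1, k=100))        # a 100-to-1 trade ratio: 0, roughly break-even
print(world_value(100, 1, w_happy=0.0))  # standard Asymmetry: -1, no matter how many happy lives
```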
Wow, I’d have said 30-65% for my 50% confidence interval, and <5% is only about 5-10% of my probability mass. But maybe we’re envisioning this survey very differently.
Did a test run with 58 participants (I got two attempted repeats):
So you were right, and I’m super surprised here.
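For a rough sense of how much a sample of that size can show, here is an editorial sketch of a normal-approximation 95% confidence interval for the agreeing share; the counts are made up, since the actual test-run tallies aren’t reproduced in this thread.

```python
# Editorial sketch with made-up counts: how precise is a ~58-person test run?
import math

n = 58        # sample size from the test run
agree = 25    # hypothetical number agreeing with the exact wording (illustrative only)
p_hat = agree / n
se = math.sqrt(p_hat * (1 - p_hat) / n)  # normal-approximation standard error
low, high = max(0.0, p_hat - 1.96 * se), min(1.0, p_hat + 1.96 * se)
print(f"point estimate {p_hat:.0%}, 95% CI roughly [{low:.0%}, {high:.0%}]")
# With n = 58 the interval spans roughly +/-13 percentage points: enough to
# separate "5% tops" from "30-65%", but not to pin the share down precisely.
```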
There is a paper by Lucius Caviola et al of relevance:
We found that people do not endorse the so-called intuition of neutrality according to which creating new people with lives worth living is morally neutral. In Studies 2a-b, participants considered a world containing an additional happy person better and a world containing an additional unhappy person worse.
Moreover, we also found that people’s judgments about the positive value of adding a new happy person and the negative value of adding a new unhappy person were symmetrical. That is, their judgments did not reflect the so-called asymmetry—according to which adding a new unhappy person is bad but adding a new happy person is neutral.
The study design is quite different from Nuno’s, though. No doubt the study design matters.
In 2a, it looks like they didn’t explicitly get subjects to try to control for impacts on other people in their question like Nuno did, and (I’m not sure if this matters) they assumed the extra person would be added to a world of a million neutral life people. They just asked, for each of adding a neutral life, adding a bad life and adding a good life:
In terms of its overall value, how much better or worse would this world (containing this additional person) be compared to before?
2b was pretty similar, but used either an empty world or a world of a billion neutral life people.
2b involves an empty world—where there can’t be an effect on other people—and replicates 2a afaict.
Fair, my mistake.
I wonder if the reason for adding the happy person to the empty world is not welfarist, though, e.g. maybe people really dislike empty worlds, value life in itself or think empty worlds lack beauty or something. EDIT: Indeed, it seemed some people preferred to add an unhappy life rather than not, basically no one preferred not to add a happy life, and people tended to prefer adding a neutral life rather than not, based on figure 5 (an answer of 4 means “equally good”, above means better and below means worse). Maybe another explanation compatible with welfarist symmetry is that if there’s at least one life, good or bad, they expect good lives eventually, and for them to outweigh the bad.
Also, does the question actually answer whether anyone in particular holds the asymmetry, or are they just averaging responses across people? You could have some people who actually give greater weight to adding a happy life to an empty world than adding a miserable life to an empty world (which seems to be the case, based on Figure 5), along with people holding the standard asymmetry or weaker versions, and they could roughly cancel out in aggregate to support symmetry.
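A toy numerical illustration of that aggregation worry (editorial, with made-up ratings on a response scale where 4 means “equally good”): two groups with opposite individual asymmetries can produce a perfectly symmetric average.

```python
# Editorial toy example: opposite individual asymmetries washing out in the mean.
# Ratings use a scale where 4 = "equally good", higher = better, lower = worse.
group_a = {"add_happy": 6, "add_miserable": 3}  # weights the happy addition more (+2 vs -1)
group_b = {"add_happy": 5, "add_miserable": 2}  # closer to the standard asymmetry (+1 vs -2)

avg_happy = (group_a["add_happy"] + group_b["add_happy"]) / 2              # 5.5, i.e. +1.5
avg_miserable = (group_a["add_miserable"] + group_b["add_miserable"]) / 2  # 2.5, i.e. -1.5
print(avg_happy - 4, 4 - avg_miserable)  # 1.5 1.5 -- the aggregate looks symmetric
```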
Words cannot express how much I appreciate your presence Nuno.
Sorry for being off-topic but I just can’t help myself. This comment is such a perfect example of the attitude that made me fall in with this community.
It is “very widely held” by philosophers only in the sense that it is a pre-theoretic intuition that many people, including philosophers, share. It is not “very widely held” by philosophers on reflection.
The intuition seems to be almost universally held. I agree many philosophers (and others) think that this intuition must, on reflection, be mistaken. But many philosophers, even after reflection, still think the procreative asymmetry is correct. I’m not sure how interesting it would be to argue about the appropriate meaning of the phrase “very widely held”. Based on my (perhaps atypical) experience, I’d guess that if you polled those who had taken a class on population ethics, about 10% would agree with the statement “the procreative asymmetry is a niche position”.
Which version of the intuition? If you just mean ‘there is greater value in preventing the creation of a life with X net utils of suffering than in creating a life with X net utils of pleasure’, then maybe. But people often claim that ‘adding net-happy people is neutral, whilst adding net-suffering people is bad’ is intuitive, and there was a fairly recent paper claiming to find that this wasn’t what ordinary people thought when surveyed: https://www.iza.org/publications/dp/12537/the-asymmetry-of-population-ethics-experimental-social-choice-and-dual-process-moral-reasoning
I haven’t actually read the paper to check if it’s any good though...
I upvoted this comment because I think there’s something to it.
That said, see the comment I made elsewhere in this thread about the existence of selection effects. The asymmetry is hard to justify for believers in an objective axiology, but philosophers who don’t believe in an objective axiology likely won’t write paper after paper on population ethics.
Another selection effect is that consequentialists are morally motivated to spread their views, which could amplify consensus effects (even if it applies to consequentialists on both sides of the split, one group being larger and better positioned to start with can amplify the proportions after a growth phase). For instance, before the EA-driven wave of population ethics papers, presumably the field would have been split more evenly?
Of course, if EA were to come out largely against any sort of population-ethical asymmetry, that’s itself some evidence that the position isn’t convincing. (At the same time, a lot of EAs take moral realism seriously* and I don’t think they’re right – I’d be curious what a poll of anti-realist EAs would tell us about population-ethical asymmetries of various kinds and various strengths.)
*I should mention that this includes Magnus, author of the OP. I probably don’t agree with his specific arguments for there being an asymmetry, but I do agree with the claim that the topic is underexplored/underappreciated.
What exactly do you mean by “have an objective axiology” and why do you think it makes it (distinctively) hard to defend asymmetry? (I have an eccentric philosophical view that the word “objective” nearly always causes more trouble than it’s worth and should be tabooed.)
The short answer:
Thinking in terms of “something has intrinsic value” privileges particular answers. For instance, in this comment today, MichaelPlant asked Magnus the following:
[...] why do we have reason to prevent what is bad but no reason to bring about what is good?
The comment presupposes that there’s “something that is bad” and “something that is good” (in a sense independent of particular people’s judgments – this is what I meant by “objective”). If we grant this framing, any arguments for why “create what’s good” is less important than “don’t create what’s bad” will seem ad hoc!
Instead, for people interested in exploring person-affecting intuitions (and possibly defending them), I recommend taking a step back to investigate what we mean when we say things like “what’s good” or “something has intrinsic value.” I think things are good when they’re connected to the interests/goals of people/beings, but not in some absolute sense that goes beyond it. In other words, I only understand the notion of (something like) “conditional value,” but I don’t understand “intrinsic value.”
The longer answer:
Here’s a related intuition:
There’s a tension between the beliefs “there’s an objective axiology” and “people are free to choose their life goals.”
Many effective altruists hesitate to say, “One of you must be wrong!” when one person cares greatly about living forever while the other doesn’t. By contrast, when two people disagree on population ethics “One of you must be wrong!” seems to be the standard (implicit) opinion. I think these two attitudes are in tension. To the degree people are confident that life goals are up to the individual to decide/pursue, I suggest they lean in on this belief. I expect that resolving the tension in that way – leaning in on the belief “people are free to choose their life goals;” giving up on “there’s an axiology that applies to everyone” – makes my framework more intuitive and gives a better sense of what the framework is for, what it’s trying to accomplish.
In my post, “Population Ethics Without [an Objective] Axiology,” I defended a specific framework for thinking about population ethics. From the post:
If there were an objective axiology, I might be making a mistake in how I plan to live a fulfilled, self-oriented life. Namely, if the way I chose to live my life doesn’t give sufficient weight to things that are intrinsically good according to the objective axiology, then I’m making some kind of mistake. I think it’s occasionally possible for people to make “mistakes” about their goals/values if they’re insufficiently aware of alternatives and would change their minds if they knew more, etc. However, I don’t think it’s possible for truly-well-informed reasoners to be wrong about what they think they deeply care about, and I don’t think “becoming well-informed” leads to convergence of life goals among people/reasoners.
I’d say that the main force behind arguments against person-affecting views in population ethics is usually something like the following:
“We want to figure out what’s best for morally relevant others. Well-being differences in morally relevant others should always matter – if they don’t matter on someone’s account, then this particular account couldn’t be concerned with what’s best for morally relevant others.”
As you know, person-affecting views tend to come out in such a way that they say things like “it’s neutral to create the perfect life and (equally) neutral to create a merely quite good life.” (Or they may say that whether to create a specific life depends on other options we have available, thereby violating the axiom of independence of irrelevant alternatives.)
These features of person-affecting views show that well-being differences don’t always matter on those views. Some people will interpret this as “person-affecting views are incompatible with the goal of ethics – figuring out what’s best for morally relevant others.”
However, all of this is begging the question. Who says that the same ethical rules should govern existing (and sure-to-exist) people/beings as well as possible people/beings? If there’s an objective axiology, it’s implicit that the same rules would apply (why wouldn’t they?). Without an objective axiology, though, all we’re left with is the following:
Ethics is about interests/goals.
Nothing is intrinsically valuable, but various things can be conditionally valuable if grounded in someone’s interests/goals.
The rule “focus on interests/goals” has comparatively clear implications in fixed population contexts. The minimal morality of “don’t be a jerk” means we shouldn’t violate others’ interests/goals (and perhaps even help them where it’s easy and plays to our comparative advantage). The ambitious morality of “do the most moral/altruistic thing” has a lot of overlap with something like preference utilitarianism. (Though there are instances where people’s life goals are under-defined, in which case people with different takes on “do the most moral/altruistic thing” may wish to fill in the gaps according to subjectivist “axiologies” that they endorse.)
On creating new people/beings, “focus on interests/goals” no longer gives unambiguous results:
(1) The number of interests/goals isn’t fixed
(2) The types of interests/goals aren’t fixed
This leaves population ethics under-defined with two different perspectives: that of existing or sure-to-exist people/beings (what they want from the future) and that of possible people/beings (what they want from their potential creators).
Without an objective axiology, any attempt to unify these perspectives involves subjective judgment calls.
So, without an objective axiology, there are these two separate perspectives. We can view person-affecting views as making the following statement:
“‘Doing the most moral/altruistic thing’ isn’t about creating new people with new interests/goals. Instead, it’s about benefitting existing (or sure-to-exist) people/beings according to their interests/goals.”
In other words, person-affecting views concentrate their caring budget on one of two possible perspectives (instead of trying to design an axiology that incorporates both). That seems like a perfectly defensible approach to me!
Still, we’re left with the question, “If your view focuses on existing (and sure-to-exist) people, why is it bad to create a miserable person?”
Someone with person-affecting views could reply as follows:
“While I concentrate my caring budget on one perspective (existing and sure-to-exist people/beings), that doesn’t mean my concern for the interests of possible people/beings is zero. My approach to dealing with merely possible people is essentially ‘don’t be a jerk.’ That’s exactly why I’m sometimes indifferent between creating a medium-happy possible person and a very happy possible person. I understand that the latter is better for possible people/beings, but since I concentrate my caring budget on existing (and sure-to-exist) people/beings, bringing the happier person into existence usually isn’t a priority to me. Lastly, you’re probably going to ask ‘why is your notion of ‘don’t be a jerk’ asymmetric?’ I.e., why not ‘don’t be a jerk’ by creating people who would be grateful to be alive (at least in instances where it’s easy/cheap to do so)? To this, my reply is that creating a specific person singles out that person (from the sea of possible people/beings) in a way that not creating them does not. There’s no answer to ‘What do possible people/beings want?’ that applies to all conceivable beings, so I cannot do right by all of them, anyway. By not giving an existence slot to someone who would be grateful to exist, I admit that I’m arguably failing to benefit a particular subset of possible people/beings (the ones who would be grateful to get the slot). Still, other possible people/beings don’t mind not getting the spot, so there’s at least a sense in which I didn’t disrespect possible people/beings as a whole interest group. By contrast, if I create someone who hates being alive, saying ‘Other people would be grateful in your spot’ doesn’t seem like a defensible excuse. ‘Not creating happy people’ only means I’m not giving maximum concern to possible people/beings, whereas ‘creating a miserable person’ means I’m flat-out disrespecting someone specific, who I chose to ‘highlight’ from the sea of all possible people/beings (in the most real sense) – there doesn’t seem to be a defensible excuse for that.”
The long answer: My post Population Ethics Without [an Objective] Axiology: A Framework.
I’m not sure I really follow (though I admit I’ve only read the comment, not the post you’ve linked to.) Is the argument something like we should only care about fulfilling preferences that already exist, and adding people to the world doesn’t automatically do that, so there’s no general reason to add happy people if it doesn’t satisfy a preference of someone who is here already? Couldn’t you show that adding suffering people isn’t automatically bad by the same reasoning, since it doesn’t necessarily violate an existing preference? (Also, on the word “objective”: you can definitely have a view of morality on which satisfying existing preferences or doing what people value is all that matters, but it is mind-independently true that this is the correct morality, which makes it a realist view as academic philosophers classify things, and hence a view on which morality is objective in one sense of “objective”. Hence why I think “objective” should be tabooed.)
Is the argument something like we should only care about fulfilling preferences that already exist, and adding people to the world doesn’t automatically do that, so there’s no general reason to add happy people if it doesn’t satisfy a preference of someone who is here already?
Pretty much, but my point is only that this is a perfectly defensible way to think about population ethics, not that I expect everyone to find it compelling over alternatives.
As I say in the longer post:
Just like the concept “athletic fitness” has several defensible interpretations (e.g., the difference between a 100m sprinter and a marathon runner), so (I argue) does “doing the most moral/altruistic thing.”
I agree with what you write about “objective” – I’m guilty of violating your advice.
(That said, I think there’s a sense in which preference utilitarianism would be unsatisfying as a “moral realist” answer to all of ethics because it doesn’t say anything about what preferences to adopt. Or, if it did say what preferences to adopt, then it would again be subject to my criticism – what if objective preference utilitarianism says I should think of my preferences in one particular way but that doesn’t resonate with me?)
Couldn’t you show that adding suffering people isn’t automatically bad by the same reasoning, since it doesn’t necessarily violate an existing preference?
I tried to address this in the last paragraph of my previous comment. It gets a bit complicated because I’m relying on a distinction between “ambitious morality” and “minimal morality” ( = “don’t be a jerk”) which also only makes sense if there’s no objective axiology.
I don’t expect the following to be easily intelligible to people used to thinking within the moral realist framework, but for more context, I recommend the section “minimal morality vs. ambitious morality” here. This link explains why I think it makes sense to have a distinction between minimal morality and ambitious morality, instead of treating all of morality as the same thing. (“Care morality” vs. “cooperation morality” is a similar framing, which probably tells you more about what I mean here.) And my earlier comment (in particular, the last paragraph in my previous comment) already explained why I think minimal morality contains a population-ethical asymmetry.
I’d guess contractualists and rights-based theorists (less sure about deontologists generally) would normally take the asymmetry to be true, because if someone is never born, there are no claims or rights of theirs to be concerned with.
I don’t know how popular it is among consequentialists, virtue ethicists or those with mixed views. I wouldn’t expect it to be extremely uncommon or for the vast majority to accept it.