I found this paper really interesting—so, thanks!
Two questions and a comment
First question: in broad terms, what do you think moral philosophers should infer from psychological studies of this type in general, and from this one in particular? One perspective would be for moral philosophers to update their views towards that of the population—the “500 million Elvis fans can’t be wrong” approach.
This is tempting, except that the views of the average person appear inconsistent (i.e. they weigh suffering more heavily but also think creating neutral lives is good) and implausible by the lights of views amongst philosophers (e.g. those surveyed believe adding unhappy lives can be good where it increases average happiness). Even if the views were coherent and plausible (e.g. those surveyed converged on a single, consistent view), it would still seem open to philosophers to discount the views of non-experts who hadn’t familiarised themselves enough with the literature and so did not constitute epistemic peers.
Second question: for the adding-people experiment, how confident should we be that those surveyed were thinking solely about the value of adding the new person, as it relates to that person themselves, and not instead thinking about the effects adding a life has on other people? In skimming the paper, I couldn’t see anything about how you had tested that participants were answering the right question.
I ask because, when I speak to people about the value of adding new lives, it is incredibly hard to get them to think only about the value relating to the created individuals, and not to those individuals’ parents, society, etc. Yet, to find out their views on population ethics, people need to consider only the effects regarding the created individual themself. I might say that adding a happy life is very good, but only because I am thinking it is good for the parents, etc.; conversely, I could answer that adding unhappy lives is bad because they are a drain on others. In either case, I wouldn’t have answered the question you wanted me to. As such, it’s not clear to me that your experiment has really tested what you said it has.
A comment: in the experiment about adding lives, you describe the populations as ‘empty’ and ‘full’. This is confusing, as in the paper ‘empty’ actually means 1 million people, not a genuinely empty world, and ‘full’ means 10 billion (which is questionably ‘full’, too). I think you should flag this more clearly and/or use different terms; ‘small’ and ‘large’ might be better. I can imagine people having different intuitions if there are genuinely no people existing at the time, and likewise if the world seems more genuinely full, e.g. if it had 100 billion people.
Thanks, these are great points!
As for your first question about the philosophical implications of this psychological research: In general, the primary goal of our project was a descriptive one, and it would require a separate project (ideally led by philosophers) to figure out what the possible normative implications are. I also believe that we need much more empirical research to understand in greater detail what exactly the psychological mechanisms are that drive people’s population ethical views. I see this as a very first exploration.
That said, I agree with much of what Jack says in the other comment. We should be cautious in simply accepting lay people’s intuitive reactions to these tricky moral dilemmas or even making our policies based on them. Most people’s reactions are very uninformed (most have never thought about these questions before), their reactions are often inconsistent, framing-dependent and — as we saw in some of our studies — people themselves tend to revise their opinions after more careful reasoning.
At the end of our paper, we say:

However, this [the fact that people’s judgments are inconsistent and biased] does not mean that it is not valuable to examine lay people’s population ethical intuitions. Population ethics has important implications for policy making and global priority setting. Philosophers often rely on their own intuitions when discussing population ethics. An understanding of the psychology of these population ethical intuitions can therefore be informative. For example, greater awareness of the specific psychological mechanisms and biases driving these intuitions could elucidate which ones should be endorsed under reflection and which ones not. The apparent inconsistencies between some of these intuitions demonstrate that it may be impossible to formulate a population ethical theory that is both consistent and intuitive (cf. impossibility theorems; Arrhenius, 2000). One possible solution could be a debunking approach: attempting to understand the psychological underpinnings of different philosophical positions, with an eye to identifying those that result from unreliable or biased cognitive processes. This in turn allows the resolution of inconsistency by discounting certain intuitions as untrustworthy (cf. Greene, 2014). Another possible resolution is to accept the fact that we are internally conflicted and, as a consequence, uncertain which moral theory is right (MacAskill, Bykvist, & Ord, 2020).
As for your second question about the adding-people experiment (Studies 2a-b): You are right that participants may misinterpret our dilemmas and questions. This is a general issue with studying such abstract questions, and we tried our best to make things as clear as possible to people. In most studies, for example, we double-checked that people understood and accepted our assumptions (and excluded from the analyses participants who failed these checks).
In Studies 2a-b, the question we asked was “In terms of its overall value, how much better or worse would this world (containing this additional person) be compared to before?” (1 Much worse − 7 Much better). Even though this seems pretty clear to me, I think you’re right that it’s possible that some participants also considered the indirect effects that adding a new person would have on other people. One reason why I believe our finding would largely stay the same, even if we ensured that participants did not take the indirect effects into account, is the empty world condition in Study 2b. (And this relates to your comment.) In Study 2b, we indeed had a condition where the initial world contained zero people (empty world) and another condition where the initial world contained 10 billion people (full world). And even in the empty world condition, where you’d expect such indirect-effect considerations to be ruled out, we still find the same pattern. (That being said, I believe it’s possible that a different question and a different framing could yield different results.)
Regarding your comment, let me clarify: in Study 2a, the initial world contained 1 million people, but in Study 2b we tried to replicate this effect with a scenario where the initial world contained either zero people or 10 billion people. I believe this should be described correctly in the paper (if not, please let me know). But I noticed that there was an incorrect paragraph in our supplementary materials, which may have led to this confusion and which I’ve now fixed (thanks for making me aware of it!).
Thanks for this answer! It was really helpful. I hadn’t spotted that the ‘empty world’ really was empty in the experiment; not sure how I missed that.
I know this wasn’t directed at me but I have a few thoughts.
I think there are various useful things one can take from this study. A few main ones off the top of my head:
- Understanding people’s views allows us to frame things in more appealing ways. For example, if we want people to take AI safety seriously and we find they weigh suffering very heavily, we can focus on arguments that misaligned AI could cause vast amounts of suffering, rather than that aligned AI could cause vast amounts of happiness. That’s just one possible example.
- We can also pinpoint where people may be getting things “wrong” and/or how developed their thinking is on these topics. The paper shows that after some deliberation people moved towards total views and away from averagist ones (and indeed away from the “sadistic conclusion”). This implies that people have not thought much about these topics and that education can shift their views, which may be desirable, especially from the point of view of those looking to increase concern for the far future.
- Probably very minor updating towards general population views. I agree with you that we should discount general population views when they are clearly very silly, but I don’t think we should discount them entirely. To expand on this, some views appear to me to rest on intuition more than others, and so, if we find that not many people actually hold the necessary intuition, that may reduce our confidence in the view. For example, in my opinion what person-affecting views have going for them is the strong intuition that some people have in a procreation asymmetry/person-affecting restriction. Otherwise, I would say person-affecting views encounter lots of issues (the non-identity problem, problems related to intransitivity/IIA, incomparability) without much “objective” philosophical justification (I realise the claim that such justification exists is controversial). A view like totalism arguably has more objective philosophical justification beyond just intuition (e.g. simplicity, symmetry, clear parallels in reasoning to fixed-population cases), with perhaps one issue, the repugnant conclusion, which many don’t accept to be an issue in the first place. So ultimately, if we find people don’t hold the core intuitions of person-affecting views, we may find ourselves asking what they really have going for them. I appreciate this is a very controversial bullet point I’ve written here and that you probably won’t agree with it!
With regards to your second question and comment I think you make fair points.
Most disagreements between professional philosophers on population ethics come down to disagreements about intuition:
- Alice supports the total view because she has an intuition that the Repugnant Conclusion is not actually repugnant.
- Bob adopts a person-affecting view and rejects the independence of irrelevant alternatives (IIA) because his intuition is that IIA doesn’t matter.
- Carol rejects transitivity of preferences because her intuition is that it is the least important premise.
But none of them ultimately have any justification beyond their intuition. So I think it’s totally fair and relevant to survey non-philosophers’ intuitions.
Well, all disagreements in philosophy ultimately come down to intuitions, not just those in population ethics! The question I was pressing is what, if anything, the authors think we should infer from data about intuitions. One might think you should update toward people’s intuitions, but that’s not obvious to me, not least when (1) in aggregate, people’s answers are inconsistent and (2) this isn’t something they’ve thought about.