Sadly, I don’t have a firm stance on what the right view is. Sometimes I’m attracted to the kind of view I defend in this paper, sometimes (like when corresponding with Melinda Roberts) I find myself pulled toward a more traditional person-affecting view, and sometimes I find myself inclined toward some form of totalism, or some fancy variant thereof.
Regarding extinction cases, I’m inclined to think that it’s easy to pull in a lot of potentially confounding intuitions. For example, in the blowing-up-the-planet example Arden presents, in addition to well-being considerations, we have intuitions about violating people’s rights by killing them without their consent, intuitions about the continued existence of various species (which would all be wiped out), intuitions about the value of various artworks (which would be destroyed if we blew up the planet), and so on. And if one thinks that many of these intuitions are mistaken (as many utilitarians will), or that they bring in issues orthogonal to the particular issues that arise in population ethics (as many others will), then one won’t want to rest one’s evaluation of a theory on cases where all of these intuitive considerations are in play.
Here’s a variant of Arden’s case that allows us to bracket those considerations. Suppose our choice is between:
Option 1: Create a new planet on which 7 billion humans are created and placed in experience machines in which they live very miserable lives (-10).
Option 2: Create a new planet on which 11.007 trillion humans are created and placed in experience machines: 1.001 trillion in machines in which they live miserable lives (-1), 10 trillion in machines in which they live great lives (+50), and 0.006 trillion in machines in which they live good lives (+10).
This allows us to largely bracket many of the above intuitions — humanity and the other species will still survive on our planet regardless of which option we choose, no priceless art is being destroyed, no one is being killed against their will, etc.
In this case, the position that Option 1 is obligatory doesn’t strike me as that bad. (My folk intuition here is probably that Option 2 is obligatory. But my intuitions here aren’t that strong, and I could easily be swayed if other commitments gave me reason to say something else in this case.)
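For what it’s worth, here’s a quick back-of-the-envelope tally of the two options (my own sketch; it uses a simple sum of negative well-being as a stand-in for aggregate harm, though person-affecting views can of course measure harm in other ways):

```python
# Quick tally of total well-being and aggregate harm for the two options.
# "Harm" here is just the sum of negative well-being -- one simple proxy,
# not the only way a person-affecting view might count it.

options = {
    "Option 1": [(7e9, -10)],          # 7 billion people at -10
    "Option 2": [(1.001e12, -1),       # 1.001 trillion at -1
                 (10e12, 50),          # 10 trillion at +50
                 (0.006e12, 10)],      # 0.006 trillion at +10
}

for name, groups in options.items():
    total = sum(n * w for n, w in groups)
    harm = sum(n * -w for n, w in groups if w < 0)
    print(f"{name}: total well-being = {total:+.3e}, harm = {harm:.3e}")

# Option 1: total well-being = -7.000e+10, harm = 7.000e+10
# Option 2: total well-being = +4.991e+14, harm = 1.001e+12
```

So totalism pretty clearly favors Option 2, while a view that simply minimizes aggregate harm favors Option 1, which helps explain why intuitions can come apart in this case.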
Thanks for this; that’s an interesting idea. It certainly seems like a useful way to bracket possibly confounding intuitions!