Thanks for the write-up and interesting commentary, Arden.
I had one question about the worry Michelle Hutchinson raised in the Addendum, and the thought that “This seems like a reason why the counterpart relation really runs him into trouble compared to other [person-affecting] views. On other such views, bringing into existence happy people seems basically always fine, whereas due to the counterparts in this case it basically never is.”
I take this to be the kind of extinction case Michelle has in mind (where for simplicity I’m bracketing currently existing people and assuming they’ll have the same level of wellbeing in every outcome). Suppose you have a choice between three options:
W1-Inegalitarian Future
a(1): +1; a(2): +2; a(3): +3
W2-Egalitarian Future
b(1): +2; b(2): +2; b(3): +2
W3-Unpopulated Future
(no one exists)
Since both W1 and W2 will yield harm while W3 won’t, it looks like W3 will come out obligatory.
I can see why one might worry about this. But I wasn’t sure how counterpart relations were playing an interesting role here. Suppose we reject counterpart theory, and adopt HMV and cross-world identity (where a(1)=b(1), a(2)=b(2), and a(3)=b(3)). Then won’t we get precisely the same verdicts (i.e., that W3 is obligatory)?
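To make the parallel concrete, here is a rough sketch of the harm comparison, under the simplifying assumption (mine, not anything stipulated in the paper) that a person's harm in an outcome is their shortfall below the wellbeing of their paired individual in the alternative populated outcome. The index-wise pairing is the same whether we read it as a counterpart relation or as cross-world identity, which is why the verdicts coincide:

```python
# Illustrative sketch only: "harm" is modeled as each person's shortfall
# below their paired individual's wellbeing in the alternative outcome.
def total_harm(world, alternative):
    """Sum each person's shortfall below their pair's level in `alternative`."""
    return sum(max(0, alt - actual) for actual, alt in zip(world, alternative))

W1 = [1, 2, 3]  # a(1), a(2), a(3)
W2 = [2, 2, 2]  # b(1), b(2), b(3), paired index-wise with W1
# W3 is unpopulated: no one exists to be harmed, so its harm is 0.

harm_W1 = total_harm(W1, W2)  # a(1) falls 1 short of b(1): total harm 1
harm_W2 = total_harm(W2, W1)  # b(3) falls 1 short of a(3): total harm 1
```

Since the computation depends only on how the individuals are paired across W1 and W2, swapping counterpart theory for cross-world identity (a(i)=b(i)) changes nothing: both populated outcomes yield harm, W3 yields none, and W3 comes out obligatory either way.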
Sadly, I don’t have a firm stance on what the right view is. Sometimes I’m attracted to the kind of view I defend in this paper, sometimes (like when corresponding with Melinda Roberts) I find myself pulled toward a more traditional person-affecting view, and sometimes I find myself inclined toward some form of totalism, or some fancy variant thereof.
Regarding extinction cases, I’m inclined to think that it’s easy for them to pull in a lot of potentially confounding intuitions. For example, in the blowing-up-the-planet example Arden presents, in addition to well-being considerations, we have intuitions about violating people’s rights by killing them without their consent, intuitions about the continued existence of various species (which would all be wiped out), intuitions about the value of various artworks (which would be destroyed if we blew up the planet), and so on. And if one thinks that many of these intuitions are mistaken (as many utilitarians will), or that they bring in issues orthogonal to the particular issues that arise in population ethics (as many others will), then one won’t want to rest one’s evaluation of a theory on cases where all of these intuitive considerations are in play.
Here’s a variant of Arden’s case which allows us to bracket those considerations. Suppose our choice is between:
Option 1: Create a new planet on which 7 billion humans are created and placed in experience machines in which they live very miserable lives (-10).
Option 2: Create a new planet on which 11.007 trillion humans are created and placed in experience machines: 1.001 trillion live miserable lives (-1), 10 trillion live great lives (+50), and 0.006 trillion live good lives (+10).
This allows us to largely bracket many of the above intuitions: humanity and the other species will still survive on our planet regardless of which option we choose, no priceless art is being destroyed, no one is being killed against their will, etc.
In this case, the position that option 1 is obligatory doesn’t strike me as that bad. (My folk intuition here is probably that option 2 is obligatory. But my intuitions here aren’t that strong, and I could easily be swayed if other commitments gave me reason to say something else in this case.)
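For the record, here is the rough arithmetic behind the two verdicts, again under my own illustrative assumption that the relevant harm is simply the total negative wellbeing of the people created (figures in trillions of welfare units):

```python
# Illustrative arithmetic only; "harm" is assumed (by me, for this sketch)
# to be the total negative wellbeing of the created people, and totals
# ignore everyone who already exists.
def totals(groups):
    """groups: list of (population in trillions, wellbeing level) pairs."""
    total = sum(n * w for n, w in groups)
    harm = sum(n * -w for n, w in groups if w < 0)
    return total, harm

opt1_total, opt1_harm = totals([(0.007, -10)])                        # 7 billion at -10
opt2_total, opt2_harm = totals([(1.001, -1), (10, 50), (0.006, 10)])

# Option 1's harm (0.07) is far below option 2's (1.001), so a
# harm-minimizing view favors option 1, even though option 2's total
# wellbeing (~499.06) dwarfs option 1's (-0.07).
```

So the two views come apart cleanly here: minimizing harm picks option 1, while totalism (and, I suspect, folk intuition) picks option 2.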