Second comment, on your critique of Meacham...
As a (swivel-eyed) totalist, I’m loath to stick up for a person-affecting view, but I don’t find your ‘extremely radical implications’ criticism of the view compelling and I think it is an example of an unpromising way of approaching moral reasoning in general. The approach I am thinking of here is one that selects theories by meeting intuitive constraints rather than by looking at the deeper rationales for the theories.
I think a good response for Meacham would be that if you find the rationale for his theory compelling, then it is simply correct that it would be better to stop everyone existing. Similarly, totalism holds that it would be good to make everyone extinct if there is net suffering over pleasure (including among wild animals). Many might also find this counter-intuitive. But if you actually believe the deeper theoretical arguments for totalism, then this is just the correct answer.
I agree that Meacham’s view on extinction is wrong, but that is because of the deeper theoretical reasons—I think adding happy people to the world makes that world better, and I don’t see an argument against that in the paper.
The impossibility theorems in population ethics show formally that we cannot have a theory that satisfies all of our intuitions about cases. So we should not use isolated case intuitions to select theories. We should instead focus on the deeper rationales for theories.
Yeah, I mean you’re probably right, though I have a bit more hope in the ‘does this thing spit out the conclusions I independently think are right’ methodology than you do. Partly that’s because I think some of the intuitions that are, jointly, impossible to satisfy à la the impossibility theorems are more important than others—so I’m ok trying to hang on to a few of them at the expense of others. Partly it’s because I feel unsure of how else to proceed; that’s part of why I got out of the game!
I also think there’s something attractive in the idea that moral theories just are webs of implications, and that the things to hold on to are the things you’re most sure are right for whatever reason, and those might be the implications rather than the underlying rationales. I think whether that’s right might depend on your metaethics—if you think the moral truth is determined by your moral commitments, then being very committed to a set of outputs could make it the case that the theories that imply them are true. I don’t really think that’s right as a matter of metaethics, though I’m not sure.
I think it’s important to ask why you think it’s horrible to bomb the planet into non-existence. Whatever reason you have, I suspect it probably just simplifies down to you disagreeing with the core rationale of person-affecting views.
For example, perhaps you’re concerned that bombing the planet will prevent a future that you expect to be good. In this case you’re just disagreeing with the very core of person-affecting views: that adding happy people can’t be good.
Or perhaps you’re concerned by the suffering caused by the bombing. Note that Meacham’s person-affecting view counts that suffering as ‘harmful’ too; it just holds that the bombing would avoid a greater quantity of harm in the future. Note also that many people, including totalists, hold the intuition that it is OK to cause some harm to prevent a greater harm. So what you’re probably really disagreeing with is the claim that you would actually be avoiding a greater harm by bombing. And that is probably because you reject the claim that adding some happy future people can never outweigh the harm of adding some unhappy future people. In other words, once again, you’re simply disagreeing with the very core of person-affecting views: that adding happy people can’t be good.
Or perhaps you don’t like the bombing for deontological reasons, i.e. you just can’t countenance that such an act could be OK. In that case you don’t want a moral view that is purely consequentialist, without any deontological constraints. So you’re disagreeing with another core feature of person-affecting views like Meacham’s: pure consequentialism.
I could probably go on, but my point is this: I do believe you find the implication horrible, but my guess is that this is because you fundamentally don’t accept the underlying rationale.
Strong upvote. I thought this was a great reply: not least because you finally came clean about your eyes, but also because I think the debate in population ethics is currently too focused on outputs and not interested enough in the rationales for those outputs.