I’m still getting through your post, so I apologize if this is addressed later in it.
In somewhat formal terms, the capacity for welfare for some subject, S, is determined by the range of welfare values S[12] experiences in some proper subset of physically possible worlds.
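Just to check I'm reading that correctly (this formalization is my own, not from the post): letting $w(S, x)$ denote S's welfare in possible world $x$ and $W$ the relevant proper subset of physically possible worlds, the capacity for welfare would be something like the range $\left[ \min_{x \in W} w(S, x),\ \max_{x \in W} w(S, x) \right]$.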
EDIT: I don’t think it necessarily means rejecting the independence of irrelevant alternatives (IIA), but doing so might be part of some approaches.
I think this means rejecting the independence of irrelevant alternatives (IIA), something consequentialists typically take for granted, often without realizing it, simply by assuming we can rank all conceivable outcomes on a single ranking. Rejecting IIA means that whether choice A is better or worse than choice B can depend on what other alternatives are available. I’m not personally convinced that IIA is true or that it’s false (and I think rejecting it can resolve important paradoxes and impossibility results in population ethics, like the repugnant conclusion), but I wouldn’t want to reject IIA just to define and value something like capacity.
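To illustrate with a toy example (my own, not from the post): IIA requires that if $A \succ B$ when the available menu is $\{A, B\}$, then $A \succ B$ still holds when the menu is $\{A, B, C\}$. Rejecting IIA permits menu-dependent rankings such as $A \succ_{\{A,B\}} B$ but $B \succ_{\{A,B,C\}} A$, where $\succ_M$ means "better than, given menu $M$"; the mere availability of $C$ reverses how $A$ and $B$ compare.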
Another assumption this seems to make is that S is actually the same subject across the different outcomes (in which they have different levels of welfare). I think there’s a stronger argument against this assumption in cases of genetic enhancement, which could be used to support valuing capacities if we think subjects who differ genetically or differ significantly in their capacities are different subjects. But I also think attempts to identify subjects across outcomes or across time are poorly justified, pretty arbitrary, and face serious objections. This is the problem of personal identity; Parfit’s Relation R seems like the best solution I’m aware of, but it also seems too arbitrary to me. I lean towards empty individualism.
Hi Michael, thanks for the comment. The definition is meant to be neutral with respect to IIA.
The definition does assume that either S is identical across the relevant worlds or (as I mention in footnote 12) the subjects in the world stand in the counterpart relation to one another. Transworld identity is a notoriously difficult topic. I’m here assuming that there is some reasonable solution to the problem.
I’m not sure how much genetic change an individual can undergo whilst remaining the same individual. (I suspect lots, but intuitions seem to differ on this question.) As I mention in footnote 9, it’s also unclear how much genetic change an individual can undergo whilst remaining the same species.
Thanks! I wasn’t aware of transworld identity being a separate problem.
I’m not sure how much genetic change an individual can undergo whilst remaining the same individual. (I suspect lots, but intuitions seem to differ on this question.)
I doubt that there will be a satisfying answer here (especially in light of transworld identity), and I think this undermines the case for different degrees of moral status. If we want to allow morally relevant features to sometimes vary continuously without changing identity, then, imo, the only non-arbitrary lines to draw are where a feature is completely absent in one individual but present in another. But I think there are few features that are non-instrumentally morally relevant; indeed, only welfare and welfare capacity seem like they could be morally relevant on their own. So it seems this could only work if there are different kinds of welfare, as in objective list theories or with higher and lower pleasures.
As I mention in footnote 9, it’s also unclear how much genetic change an individual can undergo whilst remaining the same species.
I think species isn’t fundamental anyway; its definition is fuzzy, and it’s speciesist to refer to it non-instrumentally. It’s not implausible to me that, if identity across worlds works at all (which I doubt), a pig in one world is identical to an individual who isn’t a pig in another world.
I wrote some thoughts related to moral status (not specifically welfare capacity) and personal identity here (EDIT: to clarify, the context was a discussion of the proposed importance of moral agency to moral status, but you could substitute many other psychological features for moral agency and the same argument should apply):
It seems to me that any specific individual is only a moral agent sometimes, at most. For example, if someone is so impaired by drugs or so overcome with emotion that they can’t reason, are they a moral agent in those moments? Is someone a moral agent while they’re asleep (whether dreaming or not)? Are these cases so different from removing and then reinserting and reattaching the brain structures responsible for moral agency? In all of these cases, the relevant connections can’t be used due to the circumstances; the last case is the clearest, since the structures have actually been removed, but you could say they have been functionally removed in the others. I don’t think it’s accurate to say “they can engage in rational choice” under these circumstances.
Perhaps people are moral agents most of the time, but wouldn’t your account mean their suffering matters less in itself while they aren’t moral agents, even as normally developed adults? In particular, I think intense suffering will often prevent moral agency. The loss of agency may be bad in itself (although I’m not sure I agree), but then the loss of agency from sleep would be similarly bad in itself. So, ignoring differences in long-term effects, a human suffering intensely shouldn’t be much worse than a human being forced to sleep plus a nonhuman animal suffering just as intensely; and if the nonhuman animal’s suffering doesn’t matter much in itself relative to the (temporary) loss of moral agency, then neither would the human’s. Torturing someone may often not be much worse than forcing them to sleep (ignoring long-term effects), if the torture is intense enough to prevent moral agency. Or: deliberately, coercively and temporarily preventing a person’s moral agency and torturing them isn’t much worse than just deliberately, coercively and temporarily preventing their moral agency. This seems very counterintuitive to me, and I certainly wouldn’t feel this way about it if I were the victim. Suffering in itself can be far worse than death.
Now, let’s suppose identity and moral status are preserved to some degree in more commonsensical ways, and that the human prefrontal cortex confers extra moral status. Then there might be weird temporal effects. Committing in advance to an act of destroying someone’s prefrontal cortex and torturing them would be worse than destroying their prefrontal cortex and then later and independently torturing them: in the first case, their extra moral status still applies to the torture, because the commitment was made beforehand, while in the second, once their prefrontal cortex is destroyed, they lose the extra moral status that would have made the torture worse.
I think what you’re saying makes sense to me, but I’m confused by the fact that you say “I wrote some thoughts related to moral status (not specifically welfare capacity) and personal identity here”, but the passage then appears to be about moral agency rather than moral status/patienthood.
And occasionally the passage appears to use moral agency as if it meant moral status/patienthood. E.g., “Perhaps people are moral agents most of the time, but wouldn’t your account mean their suffering matters less in itself while they aren’t moral agents, even as normally developed adults”. Although perhaps that reflects the particular arguments that passage of yours was responding to.
Could you clarify which concept you were talking about in that passage?
(It looks to me like essentially the same argument you make could hold in relation to moral status anyway, so I’m not saying this undermines your points.)
The original context for that comment was in a discussion where moral agency was proposed to be important, but I think you could substitute other psychological features (autonomy, intelligence, rationality, social nature, social attachments/love, etc.) for moral agency and the same argument would apply to them.
Very excited about this series. Thanks!