Parfit’s distinction between agent-neutral aims (e.g. maximise happiness) and agent-relative aims (e.g. care for your family) strikes me as more semantic than substantive. All moral reasoning depends on the agent’s situation, and identity (like being a parent) can be viewed as part of that situation. Take Peter Singer’s drowning child for example: the moral responsibility to act arises because you are standing there and you can save the child. That situational fact is decisive, much like being a parent is. In this sense, even utilitarianism relies on agent-specific facts, making it functionally agent-relative. I’m not sure any moral theory is truly agent-neutral in practice.
I disagree; I think the difference is substantive.
A utilitarian form of consequentialism might tell Alice to save the drowning child in front of her, while telling Bob to donate to the AMF, but despite acting differently, both Alice and Bob are pursuing the same ultimate agent-neutral aim: to maximise welfare. The agent-relative ‘aims’ of saving the child or making the donation are merely instrumental. They exist only as a means to an end, the end being the fundamental agent-neutral aim that Alice and Bob have in common.
This might sound like semantics, but I think the difference can be made clearer by considering situations involving conflict.
Suppose that Alice and Bob are in complete agreement about what the correct theory of ethics is. They are also in complete agreement on every question of fact (and wherever they are uncertain about a question of fact, they agree on how to model this uncertainty, e.g. perhaps they are Bayesians with identical subjective probabilities for every conceivable proposition). This does not imply that they will act identically, because they may still have different capacities. As you point out, Alice might have greater capacity to help the child drowning in front of her than Bob does, and so any sensible theory will tell her, rather than Bob, to do it. But still, there is an important difference between the case where they are consequentialists and the case where they are not.
If Alice and Bob subscribe to a consequentialist theory of ethics, then there can be no conflict between them. If Alice realises that saving the child is going to interfere with Bob’s pursuit of donating, or vice versa, then they should be able to work the conflict out between them and agree on a way of coordinating to achieve the best outcome, as judged by their shared ultimate aims. This is possible because their shared aims are agent-neutral.
But if Alice and Bob subscribe to a non-consequentialist theory (e.g. one that says we should give priority to our own family) then it is still possible for them to end up in conflict with one another, despite being in complete agreement on the answer to every normative and empirical question. For example, they might each pursue an outcome which is best for their respective families, and this may involve competing over the same resources.
If I recall correctly, Parfit examines this difference around conflicts in detail in Reasons and Persons. He considers the particular case of prisoner’s-dilemma-style conflicts (where each party acting in their own interests leaves them both worse off than if they had cooperated) and claims this gives a decisive argument against non-consequentialist theories which do not at least switch to more agent-neutral recommendations in such circumstances (and he argues this includes ‘common-sense morality’).
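To make the structure of that kind of conflict concrete, here is a toy example of my own (the numbers are illustrative, not Parfit’s). Suppose Alice and Bob can each either cooperate or favour their own family over a shared pool of resources. If both cooperate, each family gets a benefit of 3; if both favour their own, each gets 2; and if one favours their own while the other cooperates, the family of the one who favours gets 4 while the other family gets 1. Whatever the other does, each parent does better for their own family by favouring it, yet if both do so, each family ends up with 2 rather than the 3 it would have received under mutual cooperation. Two agent-neutral consequentialists, by contrast, would simply agree on the jointly best outcome, and the dilemma would never arise.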