Consequentialism asserts that whether an action is morally right or wrong is determined by its outcomes. The common view is that other moral theories, such as deontology, virtue ethics, and contractualism, stand apart from consequentialism. A closer look at the motivations and principles underlying these theories, however, suggests a different picture. Trace any moral claim within any moral theory back to its origins, and the pursuit of better outcomes inevitably emerges. After all, what other purpose would a Kantian imperative, an Aristotelian virtue, or a Rawlsian contract serve, if not to ultimately improve the world in some way? Detached from its outcomes, a moral theory would be arbitrary and devoid of purpose. Put another way, a “consequentialist moral theory” is a tautology.

The real question is which strategy produces the best outcomes, taking higher-order consequences into account. Should we evaluate each action individually, or consistently apply certain heuristics? More precisely, to what extent should we delegate ethical autonomy to individual consciousness-moments in order to optimise outcomes? Different moral theories give different answers. Depending on the theory one subscribes to, optimal outcomes result from following certain rules (deontology), cultivating certain character traits (virtue ethics), adhering to social contracts (contractualism), maximising utility (utilitarianism), and so on. In light of these considerations, is every moral theory inherently consequentialist?
You might say that, as a psychological reality, a moral theory is unlikely to succeed unless people believe its adoption tends to promote good consequences.
But nothing logically requires moral theories to ultimately dissolve into actions that promote good consequences… Kant’s categorical imperative famously forbids both lying and sparing murderers from capital punishment, regardless of whether the whole world burns as a result.
While Kant’s ethics doesn’t logically reduce to consequentialism, the categorical imperative seems to rest on assumptions about long-term outcomes. Kant’s insistence on a universal prohibition against lying appears grounded in the belief that a strict norm of truth-telling creates a more stable and morally reliable society, even if it leads to worse outcomes in rare cases. So while consequences aren’t the explicit justification, they seem to determine the principles we find reasonable and can will to become a universal law.
If the claim is that every moral theory is equivalent to ‘rule consequentialism’, maybe you have more of a case. But ‘act consequentialism’ is very distinct, I think.
You might find pages 7-8 of this PDF helpful.
I really like the way Derek Parfit distinguishes between consequentialist and non-consequentialist theories in ‘Reasons and Persons’.
All moral theories give people aims. A consequentialist theory gives everyone the same aims (e.g. maximize total happiness). A non-consequentialist theory gives different people different aims (e.g. look after your own family).
There is a real, important difference there. Not all moral theories are consequentialist.
Parfit’s distinction between agent-neutral aims (e.g. maximise happiness) and agent-relative aims (e.g. care for your family) strikes me as more semantic than substantive. All moral reasoning depends on the agent’s situation, and identity (like being a parent) can be viewed as part of that situation. Take Peter Singer’s drowning child, for example: the moral responsibility to act arises because you are standing there and can save the child. That situational fact is decisive, much like being a parent is. In this sense, even utilitarianism relies on agent-specific facts, making it functionally agent-relative. I’m not sure any moral theory is truly agent-neutral in practice.
I disagree; I think the difference is substantive.
A utilitarian form of consequentialism might tell Alice to save the drowning child in front of her, while it tells Bob to donate to the AMF, but despite acting differently, both Alice and Bob are pursuing the same ultimate agent-neutral aim: to maximize welfare. The agent-relative ‘aims’ of saving the child or making the donation are merely instrumental aims. They exist only as a means to an end, the end being the fundamental agent-neutral aim that both Alice and Bob have in common.
This might sound like semantics, but I think the difference can be made clearer by considering situations involving conflict.
Suppose that Alice and Bob are in complete agreement about what the correct theory of ethics is. They are also in complete agreement on every question of fact (and wherever they are uncertain about a question of fact, they agree on how to model that uncertainty; e.g. maybe they are Bayesians with identical subjective probabilities for every conceivable proposition). This does not imply that they will act identically, because they may still have different capacities. As you point out, Alice might have a greater capacity to help the child drowning in front of her than Bob does, and so any sensible theory will tell her, rather than Bob, to do it. But still, there is an important difference between the case where they are consequentialists and the case where they are not.
If Alice and Bob subscribe to a consequentialist theory of ethics, then there can be no conflict between them. If Alice realises that saving the child would interfere with Bob’s pursuit of donating, or vice versa, they should be able to work this conflict out between them and agree on a way of coordinating to achieve the best outcome, as judged by their shared ultimate aims. This is possible because their shared aims are agent-neutral.
But if Alice and Bob subscribe to a non-consequentialist theory (e.g. one that says we should give priority to our own family) then it is still possible for them to end up in conflict with one another, despite being in complete agreement on the answer to every normative and empirical question. For example, they might each pursue an outcome which is best for their respective families, and this may involve competing over the same resources.
If I recall correctly, Parfit examines this difference around conflicts in detail in ‘Reasons and Persons’. He considers the particular case of prisoner’s-dilemma style conflicts (where each party acting in their own interests leaves both worse off than if they had cooperated) and claims this gives a decisive argument against non-consequentialist theories that do not at least switch to more agent-neutral recommendations in such circumstances (and he argues this includes ‘common-sense morality’).
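To make that structure concrete, here is a purely illustrative payoff matrix (the specific numbers are mine, not Parfit’s). Suppose Alice and Bob each choose to Share or Grab a scarce resource, and each pair gives the benefit to (Alice’s family, Bob’s family):

                Bob: Share   Bob: Grab
  Alice: Share    (3, 3)       (0, 4)
  Alice: Grab     (4, 0)       (1, 1)

Grabbing yields more for one’s own family whatever the other does, so two agent-relative theorists each grab and end up at (1, 1), worse for both families than mutual sharing at (3, 3). They can each follow their shared theory flawlessly and still reach an outcome that is worse by both of their own lights, which is exactly the kind of case Parfit takes to push such theories toward agent-neutral recommendations.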