I think EAs should care more about debates around which ethical theory is true and why. The EA community is really invested in problems of applied consequentialist ethics such as “how should we think about low-probability risks of high, or even infinite, magnitude”, “how should we discount future utility”, “ought the magnitudes of positive and negative utility be weighed equally”, etc.[1] The answers to questions in population ethics and other applied ethical areas, which are largely accepted by EAs as important questions, turn on what the rest of your ethical stack looks like (ie on your metaethics and normative ethics). You can’t determine how many points you get for scoring without resolving whether you’re playing basketball or football. EA needs more clarity on the most foundational questions in order to answer downstream ones. The answer to “how should we discount future utility” depends on what you think we “should” do generally.
This is not even just a question of consequentialism vs Kantianism: whether you start from a Nagelian, Parfitian, or Millian altruism will probably have implications for your takes on practical questions of which causes to actually prioritize. EAs should also be open to the possibility that consequentialism is false, and so understanding the best arguments for consequentialism might highlight the relative attractiveness of competing ethical theories.
In this post, I will critique what I identify as central EA arguments for consequentialism in the hope of spurring greater debate among the EA community as to which normative theory is correct and therefore ought to be used to prioritize causes.
Towards the end of the post, I will also sketch a way for EAs to combine what I see as the fundamental thesis of effective altruism with non-consequentialist moral theories, allowing consequentialist apostates to remain in some way committed to the EA project.
Using MacAskill’s 80,000 Hours Podcast Appearance as Source Material
In order to evaluate the best EA arguments for consequentialism, I’m going to focus on Will MacAskill’s case for consequentialism on the 80,000 Hours podcast. I like this as source material because MacAskill is as close as you can get to a canonical EA philosopher, and the 80,000 Hours podcast is one of the most popular EA media outlets, meaning that MacAskill’s arguments on the podcast will probably reflect the broad EA understanding of the philosophical case for consequentialism. The podcast is also conveniently succinct, making it much easier to efficiently address the relevant arguments than it would be to review the entirety of a philosophy book. Furthermore, it seems more charitable to only attribute philosophical views to EA in particular if they have been formally adopted by people in the community – just reviewing Mill’s Utilitarianism, a book written over a century before the term “EA” was coined, could therefore misfire as a criticism of EA. Lastly, part of the point of this post is not just to criticize the philosophical foundations of EA but also to criticize how EAs communicate about their philosophical views, making an EA philosopher’s podcast appearance an especially good subject to focus on.
This approach obviously comes with its own limitations. Though I think MacAskill is responsible for the outlines of the arguments he gives, it would be silly to criticize specific word choice or the details of the justifications of steps in his arguments given the time and preparation constraints of the format. I will interpret MacAskill to be fundamentally gesturing towards more complete arguments instead of giving complete arguments himself, meaning that some reconstruction and inference will be required on my part.
MacAskill’s Track Record Argument
Diving right in then, here’s how MacAskill begins his defense of utilitarianism on Wiblin’s podcast:
Robert Wiblin: Alright, straight out. What are the arguments for classical utilitarianism?
Will MacAskill: I think there’s at least half a dozen that are very strong.
Robert Wiblin: They don’t all have to work then.
Will MacAskill: True, yeah.
Robert Wiblin: Got a bunch of options.
Will MacAskill: One that I think doesn’t often get talked about, but I think actually is very compelling is the track record. When you look at scientific theories, how you decide whether they’re good or not, well significant part by the predictions they made. We can do that to some extent, got much smaller sample size, you can do it to some extent with moral theories as well. For example, we can look at what the predictions, the bold claims that were going against common sense at the time, that Bentham and Mill made. Compare it to the predictions, bold moral claims, that Kant made.
When you look at Bentham and Mill they were extremely progressive. They campaigned and argued for women’s right to vote and the importance of women getting a good education. They were very positive on sexual liberal attitudes. In fact, some of Bentham’s writings on the topic were so controversial that they weren’t even published 200 years later.
Robert Wiblin: I think, Bentham thought that homosexuality was fine. At the time he’s basically the only person who thought this.
Will MacAskill: Yeah. Absolutely. Yeah. He’s far ahead of his time on that.
Also, with respect to animal welfare as well. Progressive even with respect to now. Both Bentham and Mill emphasized greatly the importance of treating animal… They weren’t perfect. Mill and Bentham’s views on colonialism, completely distasteful. Completely distasteful from perspective for the day.
Robert Wiblin: But they were against slavery, right?
Will MacAskill: My understanding is yeah. They did have pretty regressive attitudes towards colonialism judged from today. It was common at the time. That was not something on the right side of history.
Robert Wiblin: Yeah. Mill actually worked in the colonial office for India, right?
Will MacAskill: That’s right, yeah.
Robert Wiblin: And he thought it was fine.
Will MacAskill: Yeah, that’s right.
Robert Wiblin: Not so great. That’s not a winner there.
Will MacAskill: Yeah. I don’t think he defended it at length, but in casual conversations thought it was fine.
Contrast that with Kant. Here are some of the views that Kant believed. One was that suicide was wrong. One was that masturbation was even more wrong than suicide. Another was that organ donation is impermissible, and even that cutting your hair off to give it to someone else is not without some degree of moral error.
Robert Wiblin: Not an issue that we’re terribly troubled by today.
Will MacAskill: Exactly, not really the thing that you would stake a lot of moral credit on.
He thought that women have no place in civil society. He thought that illegitimate children, it was permissible to kill them. He thought that there was a ranking in the moral worth of different races, with, unsurprisingly, white people at the top. Then, I think, Asians, then Africans and Native Americans.
Robert Wiblin: He was white, right?
Will MacAskill: Yes. What a coincidence.
Robert Wiblin: Fortunate coincidence I suppose for him.
Will MacAskill: I don’t want this to be a pure ad-hominem attack on Kant because there’s an underlying lesson to this which is when we look at a history of moral thought and we look at all the abominable things that people have believed and even felt very strongly about, we should think, “Well it’d be extremely unlikely if we’re not in the same circumstance.” We probably as common sense believe lots of truly abominable things. That means that if we have a moral view that’s all about catering to our common sense intuitions we’re probably just enshrining these biases and these moral errors.
What we want to have instead is a moral view that criticizes common sense so that we can move beyond it. Then when you look at how utilitarianism has fared historically it seems to have done that in general, not always, but in general done that very well. That suggests that that progress might continue into the future.
Robert Wiblin: And that in as much as the conclusions are surprising to us now, well the conclusions from the past were surprising people in the past, but we agree with them now. So, we shouldn’t be too surprised.
Should We Care About Track Records?
MacAskill says the track records of moral theories have implications for their plausibility. Taken literally, this approach seems to misfire insofar as moral theories do not make predictions about what the future will value but about what ought to be valued. It should not count against a moral theory if its conclusions do not become widely accepted.
I think what MacAskill is getting at here, though, is just how well the revisionist conclusions of a theory align with our current moral intuitions – not because it is important that moral theories exert causal power over our beliefs, but because our current moral intuitions contain insight. However, what MacAskill says towards the end of the argument seems to undermine the viability of this approach. If some of our intuitions today are objectively abominable, then how can we judge past theories’ conclusions based on our intuitions? How should we know in any given case whether we should be revising our intuitions in light of the views of others or rejecting a theory because it conflicts with our intuitions? Without a substantive moral framework, who’s to say whether it was people hundreds of years ago or animal rights activists today with the bad views on animal welfare?
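To make my worry concrete, here is one way (my formalization, not MacAskill’s) to cash out the track-record argument as a Bayesian update, in a minimal Python sketch with made-up numbers. The argument treats the observation E, “the theory’s bold, counterintuitive claims later became widely accepted”, as evidence for the theory:

```python
# Toy Bayesian reading of the track-record argument. All numbers are
# hypothetical; the point is the structure of the inference.

prior = 0.5  # P(utilitarianism is true), before looking at track record

# The argument needs a true theory to be much likelier than a false one
# to produce bold claims that later get vindicated.
p_E_if_true = 0.6
p_E_if_false = 0.2

# Standard Bayes: P(T | E) = P(E | T) * P(T) / P(E)
p_E = p_E_if_true * prior + p_E_if_false * (1 - prior)
posterior = p_E_if_true * prior / p_E
print(f"P(utilitarianism | vindicated bold claims) = {posterior:.2f}")  # 0.75
```

Laying it out this way shows where my objection bites: the likelihoods only make sense if “later became widely accepted” actually tracks moral truth, that is, if our current intuitions are reliable, which is exactly what the argument elsewhere denies.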
Against Moral Authorities
MacAskill also seems to apply his method for judging track records not to the conclusions of ethical theories but to the beliefs of the theories’ originators. This feature of the approach seems problematic. Newton had wacky beliefs about alchemy and the occult. Should we reject Newtonian mechanics because of his views on alchemy? If I could identify contemporaries of his who rejected alchemy but believed in Aristotelian physics, would that increase your credence in Aristotelian physics? Or should the inference go the other way, and should we increase our credence in alchemy in light of Newton’s predictions about physics?
If you are concerned about Kant’s views on race or gender, why not evaluate similar arguments as made by a different author? Christine Korsgaard, a leading Kantian philosopher, is a woman who believes in gender equality and is a practicing vegan. Indeed, where Korsgaard believes in ethical veganism, Bentham thought killing animals for food was permissible. Does that count as an argument against consequentialism? It is easy to separate moral theories from the beliefs of their proponents, and we should do so.
Overall, I don’t think we should worry too much about the views of philosophers when judging their theories, and it seems unsatisfying to try to haphazardly compare moral theories against our (my? effective altruists’? America’s? the world’s?) intuitions, especially when we acknowledge that some of our intuitions are probably grossly incorrect. Let’s be bold and develop substantive theories of what we ought to do!
Harsanyi’s Veil of Ignorance Argument
MacAskill’s subsequent arguments are considerably stronger and more philosophically interesting:
Robert Wiblin: Okay. That was argument 1 of 6. I might have to keep you to 3 so we can finish today.
Will MacAskill: Yeah.
Robert Wiblin: What are the other best 2 arguments for utilitarianism?
Will MacAskill: The other best 2 I think are, one is Harsanyi’s Veil of Ignorance argument. The second is the argument that moves from rejecting the notion of personhood. We can go into the first one, Harsanyi’s Veil of Ignorance. John Harsanyi was an economist but also a philosopher. He suggested the following thought experiment: Morality’s about being impartial. It’s about taking a perspective that’s beyond just your own personal perspective, somehow from the point of view of everyone, or society, or point of view of the universe.
The way he made that more precise is by saying, “Assume you didn’t know who you were going to be in society. Assume you had an equal chance of being anyone. Assume, now, that you’re trying to act in a rational self-interested way. You’re just trying to do whatever’s best for yourself. How would you structure society? What’s the principle that you would use in order to decide how people do things as this perspective of the social planner.” He proved that if you’re using expected utility theory, which we said in the past earlier is really well justified as a view of how to make decisions under empirical uncertainty, and you’re making this decision, the rule you’ll come to is utilitarianism. You’ll try and maximize the welfare of everyone, of the sum total of welfare in society.
Robert Wiblin: Because you care about each of those people equally because you could be each of them with equal probability.
Will MacAskill: Exactly. That’s right.
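Before turning to objections, it may help to see the formal point MacAskill is referencing. Here is a minimal Python sketch, with made-up welfare numbers, of why an expected-utility maximizer behind the veil ranks societies exactly as a total-welfare utilitarian does: with an equal chance of being anyone, expected utility is just total welfare divided by a constant population size.

```python
# Minimal sketch of Harsanyi's veil-of-ignorance result.
# Each "society" is a list of welfare levels, one per person.
# The welfare numbers are purely illustrative.

societies = {
    "A": [5, 5, 5],    # equal, modest welfare (total 15)
    "B": [9, 4, 3],    # unequal, higher total (total 16)
    "C": [10, 1, 1],   # very unequal, lower total (total 12)
}

def expected_utility(welfares):
    """Expected welfare for a chooser with an equal chance of
    being each person (a uniform lottery over identities)."""
    return sum(welfares) / len(welfares)

def total_welfare(welfares):
    """The classical-utilitarian score: the sum of welfare."""
    return sum(welfares)

# With a fixed population size n, E[U] = total / n, so the two
# rankings necessarily coincide.
by_expected = sorted(societies, key=lambda s: expected_utility(societies[s]), reverse=True)
by_total = sorted(societies, key=lambda s: total_welfare(societies[s]), reverse=True)
assert by_expected == by_total
print(by_expected)  # ['B', 'A', 'C'], the utilitarian ranking
```

Note how much the expected-utility apparatus is doing here: a chooser who was risk-averse rather than risk-neutral behind the veil – one maximizing the worst case, as Rawls argued – would pick society A instead. That sensitivity to the decision rule is worth keeping in mind for what follows.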
Did Harsanyi Load the Deck?
MacAskill mentions that Harsanyi was an economist, which I think is relevant. Doesn’t Harsanyi’s leading question (“How would you structure society?”) already seem kinda off from the moral project? Morality asks “what should I do” tout court. Harsanyi wants to motivate intuitions around how society should be structured and, implicitly, how resources should be distributed. But, individuals don’t choose how resources are globally distributed or how society is structured. We can impact how society is structured and how resources are distributed through our actions, but the set of questions we can ask about how society ought to be structured and the set of questions we can ask about how we should act do not seem to be subsets of each other.
At first blush, this discrepancy might merely look like a cute idiosyncrasy of an interdisciplinary thinker, but the way Harsanyi frames the question ultimately makes the argument circular. In arguing for a particular moral theory by answering the prompt “how would someone from behind the veil of ignorance choose to distribute utility among people”, Harsanyi assumes that the correct ethical theory is, or ought to be, primarily concerned with societal welfare. But, it is not obvious that Kantianism or virtue ethics or contractarianism or any other moral theory would describe the central question of morality this way. MacAskill says that using expected utility theory makes sense when you’re operating under empirical uncertainty. But non-consequentialist ethical theories don’t face empirical uncertainty in this way. The Kantian does not have to probabilistically weigh whether their action is using someone as a mere means or not. By presupposing that morality is concerned with empirical results like distributions of welfare, Harsanyi unjustifiably limits the range of viable moral theories to consequentialist ones, and then argues for utilitarianism (presumably against competing consequentialist theories like egoism) on the grounds of impartiality.
To see how much Harsanyi has stacked the deck in favor of consequentialism, imagine the same thought experiment described slightly differently. “Suppose you were behind the veil of ignorance and did not know who you would be in society. How would you structure society? Of course societies are concerned with what rights and liabilities people have, so what rights would you enshrine in the world constitution? Given that you don’t know who you are, you would most likely opt for rights that treated everyone as equals. Because you might end up in a minority group, you would most likely choose rules that preclude the possibility of undue burdens on minority groups.” This seems like an equally plausible version of Harsanyi’s argument, but the line of reasoning here would suggest a more rights-based approach to moral philosophy in virtue of the initial framing.
Imagine a more dramatically different leading question. Suppose I argued “morality determines what kind of a person we will be. Therefore, the central question of morality is what virtues should you inculcate? Given that between any pair of extreme character traits lies an optimal mean, you would prescribe courage over foolhardiness or cowardice, justice over mercy or retribution, temperance over asceticism or indulgence, and so on”. Obviously when we start with “what virtues should you inculcate”, it’s not going to be such a stretch to get to virtue ethics as the correct ethical theory.
Each of the starting questions I’ve imagined clearly loads the deck in terms of the kinds of answers that are conceptually viable. And, you can’t just say that we should use each of the different moral theories to answer their respective leading questions (ie utilitarianism for welfare distributions, Kantianism for constitution writing, and virtue ethics for character formation) because each of those leading questions describes the same situation from a different perspective. One person’s welfare maximization problem is another’s rights adjudication problem and another’s test of character. One of the goals of moral philosophy is to determine which of those lenses is the best to view our moral dilemmas through, meaning Harsanyi’s argument, in presupposing an answer to that question, cannot help us conclude which moral theory is correct.
Argument from Parfitian Personhood
MacAskill’s third argument for consequentialism is by far the most interesting.
Will MacAskill: Right. The third argument is… I should say in all of these cases there’s more work you need to do to show that this leads exactly to utilitarianism. What I’m saying, these are the first steps in that direction.
The third argument is rejecting the idea of personhood, or at least rejecting the idea that who is a person, and the distinction between persons is morally irrelevant. The key thing that utilitarianism does is say that the trade offs you make within a life are the same as the trade offs that you ought to make across lives. I will go to the dentist in order to have a nicer set of teeth, inflicting a harm upon myself, because I don’t enjoy the dentist, let’s say, in order to have a milder benefit over the rest of my life. You wouldn’t say you should inflict the harm of going to the dentist on one person intuitively in order to provide the benefit of having a nicer set of teeth to some other person. That seems weird intuitively.
Robert Wiblin: It would be a very weird dental office.
Will MacAskill: It would be a weird dental office.
Robert Wiblin: Setting that aside.
Will MacAskill: Setting that aside, yeah. Now suppose that we reject the idea that there is a fundamental difference between me now and you now, whereas there’s not a fundamental difference between me now and me age 70. Instead, maybe it’s just a matter of degree, or maybe it’s just the fact that I happen to have a bundle of conscious experiences that is more interrelated in various ways by memory and foresight than this bundle is with you. There are certain philosophical arguments you can give for that conclusion. One of which is what get called fission cases.
Imagine that you’re in a car accident with 2 of your siblings. In this car accident your body is completely destroyed, and the brains of your 2 siblings are completely destroyed, but they still have functioning bodies, are preserved. As you’ll see, this is a very philosophical thought experiment.
Robert Wiblin: One day maybe we can do this.
Will MacAskill: Maybe. Finally, let’s also suppose that it’s possible to take someone’s brain and split it in 2, and implant it into 2 other people’s skulls such that the brain will grow back fully and will have all the same memories as that first person did originally. In the same way I think it’s the case that you can split up a liver and the 2 separate livers will grow back, or you can split up an earthworm – I don’t know if this is true – split up an earth worm and they’ll both wiggle off.
Robert Wiblin: Maybe you could.
Will MacAskill: Maybe you could. You’ve got to imagine these somewhat outlandish possibilities, but that’s okay because we’re illustrating a philosophical point. Now you’ve got these 2 bodies that wake up and have all the same memories of you. From their perspective they were just in this car crash and then woke up in a different… The question is, who’s you? Supposing we think there’s this Cartesian soul that exists within one of us, the question would be into which body does the soul go? Or, even if you don’t think there’s a soul but you think, no, there’s something really fundamental about me. Who’s the me?
There’s 4 possible answers. One is that it’s one sibling. Second is it’s the other sibling. Third is it’s both. Fourth is it’s neither. It couldn’t be one brother or one sibling over the other because there’s a parity argument. Any argument you give for saying it’s the youngest sibling would also give an argument to the oldest sibling. That can’t be the case. It can’t be that it’s both people because, well, now I’ve got this person that consists of 2 other entities walking around? That seems very absurd indeed. It can’t be neither either.
Now imagine the case where you’re in a car crash and your brain just gets transplanted to one person. Then you would think, well, we continue. I was in this terrible car crash, I woke up with a different body, but it’s still me. I still have all the same memories. But, if it’s the case that I can survive in the case of my brain being transplanted into one other person, surely I can survive if my brain is transplanted into 2 people. It would seem weird that a double win, double success, is actually a failure.
And so, tons more philosophical argument goes into this. The conclusion that Derek Parfit ultimately makes is, there’s just no fact of the matter here. This actually shows that what we think of as this continued personal identity over time is just a kind of fiction. It’s like saying when the French Socialist party split into two, are there now two? Which one is really the French Socialist party? This is just a meaningless question.
Robert Wiblin: What’s actually going on is that there are different parties, and some of them are more similar than others.
Will MacAskill: Exactly. That’s right. But, once you reject this idea that there’s any fundamental moral difference between persons, then the fact that it’s permissible for me to make a trade off where I inflict harm on myself now, or benefit myself now in order to perhaps harm Will age 70… Let’s suppose that that’s actually good for me overall. Well, I should make just the same trade offs within my own life as I make across lives. It would be okay to harm one person to benefit others. If you grant that, then, you end up with something that’s starting to look pretty similar to utilitarianism.
Robert Wiblin: Okay, so the basic idea is we have strong reasons to think that identity doesn’t exist in the way that we instinctively think it does, that in fact it’s just a continuum.
Will MacAskill: Mm-hmm (affirmative).
Robert Wiblin: This is exactly what utilitarianism always thought and was acting as though it was true.
Will MacAskill: Yes.
Robert Wiblin: But for deontological theories or virtue ethics theories, they really need a sense of identity and personhood to make sense to begin with.
Will MacAskill: That’s right. Another way of putting it is most non-utilitarian views require there to be personhood as a fundamental moral concept. If you think that concept is illusory, and there seem to be these arguments to show that it is illusory, you have to reject those moral views. It would be like saying we’re trying to do physics, but then denying that electrons exist or something. You have to reject the underlying theory that relies on this fundamental concept.
Korsgaard on Parfit
The main objection I want to give to the Parfit argument comes from Korsgaard’s book Creating the Kingdom of Ends. Given the argument’s promise and prominence within the EA community, Parfit’s peers’ objections to the argument seem pretty underrated. At the very least, I think it would be valuable for people on the EA forum interested in Parfit to be exposed to Korsgaard’s response.[2]
Korsgaard lays out her strategy for responding to Parfit, writing
Suppose Parfit has established that there is no deep sense in which I am identical to the subject of experiences who will occupy my body in the future… I will argue that I nevertheless have reasons for regarding myself as the same rational agent as the one who will occupy my body in the future. These reasons are not metaphysical, but practical. (369)
Note the different ways she describes her project and Parfit’s. Korsgaard describes Parfit as showing “there is no deep sense in which” personal identity exists. The impersonal way this is written gets at Parfit’s metaphysical approach to identity: he is interested in whether or not personhood is a “real” feature of the universe. As MacAskill puts it, Parfit is trying to show that there are no “facts of the matter” when it comes to identity. Korsgaard, on the other hand, describes her project as a practical one: “I nevertheless have reasons for regarding myself...”. For Korsgaard, the problem of personhood is not whether persons objectively exist as a category in the universe – it is not a question of facts of the matter – but of how we should practically conceive of ourselves and others.
Because of the logical gap between descriptive and normative statements, it could be true that Parfit is right about distinct persons not existing in some metaphysical sense but that we still ought to act as if they do. So, the question becomes whether Korsgaard can meet her own bar of showing that as agents, we ought to conceive of the agents who will occupy our bodies in the future as the same person as us and as distinct from others. In order to show how our agency necessitates viewing ourselves as a unified person over time, Korsgaard begins by arguing that it is our agency that unifies our experience of the world at any single moment in time.
She prompts the line of reasoning by asking
To see this, first set aside the problem of identity over time, and think about the problem of identity at any given time. Why do you think of yourself as one person now? This problem should seem especially pressing if Parfit has convinced you that you are not unified by a Cartesian Ego… you have loves, interests, ambitions… What makes you one person even at one time? (369)
and then continues
Your conception of yourself as a unified agent is not based on a metaphysical theory, nor on a unity of which you are conscious. Its grounds are practical… there is the raw necessity of eliminating conflict among your various motives. Like parties in Rawls’ original position, they must come to a unanimous decision somehow. You are a unified person at any given time because you must act, and you have only one body with which to act.… It may be that what actually happens when you make a choice is that the strongest of your conflicting desires wins. But that is not the way you think of it when you deliberate… it is as if there were something over and above all your desires, something that is you, and that chooses which one to act on. (369-370)
As the agent in charge of my body, I have to choose what it will do in any given moment. But, I cannot do so without resolving the conflict between my different thoughts and feelings about the world that pull me in all sorts of different directions. Because my body cannot pursue contradictory ends – I cannot shirk responsibility and embrace it – my agential self has to unify these conflicting proposals into a single course of action for the whole body. I have to conceive of myself as a deliberator for whom these different experiences are all considerations that resolve in a certain action. In this way, our agency requires seeing ourselves as unified selves at any moment in time. But, this does not imply that I have to identify or unify myself with my body at other points in time, as the person going to the dentist in MacAskill’s example might not. The task remains for Korsgaard to show that agency requires that I conceive of myself as a unified self over time as well.
She then argues:
In choosing our careers, and pursuing our friendships and family lives, we both presuppose and construct a continuity of identity and of agency… In order to carry out a rational plan of life, you need to be one continuing person. You normally think you lead one continuing life because you are one person, but according to this argument the truth is the reverse. You are one continuing person because you have one life to lead. (372)
All humans are tasked with deliberating about what to do, and this process necessarily projects our thinking into the future. Though we may act in moments of time, we do not only act in light of considerations at that moment. As an agent, I do not choose merely to lower one knee slightly to the ground: I choose to propose to my love because of the future I envision with them. The kinds of questions that are posed to us as agents are not just momentary movements like how many millimeters to move a limb but larger questions of what we find meaningful and worth pursuing – and these questions pull us into the future. Because agency requires we act on what we want our future to be like, we have to conceive of the agent in our body in the future as us. It is not a metaphysical fact about our continuity that compels us to care about what happens to our body in the future – it is our innate concern for our future that creates the need for a persistent identity over time. Therefore, Korsgaard concludes, personhood is an inescapable component of our normative landscape.[3]
Both the Harsanyi and Parfit arguments – and my responses – get at two different perspectives from which to understand ethics. I keep coming back to the distinction between thinking impersonally vs practically. This tension suggests what I think is the best overall argument against consequentialism, the topic I’ll turn my attention to now. Even if the specific problems I pointed out with these arguments don’t land, I think there’s a more fundamental issue at play – not an issue of the logical validity of any given argument, but of the foundational assumptions of many consequentialist arguments.
Best Argument Against Consequentialism
MacAskill ends the section of the podcast by describing what he thinks is the best argument against consequentialism:
Robert Wiblin: Okay. Those are your 3 best arguments for utilitarianism. What are the best arguments against it?
Will MacAskill: The best arguments against are how it conflicts with common sense intuitions. Sometimes you get utilitarian apologists who try to argue… Henry Sidgwick was like this. Try to argue that, actually, utilitarianism doesn’t differ so much from common sense at all. I think that’s badly wrong. I think you can come up with all sorts of elaborate thought experiments, like, what if you can kill 1 person to save 5, and there’s no other consequences. You’ll get away, and so on.
I think you should take those thought experiments seriously, and they do just conflict with common sense. I think it also conflicts in practice as well. In particular on the beneficent side where most people think it’s not obligatory to spend money on yourself. They think that’s fine.
Robert Wiblin: But it’s not prohibited.
Will MacAskill: Yeah, that’s right, sorry. It’s not prohibited to spend money on yourself. Whereas, utilitarianism says, “No, you have very strong obligations, given the situation you’re in at the moment, at least if you’re an affluent member of a rich country, to do as much good as you can, basically.”
Robert Wiblin: Which may well involve giving away a lot of your money.
Will MacAskill: A lot of your money, or dedicating your career to doing as much good as possible. Yeah, it’s a very demanding moral view. That’s quite strictly in disagreement with common sense, even more so when you think about you’re doing this to improve the lives of distant future people, and so on.
Philosophically, I don’t find this objection very persuasive. At the very least, it seems to presuppose that ethics should not be revisionist, which is not a very interesting starting place for a movement committed to revising our understanding of our ethical obligations. EA does not purport to leave everything the way it was. To get people to donate their kidneys to strangers, or to give large portions of their income to people far away, or to care about seemingly remote long-term issues, is a radical project. To suggest to EAs that their moral views are sometimes incompatible with common sense intuitions is probably not so threatening to them. Arguing that consequentialism is unintuitive is not well-suited to steelmanning non-consequentialist views to the EA community given their prior intellectual and moral commitments.
Instead of focusing on a “bottom-up” problem with utilitarianism, I’d urge utilitarians to look more deeply at their foundations. One place Kantians excel is in their metaethics. The Kantian project, at a high level, identifies the fundamental moral question as a first-personal one: what should I do? Kantians navigate the many hazards of metaethics by locating what we ought to do in the kinds of things we are. Just as a senator, in virtue of who they are and the role they occupy, ought to draft a bill to make something happen instead of issuing a proclamation, Kantians try to derive how we ought to act from an analysis of our role as an agent. All that we can choose in light of moral considerations, after all, are the actions we take.
By contrast, many consequentialist theories seem to start from an impersonal perspective, asserting the objective value of utility and the goodness of there being more of it. The persistent passive voice in these kinds of arguments highlights the gap between the “goodness” of more utility and what considerations I should take into account when I choose what to do. Perhaps it would be better, in some sense, for you to win our chess match – maybe an underdog victory is more inspiring or interesting or beautiful than the continued success of a champion. Does that imply that as a player, the best move for me is to knock my king over and resign instead of moving my queen to achieve checkmate? Not obviously. Similarly, as commonly defended, consequentialism fails to overcome this gap between describing how the world objectively should be (in some sense) and how I as an individual ought to act.
The best argument against consequentialism then is just that it is confused about what morality is. Morality is not an objective attribute that inheres in states of affairs. Morality is at its core a guide for individuals to choose what to do. Insofar as a consequentialist theory is not rooted in the subjective experience of deliberation, of an individual trying to make sense of what they ought to do, it will not be answering the fundamental questions of morality.
This argument isn’t lethal for consequentialism because you can conceivably derive a consequentialist ethic from the nature of an individual moral agent. Nagel in his book The Possibility of Altruism tries something like this. But, this objection is problematic for many arguments for consequentialism and for the ways in which many consequentialists conceive of morality.
I would encourage consequentialist EAs interested in philosophy to take the individual deliberator as the primary locus of moral concern. Perhaps it will come out that what the individual should care about is the amount of welfare in the universe, but I’d think the exact implications of such a view would be somewhat incongruous with those of more typical consequentialist theories.
Persuading Non-Consequentialists On Their Own Terms
Thus far, I’ve argued that EAs should care more about the specific arguments for consequentialism and have rebutted some possible arguments. In doing so, I hope to have demonstrated some deficiency in the state of (what I’d argue is) an important part of the EA intellectual stack and hopefully will have encouraged more philosophically-inclined EAs to work on those foundations. In that spirit of trying to redirect EA philosophical resources, I’d also like to suggest that EAs work on arguing for “the numbers counting” for non-consequentialist ethical theories.
To me, the fundamental thesis of EA is something like “when we are being altruistic, ‘the numbers matter’, and we should do more good rather than less”. This thesis is not obviously incompatible with non-consequentialist theories that permit or demand some kind of altruism (eg the Kantian duty of beneficence or the virtue of charity). Though these theories ground altruism differently from how consequentialism does, they still make room for the importance of altruism in the good life.
As someone pretty sympathetic to the Kantian ethical picture, I’ve long been bothered by the under-theorized Kantian duty of beneficence. Many EAs might not even realize that Kant identified a proactive duty to help others, though the duty of beneficence sometimes feels like a cop-out that is just shoved into the Kantian system to satisfy our common sense intuition that we should broadly try to help others. Given the dearth of good arguments on the duty of beneficence, there’s an opportunity for an entrepreneurial, philosophically-inclined EA to argue that even the most committed Kantian should be giving to GiveWell on their own terms. Consequentialism’s main competitors already recognize some kind of altruistic requirement and also have not really developed a proprietary theory of what that entails. EAs should be explaining why an indirect duty to be altruistic derived from the categorical imperative would obligate us to care about the efficacy and impact of our altruism – which doesn’t seem like such a crazy stretch!
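To illustrate the shape of the claim, here is a toy Python sketch of what “the numbers counting” could look like inside a duty of beneficence. The charities and cost-effectiveness figures are entirely hypothetical; the point is only that the duty can fix that we give while the numbers inform where:

```python
# Toy sketch: "the numbers count" even under a non-consequentialist
# duty of beneficence. All names and figures are hypothetical.

donation_budget = 1000  # dollars the duty of beneficence commits us to give

# Hypothetical beneficiaries helped per dollar donated.
options = {
    "university_endowment": 0.001,
    "bednet_distribution": 0.010,
    "direct_cash_transfer": 0.005,
}

# The duty settles THAT we give; comparing impact settles WHERE.
best = max(options, key=options.get)
print(f"Give ${donation_budget} to {best}: "
      f"~{donation_budget * options[best]:.0f} people helped")
```

Nothing in this comparison requires treating aggregate welfare as the foundation of morality; it only requires that, once you have a reason to help, helping ten people is a better execution of that reason than helping one.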
Note that convincing a Kantian to be effective in their altruism does not necessarily collapse them into being a consequentialist. Actualizing the duty of beneficence by giving to GiveWell instead of the Harvard endowment does not imply that one should push the fat man to stop the runaway trolley (as consequentialism might require). I think this opens up a whole interesting world though. What does EA look like when it distinguishes foreseen and intended harms, as an EA Kantian might? I could imagine the EA calculus on non-meat animal products like dairy and eggs changing under such a theory, for example. These possibilities should be exhilarating to moral entrepreneurs excited about envisioning the future of ethics.
I want to highlight Korsgaard’s argument, but I think it’s worth also spelling out as an aside the seeming absurdity of how the Parfitian perspective would work. Imagine pitching someone on Parfit as a solution to a real-life trolley problem. Imagine you had to tell the one person why you were sacrificing them for the sake of the 5. “Well when you think about it, personal identity isn’t really real. You shouldn’t see such thick boundaries between yourself and the five. We’re all just loci of qualia that contribute to universal levels of utility. Isn’t it better for the universe (and therefore kind of for you) for me to kill ‘you’ to save the five? Don’t be too tied to your ‘personhood’, it’s a myth that distracts you from your connection to all the other loci of qualia. Ok, ready for the train to come crush you?” Like, what?! You sound like a bizarre cult leader trying to persuade someone of the irreality of their identity as you try to kill them.
And, note: this intuition pump doesn’t work against consequentialism in general. Different philosophical justifications for consequentialism can have different practical implications, which makes it all the more important for EAs to get the foundational questions right. You could totally imagine a different pitch based on a different foundation for consequentialism that would sound kind of sane: “This is a hard decision to make, but I have to throw the lever for the greater good. It’s an impossible sacrifice to demand of you, but what can I do? Let five others die?” Maybe this doesn’t resonate perfectly, but it sounds like it’s in the ballpark of reasonability. Part of the gap between the two scenarios might be that the latter acknowledges the individuality of the victim as having some distinctive value. As I argued earlier, moral philosophy should be willing to be revisionist, so just asserting “denying personal identity is unintuitive” is not a great argument, but I thought it would be worthwhile to tease that out a bit.
Does the Kantian sound any less crazy trying to justify themselves to the members of the five in this trolley problem? I kinda think so: “I’m so sorry, I just don’t think I can make another suffer to save you.” Surely each of the 5 would find that reasonable, as any of them could have found themselves on the other track. AND, importantly, I don’t have to justify myself to them “as the five” or even to the universe, which, according to consequentialism, is the victim of having lower total utility than it might otherwise have. Each individual in “the five” could understand that another individual is equal to them as an individual.
I’m very much abbreviating Korsgaard’s argument here, but I think the whole thing is really great. If you’re interested in Parfit, definitely check out the chapter “Personal identity and unity of agency” in her book.
On the Philosophical Foundations of EA
Intro
EAs Should Care More About Philosophical Ethics
I think EAs should care more about debates around which ethical theory is true and why. The EA community is really invested in problems of applied consequentialist ethics such as “how should we think about low probability / high, or infinite, magnitude risks”, “how should we discount future utility”, “ought the magnitudes of positive and negative utility be weighed equally”, etc.[1] The answers to questions in population ethics and other applied ethical areas, which are largely accepted by EAs as important questions, turn on what the rest of your ethical stack looks like (ie on your metaethics and normative ethics). You can’t determine how many points you get for scoring without resolving whether you’re playing basketball or football. EA needs more clarity on the most foundational questions in order to answer downstream questions. The answer to “how should we discount future utility” depends on what you think we “should” do generally.
This is not even just a question of consequentialism vs Kantianism: whether you start from a Nagelian altruism or Parfitian one or Millian one will probably have implications for your takes on practical questions of what causes to actually prioritize. EAs should also be open to the possibility that consequentialism is false, and so understanding the best argument for consequentialism might highlight the relative attractiveness of competing ethical theories.
In this post, I will critique what I identify as central EA arguments for consequentialism in the hope of spurring greater debate among the EA community as to which normative theory is correct and therefore ought to be used to prioritize causes.
Towards the end of the post, I will also sketch a way for EAs to combine what I see as the fundamental thesis of effective altruism with non-consequentialist moral theories, allowing consequentialist apostates to remain in someway committed to the EA project.
Using MacAskill’s 80,000 Hours Podcast Appearance as Source Material
In order to evaluate the best EA arguments for consequentialism, I’m going to focus on Will MacAskill’s case for consequentialism on the 80,000 hours podcast. I like this as source material because MacAskill is as close as you can get to a canonical EA philosopher, and the 80,000 hours podcast is one of the most popular EA media outlets, meaning that MacAskill’s arguments on the podcast will probably reflect the broad EA understanding of the philosophical case for consequentialism. The podcast is also conveniently succinct, making it much easier to efficiently address the relevant arguments than it would be to review the entirety of a philosophy book. Furthermore, it seems more charitable to only attribute philosophical views to EA in particular if they have been formally adopted by people in the community – just reviewing Mill’s Utilitarianism, a book written over a century before the term “EA” was coined, could therefore misfire as a criticism of EA. Lastly, part of the point of this post is not just to criticize the philosophical foundations of EA but also to criticize how EAs communicate about their philosophical views, making an EA philosopher’s podcast appearance an especially good subject to focus on.
This approach obviously comes with its own limitations. Though I think MacAskill is responsible for the outlines of the arguments he gives, it would be silly to criticize specific word choice or the details of the justifications of steps in his arguments given his space and preparatory constraints. I will interpret MacAskill to be fundamentally gesturing towards more complete arguments instead of giving complete arguments himself, meaning that some reconstruction and inference will be required on my part.
MacAskill’s Track Record Argument
Diving right in then, here’s how MacAskill begins his defense of utilitarianism on Wilbin’s podcast:
Should We Care About Track Records?
MacAskill says the track records of moral theories have implications for their plausibility. Taken literally, this approach seems to misfire insofar as moral theories do not make predictions about what the future will value but about what ought to be valued. It should not count against a moral theory if its conclusions do not become widely accepted.
I think what MacAskill is getting at here though is just how well the revisionist conclusions of a theory align with our current moral intuitions, not because it is important that moral theories exert causal power over our beliefs but because our current moral intuitions contain insight. However, what MacAskill says towards the end of the argument seems to undermine the viability of this approach. If some of our intuitions today are objectively abominable, then how can we judge past theories’ conclusions based on our intuitions? How should we know in any given case whether we should be revising our intuitions in light of the views of others or rejecting a theory because it conflicts with our intuitions? Without a substantive moral framework, who’s to say whether it was people hundreds of years ago or animal rights activists today with the bad views on animal welfare?
Against Moral Authorities
MacAskill also seems to apply his method for judging track records not to the conclusions of ethical theories but to the beliefs of the theories’ originators. This feature of the approach seems problematic. Newton had whacky beliefs about alchemy and the occult. Should we reject Newtonian mechanics because of his views on alchemy? If I could identify contemporaries of his who rejected alchemy but believed in Aristotelian physics, would that increase your credence in Aristotelian physics? Or should the inference go the other way, and should we increase our credence in alchemy in light of Newton’s predictions about physics?
If you are concerned about Kant’s views on race or gender, why not evaluate similar arguments as made by a different author? Christine Korsgaard, a leading Kantian philosopher, is a woman who believes in gender equality and is a practicing vegan. Indeed, where Korsgaard believes in ethical veganism, Bentham thought killing animals for food was permissible. Does that count as an argument against consequentialism? It is easy to separate moral theories from the beliefs of their proponents, and we should do so.
Overall, I don’t think we should worry too much about the views of philosophers when judging their theories, and it seems unsatisfying to try to haphazardly compare moral theories against our (my? effective altruists’? America’s? the world’s?) intuitions, especially when we acknowledge that some of our intuitions are probably grossly incorrect. Let’s be bold and develop substantive theories of what we ought to do!
Harsayni’s Veil of Ignorance Argument
MacAskill’s subsequent arguments are considerably stronger and more philosophically interesting:
Did Harsayni Load the Deck?
MacAskill mentions that Harsayni is an economist which I think is relevant. Doesn’t Harsayani’s leading question (“How would you structure society?”) already seem kinda off from the moral project? Morality asks “what should I do” tout court. Harsayni wants to motivate intuitions around how society should be structured and implicitly how resources should be distributed. But, individuals don’t choose how resources are globally distributed or how society is structured. We can impact how society is structured and how resources are distributed through our actions, but the set of questions we can ask about how society ought to be structured and the set of questions we can ask about how we should act do not seem to be subsets of each other.
At first blush, this discrepancy might merely look like a cute idiosyncrasy of an interdisciplinary thinker, but the way Harsayni frames the question ultimately makes the argument circular. In arguing for a particular moral theory by answering the prompt “how would someone from behind the veil of ignorance choose to distribute utility among people”, Harsayni assumes that the correct ethical theory is, or ought to be, primarily concerned with societal welfare. But, it is not obvious that Kantianism or virtue ethics or contractarianism or any other moral theory would describe the central question of morality this way. MacAskill says that using expected utility theory makes sense when you’re operating under empirical uncertainty. But non-consequentialist ethical theories don’t face empirical uncertainty in this way. The Kantian does not have to probabilistically weight whether their action is using someone as a mere means or not. By presupposing that morality is concerned with empirical results like distributions of welfare, Harsayni unjustifiably limits the range of viable moral theories to consequentialist ones, and then argues for utilitarianism (presumably against competing consequentialist theories like egoism) on the grounds of impartiality.
To see how much Harsayni has stacked the deck in favor of consequentialism, imagine the same thought experiment described slightly differently. “Suppose you were behind the veil of ignorance and did not know who you would be in society. How would you structure society? Of course societies are concerned with what rights and liabilities people have, so what rights would you enshrine in the world constitution? Given that you don’t know who you are, you would most likely opt for rights that treated everyone as equals. Because you might end up in a minority group, you would most likely choose rules that preclude the possibility of undue burdens on minority groups.” This seems like an analogously equally plausible version of Harsayni’s argument, but the line of reasoning here would suggest a more rights-based approach to moral philosophy in virtue of the initial framing.
Imagine a more dramatically different leading question. Suppose I argued “morality determines what kind of a person we will be. Therefore, the central question of morality is what virtues should you inculcate? Given that between any extreme personalities is an optimal one, you would prescribe courage over foolhardiness or cowardice, justice over mercy or retribution, temperance over asceticism or indulgence, and so on”. Obviously when we start with “what virtues should you inculcate”, it’s not going to be such a stretch to get to virtue ethics as the correct ethical theory.
Each of the starting questions I’ve imagined clearly load the deck in terms of the kinds of answers that are conceptually viable. And, you can’t just say that we should use each of the different moral theories to answer their respective leading questions (ie utilitarianism for welfare distributions, Kantianism for constitution writing, and virtue ethics for character formation) because each of those leading questions describes the same situation from a different perspective. One person’s welfare maximization problem is another’s rights adjudication problem and another’s test of character. One of the goals of moral philosophy is to determine which of those lenses is the best to view our moral dilemmas through, meaning Harsayni’s argument, in presupposing an answer to that question, cannot help us conclude which moral theory is correct.
Argument from Parfitian Personhood
MacAskill’s third argument for consequentialism is by far the most interesting.
Korsgaard on Parfit
The main objection I want to give to the Parfit argument comes from Korsgaard’s book Creating the Kingdom of Ends. Given the argument’s promise and prominence within the EA community, Parfit’s peers’ objections to the argument seem pretty underrated. At the very least, I think it would be valuable for people on the EA forum interested in Parfit to be exposed to Korsgaard’s response.[2]
Korsgaard lays out her strategy for responding to Parfit, writing
Note the different ways she describes her project and Parfit’s. Korsgaard describes Parfit as showing “there is no deep sense in which” personal identity exists. The impersonal way this is written gets at Parfit’s metaphysical approach to identity: he is interested in whether or not personhood is a “real” feature of the universe. As MacAskill puts it, Parfit is trying to show that there are no “facts of the matter” when it comes to identity. Korsgaard, on the other hand, describes her project as a practical one: “I nevertheless have reasons for regarding myself...”. For Korsgaard, the problem of personhood is not whether persons objectively exist as a category in the universe, it is not a question of facts of matter, but of how we should practically conceive of ourselves and others.
Because of the logical gap between descriptive and normative statements, it could be true that Parfit is right about distinct persons not existing in some metaphysical sense but that we still ought to act as if they do. So, the question becomes whether Korsgaard can meet her own bar of showing that as agents, we ought to conceive of the agents who will occupy our bodies in the future as the same person as us and as distinct from others. In order to show how our agency necessitates viewing ourselves as a unified person over time, Korsgaard begins by arguing that it is our agency that unifies our experience of the world at any single moment in time.
She prompts the line of reasoning by asking
and then continues
As the agent in charge of my body, I have to choose what it will do in any given moment. But, I cannot do so without resolving the conflict between my different thoughts and feelings about the world that pull me in all sorts of different directions. Because my body cannot pursue contradictory pursuits – I cannot shirk responsibility and embrace it – my agential self has to unify these conflicting proposals into a single course of action for the whole body. I have to conceive of myself as a deliberator for whom these different experiences are all considerations that resolve in a certain action. In this way, our agency requires seeing ourselves as unified selves at any moment in time. But, this does not imply that I have to identify or unify myself with my body at other points in time, as the person going to the dentist in MacAskill’s example might not. The task remains for Korsgaard to show that agency requires that I conceive of myself as a unified self over time as well.
She then argues:
All humans are tasked with deliberating about what to do and this process necessarily projects our thinking into the future. Though we may act in moments of time, we do not only act in light of considerations at that moment. As an agent, I do not choose to merely slightly lower one knee to the ground: I choose to propose to my love because of the future I envision with them. The kinds of questions that are posed to us as agents are not just momentary movements like how many millimeters to move a limb but larger questions of what we find meaningful and worth pursuing – and these question pull us into the future. Because agency requires we act on what we want our future to be like, we have to conceive of the agent in our body in the future as us. It is not a metaphysical fact about our continuity that compels us to care about what happens to our body in the future – it is our innate concern for our future that creates the need for a persistent identity over time. Therefore, Korsgaard concludes, personhood is an inescapable component of our normative landscape.[3]
Both the Harsanyi and Parfit arguments – and my responses to them – get at two different perspectives from which to understand ethics. I keep coming back to the distinction between thinking impersonally vs practically. This tension suggests what I think is the best overall argument against consequentialism and the topic I’ll turn my attention to now. Even if the specific problems I pointed out with these arguments don’t land, I think there’s a more fundamental issue at play: not an issue of the logical validity of any given argument, but of the foundational assumptions of many consequentialist arguments.
Best Argument Against Consequentialism
MacAskill ends the section of the podcast by describing what he thinks is the best argument against consequentialism:
Philosophically, I don’t find this objection very persuasive. Or at least, it seems to presuppose that ethics should not be revisionist, which is not a very interesting starting place for a movement committed to revising our understanding of our ethical obligations. EA does not purport to leave everything the way it was. To get people to donate their kidneys to strangers, or to give large portions of their income to people far away, or to care about seemingly remote longterm issues, is a radical project. To suggest to EAs that their moral views are sometimes incompatible with common sense intuitions is probably not so threatening to them. Arguing that consequentialism is unintuitive is therefore not well-suited to steelmanning non-consequentialist views for the EA community, given their prior intellectual and moral commitments.
Instead of focusing on a “bottom-up” problem with utilitarianism, I’d urge utilitarians to look more deeply at their foundations. One place Kantians excel is in their metaethics. The Kantian project, at a high level, identifies the fundamental moral question as a first-personal one: what should I do? Kantians navigate the many hazards of metaethics by locating what we ought to do in the kinds of things we are. Just as a senator, in virtue of who they are and the role they occupy, ought to draft a bill to make something happen instead of issuing a proclamation, Kantians try to derive how we ought to act from an analysis of our role as agents. All that we can choose in light of moral considerations, after all, are the actions we take.
By contrast, many consequentialist theories seem to start from an impersonal perspective, asserting the objective value of utility and the goodness of there being more of it. The persistent passive voice in these kinds of arguments highlights the gap between the “goodness” of more utility and the considerations I should take into account when I choose what to do. Perhaps it would be better, in some sense, for you to win our chess match – maybe an underdog victory is more inspiring or interesting or beautiful than the continued success of a champion. Does that imply that, as a player, the best move for me is to knock my king over and resign instead of moving my queen to achieve checkmate? Not obviously. Similarly, as commonly defended, consequentialism fails to overcome the gap between describing how the world objectively should be (in some sense) and how I as an individual ought to act.
The best argument against consequentialism, then, is just that it is confused about what morality is. Morality is not an objective attribute that inheres in states of affairs. Morality is, at its core, a guide for individuals choosing what to do. Insofar as a consequentialist theory is not rooted in the subjective experience of deliberation – of an individual trying to make sense of what they ought to do – it will not be answering the fundamental questions of morality.
This argument isn’t lethal for consequentialism because you can conceivably derive a consequentialist ethic from the nature of an individual moral agent. Nagel, in his book The Possibility of Altruism, tries something like this. But, this objection is problematic for many arguments for consequentialism and for the ways in which many consequentialists conceive of morality.
I would encourage consequentialist EAs interested in philosophy to take the individual deliberator as the primary locus of moral concern. Perhaps it will come out that what the individual should care about is the amount of welfare in the universe, but I’d think the exact implications of such a view would be somewhat incongruous with those of more typical consequentialist theories.
Persuading Non-Consequentialists On Their Own Terms
Thus far, I’ve argued that EAs should care more about the specific arguments for consequentialism and have rebutted some possible arguments. In doing so, I hope to have demonstrated some deficiency in the state of (what I’d argue is) an important part of the EA intellectual stack, and to have encouraged more philosophically inclined EAs to work on those foundations. In that spirit of trying to redirect EA philosophical resources, I’d also like to suggest that EAs work on arguing for “the numbers counting” within non-consequentialist ethical theories.
To me, the fundamental thesis of EA is something like “when we are being altruistic, ‘the numbers matter’, and we should do more good rather than less”. This thesis is not obviously incompatible with non-consequentialist theories that permit or demand some kind of altruism (eg Kantian duty of beneficence or the virtue of charity). Though these theories ground altruism differently from how consequentialism does, they both make room for the importance of altruism in the good life.
As someone pretty sympathetic to the Kantian ethical picture, I’ve long been bothered by the under-theorized Kantian duty of beneficence. Many EAs might not even realize that Kant identified a proactive duty to help others – though the duty of beneficence sometimes feels like a cop-out that is just shoved into the Kantian system to satisfy our common sense intuitions that we should broadly try to help others. Given the dearth of good arguments on the duty of beneficence, there’s an opportunity for an entrepreneurial, philosophically inclined EA to argue that even the most committed Kantian should be giving to GiveWell on their own terms. Consequentialism’s main competitors already recognize some kind of altruistic requirement but have not really developed a proprietary theory of what it entails. EAs should be explaining why an indirect duty to be altruistic derived from the categorical imperative would obligate us to care about the efficacy and impact of our altruism – which doesn’t seem like such a crazy stretch!
Note that convincing a Kantian to be effective in their altruism does not necessarily collapse them into being a consequentialist. Actualizing the duty of beneficence by giving to GiveWell instead of the Harvard endowment does not imply that one should push the fat man to stop the runaway trolley (as consequentialism might require). I think this opens up a whole interesting world, though. What does EA look like when it distinguishes between foreseen and intended harms, as an EA Kantian might? I could imagine the EA calculus on non-meat animal products like dairy and eggs changing under such a theory, for example. These possibilities should be exhilarating to moral entrepreneurs excited about envisioning the future of ethics.
[1] Matt Yglesias argues EA simply is applied consequentialism.
[2] I want to highlight Korsgaard’s argument, but I think it’s worth also spelling out as an aside the seeming absurdity of how the Parfitian perspective would work. Imagine pitching someone on Parfit as a solution to a real-life trolley problem. Imagine you had to tell the one person why you were sacrificing them for the sake of the five. “Well, when you think about it, personal identity isn’t really real. You shouldn’t see such thick boundaries between yourself and the five. We’re all just loci of qualia that contribute to universal levels of utility. Isn’t it better for the universe (and therefore kind of for you) for me to kill ‘you’ to save the five? Don’t be too tied to your ‘personhood’; it’s a myth that distracts you from your connection to all the other loci of qualia. Ok, ready for the train to come crush you?” Like, what?! You sound like a bizarre cult leader trying to persuade someone of the irreality of their identity as you try to kill them.
And, note: this intuition pump doesn’t work against consequentialism in general. Different philosophical justifications for consequentialism can have different practical implications, which makes it all the more important for EAs to get the foundational questions right. You could totally imagine a different pitch, based on a different foundation for consequentialism, that would sound kind of sane: “This is a hard decision to make, but I have to throw the lever for the greater good. It’s an impossible sacrifice to demand of you, but what can I do? Let five others die?” Maybe this doesn’t resonate perfectly, but it sounds like it’s in the ballpark of reasonability. Part of the gap between the two scenarios might be that the latter acknowledges the individuality of the victim as having some distinctive value. As I will explain later, moral philosophy should be willing to be revisionist, so just asserting “denying personal identity is unintuitive” is not a great argument, but I thought it would be worthwhile to tease that out a bit.
Does the Kantian sound any less crazy trying to justify themselves to the members of the five in this trolley problem? I kinda think so: “I’m so sorry, I just don’t think I can make another suffer to save you.” Surely each of the five would find that reasonable, as any of them could have found themselves on the other track. AND, importantly, I don’t have to justify myself to them “as the five” or even to the universe, which, according to consequentialism, is the victim of having lower total utility than it otherwise might have. Each individual in “the five” could understand that another individual is equal to them as an individual.
[3] I’m very much abbreviating Korsgaard’s argument here, but I think the whole thing is really great. If you’re interested in Parfit, definitely check out the chapter “Personal Identity and the Unity of Agency” in her book.