Would killing one be in line with EA if it can save 10?
Please let me know the flaw in my logic, or whether it is sound. I’m a big fan of EA, but this came up in a discussion with a friend and I’ve been mulling it over.
1. If donating $x to cancer research saves 1 person, but donating $x to the Against Malaria Foundation saves 10, then Against Malaria would be the correct choice based on the principles of effective altruism, correct?
1.1 Assume that without that $x, the people in the group that does not receive the money (the 1 or the 10) will die.
1.2 Assume also that you are fully aware of both options in 1 as well as the repercussions in item 1.1.
2. By donating to Against Malaria, you are, through opportunity cost, indirectly killing that one person with cancer.
3. Assume you had the option to directly kill 1 person to save 10, or to directly kill the 10 to save 1. (This is purely a hypothetical, not advocacy, per forum rules.)
4. In item 1, you are indirectly killing 1 to save 10. In item 3, if you choose the first option, you are directly killing 1 to save 10.
5. In both situations, you had knowledge that saving the 10 would result in killing the 1.
6. The only difference is the intent to kill that one person, which is present in the direct situation (item 3) but not the indirect one (item 1).
7. If you are aware that you will be indirectly killing them in the first situation, then what difference does the actual intent to kill make?
The first option in item 3 intuitively seems wrong, yet it seems to fall in line with the beliefs of effective altruism, so can someone help me identify the flaw in my reasoning?
A short answer might be “In real life, people view these two scenarios very differently, and ignoring psychology and sociology may get you some interesting thought experiments but will not lead you anywhere useful when working with actual humans.”
Or to quote the EA Guiding Principles:
“Because we believe that trust, cooperation, and accurate information are essential to doing good, we strive to be honest and trustworthy. More broadly, we strive to follow those rules of good conduct that allow communities (and the people within them) to thrive.” Not killing people seems like a pretty basic requirement for trust and cooperation. There are socially agreed-upon exceptions, like governments’ use of force, but even those are divisive.
Some other writing that’s addressed this:
https://www.lesswrong.com/posts/prb8raC4XGJiRWs5n/consequentialism-need-not-be-nearsighted
https://www.lesswrong.com/posts/K9ZaZXDnL3SEmYZqB/ends-don-t-justify-means-among-humans
Thank you very much! The links were especially helpful—the doctor scenario in the first example is pretty much what I was talking about above, so their explanation of why it would be unwise to kill makes a lot of sense!
Here are two other considerations that haven’t yet been mentioned:
1. EA is supposed to be largely neutral between ethical theories. In practice, most EAs tend to be consequentialists, specifically utilitarians, and a utilitarian might plausibly think that killing one to save ten was the right thing to do (though others in this thread have given reasons why that might not hold even under utilitarianism). In theory, though, one could unite EA principles with most ethical systems. So if the ethical system you think is most likely to be correct includes side constraints/deontological rules against killing, then EA doesn’t require you to violate those side constraints in the service of doing good; one can simply do the most good one can within those side constraints.
2. Many EAs are interested in taking moral uncertainty into account, i.e. uncertainty about which moral system is correct. Even if you think the most likely theory is consequentialism, it can be rational to act as if there is a side constraint against killing if you place some credence in a theory (e.g. a deontological theory) on which killing is always quite seriously wrong. The thought is this: if there’s some chance that your house will be damaged by a flood, it can be worth it to buy flood insurance, even if that chance is quite small, since the damage if it does happen will be very great. By the same token, even if the theory you think is most probable recommends killing in a particular case, it can still be worth it to refrain, if you also place some small credence in another theory that holds killing is always seriously wrong (the toy calculation after the tl;dr below illustrates this). Will MacAskill discusses this in his podcast episode with Rob Wiblin.
Tl;dr: you might think killing one to save ten is wrong because you’re a nonconsequentialist, and this is perfectly compatible with EA. Or, even if you are a consequentialist, and even if you think consequentialism sometimes recommends killing one to save ten, it might still be rational not to kill in those cases, if you place even a small credence in some other theory on which this would be seriously wrong.
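To make the flood-insurance reasoning concrete, here’s a minimal sketch of an expected-choiceworthiness calculation under moral uncertainty. The credences, the penalty value, and the assumption that the two theories’ values can be put on one shared scale are all illustrative choices of mine, not anything from MacAskill’s discussion:

```python
# Toy expected-choiceworthiness calculation under moral uncertainty.
# All numbers below are illustrative assumptions, not established figures.

# Credences: how likely you think each moral theory is to be correct.
credences = {"consequentialism": 0.9, "deontology": 0.1}

# Choiceworthiness of each act under each theory, on an assumed shared scale.
# Consequentialism: killing one to save ten nets +9 lives.
# Deontology: killing is treated as gravely wrong, hence a large penalty.
choiceworthiness = {
    "kill one to save ten": {"consequentialism": 9, "deontology": -1000},
    "refrain":              {"consequentialism": 0, "deontology": 0},
}

def expected_choiceworthiness(act):
    """Credence-weighted value of an act across the candidate theories."""
    return sum(credences[theory] * value
               for theory, value in choiceworthiness[act].items())

for act in choiceworthiness:
    print(f"{act}: {expected_choiceworthiness(act):.1f}")
# kill one to save ten: 0.9 * 9 + 0.1 * (-1000) = -91.9
# refrain: 0.0
# Even with 90% credence in consequentialism, a small credence in a theory
# that treats killing as gravely wrong makes refraining the better bet.
```

Whether values can be compared across theories like this is itself contested in the moral uncertainty literature, so treat this as an illustration of the structure of the argument, not a settled method.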
Thank you very much for explaining this! I appreciate the flood insurance analogy, where a small risk of great damage justifies the cost; that’s such an interesting point that I never considered. After researching it, it seems like what you’re describing is Pascal’s mugging, so I’ll read up on that as well. Thanks again.
What I was describing wasn’t exactly Pascal’s mugging. Pascal’s mugging is an attempted argument *against* this sort of reasoning: it tries to show that the reasoning leads to pathological conclusions (like that you ought to pay the mugger, when all he’s told you is some ridiculous story about how, if you don’t, there’s a tiny chance that something catastrophic will happen). Of course, some people bite the bullet and say that you should just pay the mugger, others claim that this sort of uncertainty reasoning doesn’t actually lead you to pay the mugger, and so on. I don’t really have a thought-out view on Pascal’s mugging myself. The reason what I’m describing is different is that most people wouldn’t consider [this sort of reasoning leading you to *not* kill someone] a pathological conclusion (the same goes for buying flood insurance).
I’m not sure many EAs will agree with your intuition (if I’m understanding your question correctly) that it’s morally wrong to kill one person to save 10. There are certainly some moral philosophers who do, however. This dilemma is often referred to as the “trolley problem” and has had plenty of discussion over the years.
You may find this interesting reading; it turns out people’s intuitions about similar problems vary quite a lot across cultures.
Interesting. So what are the bounds of effective altruism’s views with regard to maximizing impact? For example, if a eugenics program were found to be the #1 way to increase humanity’s chances of survival, would that be an acceptable/ideal program to donate to through the lens of effective altruism?
Thanks, that article is interesting; it’s notable that the views mainly fell into three groups: Western, Southern, and Eastern.
People’s beliefs differ widely on questions like that, even within EA. But it’s helpful to keep in mind that things like “eugenics programs” in the Nazi sense (or various other forms of crime) are highly unlikely to be the best way to increase humanity’s chances of survival, because they have many flow-through effects that are bad in a variety of ways.
To quote Holden Karnofsky:
Stealing money to save lives may seem moral in the short run, but there are so many ways theft can backfire that it’s probably a terrible strategy even if you’re focused on the total utility of your actions and ignoring commonsense prohibitions against stealing. You could be caught and jailed, reducing your ability to do good for a long time; your actions could hurt the reputation of EA as a movement and/or the reputation of the charity you supported; your victim could become an outspoken advocate against EA; and so on.
The general strategy that seems likely to most improve the future involves building a large, thriving community of people who care a lot about doing good and also care about using reason and evidence. Advocating crime, or behavior that a vast majority of people would view as actively immoral, makes it very hard to build such a community.
Thanks so much for the explanation! That makes a lot of sense, especially the third paragraph!