Here are two other considerations that haven’t yet been mentioned:
1. EA is supposed to be largely neutral between ethical theories. In practice, most EAs tend to be consequentialists, specifically utilitarians, and a utilitarian might plausibly think that killing one to save ten is the right thing to do (though others in this thread have given reasons why that might not be the case even under utilitarianism). But in principle, one could combine EA principles with most ethical systems. So if the ethical system you think is most likely to be correct includes side constraints/deontological rules against killing, then EA doesn’t require you to violate those constraints in the service of doing good; one can simply do the most good one can do within them.
2. Many EAs are interested in taking moral uncertainty into account, i.e. uncertainty about which moral system is correct. Even if you think the most likely theory is consequentialism, it can be rational to act as if there is a side constraint against killing if you place some amount of credence in a theory (e.g. a deontological theory) on which killing is always quite seriously wrong. The thought is this: if there’s some chance that your house will be damaged by a flood, it can be worth it to buy flood insurance, even if that chance is quite small, since the damage if it does happen will be very great. By the same token, even if the theory you think is most probable recommends killing in a particular case, it can still be worth it to refrain, if you also place some small credence in another theory on which killing is always seriously wrong (see the toy calculation after this list). Will MacAskill discusses this in his podcast with Rob Wiblin.
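To make the insurance-style reasoning concrete, here is a toy “maximize expected choice-worthiness” calculation in Python. The credences and choice-worthiness numbers are invented purely for illustration, not anything MacAskill or EA endorses; the point is only that a small credence in a theory that treats killing as extremely wrong can swamp the verdict of the theory you find most probable.

```python
# Toy expected choice-worthiness calculation under moral uncertainty.
# All numbers are hypothetical; only the structure of the argument matters.

credences = {
    "utilitarianism": 0.9,  # the theory you find most probable
    "deontology": 0.1,      # a small credence in a side-constraint theory
}

# Choice-worthiness of each option according to each theory (made-up scale).
# Utilitarianism mildly favors killing one to save ten; deontology treats the
# killing as catastrophically wrong and refraining as merely permissible.
choice_worthiness = {
    "kill one to save ten": {"utilitarianism": 9, "deontology": -1000},
    "refrain":              {"utilitarianism": -10, "deontology": 0},
}

for option, scores in choice_worthiness.items():
    expected = sum(credences[t] * scores[t] for t in credences)
    print(f"{option}: expected choice-worthiness = {expected:.1f}")

# Output:
# kill one to save ten: expected choice-worthiness = -91.9
# refrain: expected choice-worthiness = -9.0
# Even with 90% credence in utilitarianism, refraining wins, because the
# possible wrongness of killing on the deontological theory is so large.
```

Whether choice-worthiness is even comparable across theories like this is itself contested, but the sketch shows the flood-insurance structure of the argument: a small probability of a very bad outcome can dominate the decision.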
Tl;dr: you might think killing one to save ten is wrong because you’re a nonconsequentialist, and this is perfectly compatible with EA. Or, even if you are a consequentialist, and even if you think consequentialism sometimes recommends killing one to save ten, it might still be rational not to kill in those cases, if you place even a small credence in some other theory on which this would be seriously wrong.
Thank you very much for explaining this! I appreciate the analogy of flood damage and tiny risks of great losses; that’s such an interesting point that I never considered. After researching it, it seems like what you’re describing is Pascal’s mugging, so I’ll read up on that also. Thanks again.
What I was describing wasn’t exactly Pascal’s mugging. Pascal’s mugging is an attempted argument *against* this sort of reasoning: it tries to show that the reasoning leads to pathological conclusions (like that you ought to pay the mugger, when all he’s told you is some ridiculous story about how, if you don’t, there’s a tiny chance that something catastrophic will happen). Of course, some people bite the bullet and say that you should just pay the mugger, others claim that this sort of uncertainty reasoning doesn’t actually lead you to pay the mugger, and so on. I don’t really have a thought-out view on Pascal’s mugging myself. The reason what I’m describing is different is that the conclusion it leads to here, refraining from killing someone, wouldn’t be considered pathological by most people (the same goes for buying flood insurance).