I don’t think this is the kind of “ends justify the means” reasoning that MacAskill is objecting to. @Vasco Grilo🔸 is not arguing that we should break the law. He is just doing a fairly standard EA cause prioritization analysis. Arguing that people should not donate to global health doesn’t even contradict common-sense morality, because, as we see from the world around us, common-sense morality holds that it’s perfectly permissible to let hundreds of thousands of children die of preventable diseases. Utilitarians and other consequentialists are the ones who hold “weird” views here, because we reject the act/omission distinction in the first place.
(For my part, I try to donate in such a way that I’m net-positive from the perspective of someone like Vasco as well as global health advocates.)
Thanks, JBentham.
There is also the question of what the means and ends are here. Does the end of “increasing human welfare” justify the means of “increasing near-term suffering a lot”?
Right. As I commented above, it would not make any sense for someone who cares about animals to kill people.
You only did so on the grounds that it would not be an effective method and that it would decrease support for animal welfare. Presumably, if you could press a button to kill many people without anyone attributing it to the animal welfare movement, you would, then?
No. I guess that would increase welfare in the near term, but it could increase or decrease it overall due to uncertain longer-term effects. More importantly, killing people would make me feel bad even if I were the only one who would ever know about it. This would decrease my productivity and my donations to the best animal welfare interventions, which would be the dominant consideration given my estimate that one can neutralise the negative effects on animals of one person in 2022 with just a few cents.
I strongly endorse impartiality. So, if forced to pick between X and Y, and it is stipulated that X increases impartial welfare more than Y despite involving killing people, I would pick X. However, I do not see anything in the real world coming anywhere close to that.
Do you not worry about moral uncertainty? Unless you’re certain about consequentialism, surely you should put some weight on avoiding killing even if it maximises impartial welfare?
Hi Isaac.
I fully endorse expected total hedonistic utilitarianism (ETHU) in principle. However, I think it is often good to think about the implications of other moral theories as heuristics for following ETHU well in practice.
I think saving human lives increases the number of beings killed, by increasing the number of farmed and wild animals that are killed.
Thanks Vasco! :)
I agree that thinking about other moral theories is useful for working out what utilitarianism would actually recommend.
That’s an interesting point re increasing the total amount of killing; I hadn’t considered that! But I was actually picking up on your comment, which seemed to say something more general: that you wouldn’t intrinsically take into account whether an option involved (you) killing people, you’d just look at the consequences (and killing can lead to worse consequences, including in indirect ways, of course).

But it sounds like maybe your response to that is that you’re not worried about moral uncertainty / you’re sure about utilitarianism / you don’t have any reason to avoid killing people, other than the (normally very significant) utilitarian reasons not to kill?
Yes.