Presumably, if you could press a button to kill many people without anyone attributing it to the animal welfare movement, you would, then?
No. I guess that would increase welfare in the near term, but it could increase or decrease welfare overall due to uncertain longer-term effects. More importantly, killing people would make me feel bad even if I were the only one who would ever know about it. This would decrease my productivity and donations to the best animal welfare interventions, which would be the dominant consideration given my estimate that one can neutralise the negative effects on animals of one person in 2022 with just a few cents.
I strongly endorse impartiality. So, if forced to pick between X and Y, and it is stipulated that X increases impartial welfare more than Y despite involving killing people, I would pick X. However, I do not see anything in the real world coming anywhere close to that.
Do you not worry about moral uncertainty? Unless you’re certain about consequentialism, surely you should put some weight on avoiding killing even if it maximises impartial welfare?
Hi Isaac.
I fully endorse expected total hedonistic utilitarianism (ETHU) in principle. However, I think it is often good to think about the implications of other moral theories as heuristics to follow ETHU well in practice.
I think saving human lives increases the total number of beings killed, by increasing the number of farmed and wild animals killed.
Thanks Vasco! :)
I agree that thinking about other moral theories is useful for working out what utilitarianism would actually recommend.
That’s an interesting point re increasing the total amount of killing; I hadn’t considered that! But I was actually picking up on your comment, which seemed to say something more general: that you wouldn’t intrinsically take into account whether an option involved (you) killing people; you’d just look at the consequences (and killing can lead to worse consequences, including in indirect ways, of course).

But it sounds like maybe your response to that is that you’re not worried about moral uncertainty / you’re sure about utilitarianism / you don’t have any reason to avoid killing people, other than the (normally very significant) utilitarian reasons not to kill?
Yes.