I'd say that it's a (putative) instance of adversarial ethics rather than "ends justify the means" reasoning (in the usual sense of violating deontic constraints).
Sometimes that seems OK. Like, it seems reasonable to refrain from rescuing the large man in my status-quo-reversal of the Trolley Bridge case. (And to urge others to likewise refrain, for the sake of the five who would die if anyone acted to save the one.) So that makes me wonder if our disapproval of the present case reflects a kind of speciesism: either our own, or the anticipated speciesism of a wider audience for whom this sort of reasoning would pose a PR problem?
OTOH, I think the meat-eater problem is misguided anyway, so another possibility is just that mistakenly urging against saving innocent people's lives is especially bad. I guess I do think the moral risk here is sufficient to be extra wary about how one expresses concerns like the meat-eater problem. Like Jason, I think it's much better to encourage AW offsets than to discourage GHD life-saving.
(Offsetting the potential downsides from helping others seems like a nice general solution to the problem of adversarial ethics, even if it isn't strictly optimal.)
So that makes me wonder if our disapproval of the present case reflects a kind of speciesism: either our own, or the anticipated speciesism of a wider audience for whom this sort of reasoning would pose a PR problem?
Trolley problems are sufficiently abstract (and presented in the context of an extraordinary set of circumstances) that they are less likely to trigger some of the concerns (psychological or otherwise) triggered by the present case. In contrast, lifesaving activity is pretty common: it's hard to estimate how many times the median person would have died if most people did not engage in lifesaving action, but I imagine the number is fairly significant.
If I am in mortal danger, I want other people to save my life (and the lives of my wife and child). I do not want other people deciding whether I get treatment for a deadly infectious disease based on their personal assessment of whether saving my life would be net-positive for the world. That's true whether the assessment would be based on assumptions about people like me at a population level, or about my personal value-add / value-subtract in the decider's eyes. If I have that expectation of other people, but don't honor the resulting implied social contract in return, that would seem rather hypocritical of me. And if I'm going to honor the deal with fellow Americans (mostly white), but not honor it with young children in Africa, that makes me rather uncomfortable too, for presumably obvious reasons.
We sometimes talk about demandingness in EA. A theory under which I would need to encourage people not to save me, my wife, and my son, if they concluded our reference class (upper-middle-class Americans, likely) was net negative for the world, is simply too demanding for me, and likely for 99.9% of the population too.
Finally, I'm skeptical that human civilization could meaningfully thrive if everyone applied this kind of logic when deciding whether to engage in lifesaving activities throughout their lives. (I don't see how it would make sense to limit the logic to charitable endeavors.) Especially if the group whose existence is judged net negative is as large as the set of people who eat meat! In contrast, I have no concerns about societies and cultures functioning adequately however people answer trolley-like problems.
So I think those kinds of considerations might well explain why the reaction here differs from the reaction to an academic problem.
I agree with most except perhaps the framing of the following paragraph.
Sometimes that seems OK. Like, it seems reasonable to refrain from rescuing the large man in my status-quo-reversal of the Trolley Bridge case. (And to urge others to likewise refrain, for the sake of the five who would die if anyone acted to save the one.) So that makes me wonder if our disapproval of the present case reflects a kind of speciesism: either our own, or the anticipated speciesism of a wider audience for whom this sort of reasoning would pose a PR problem?
In my opinion, the key difference is that here the bad outcome (e.g. animal suffering, but it could be any other) may happen because of decisions taken by the people you are saving. So, in a sense, it is not an externally imposed mechanism. The key insight, to me, is that the children always have the chance to prevent the suffering that follows: people can reason and become convinced, as I was, that this suffering is important and should be prevented. Consequently, I feel strongly against letting innocent people die in these situations. So overall I do not think this has to do with speciesism or cause prioritisation.
Incidentally, this echoes common themes in films and books: that people can change their minds, and that they should be given the chance to. Similarly, it is a common theme that you should not kill innocent people to prevent some bad thing from happening (think of Thanos and overpopulation, or Caiaphas arguing that Jesus should die to prevent greater wrongdoing...). Clearly these are not strong ethical arguments, but I think they contain a grain of truth; and one should probably have a very strong, taboo-level bias against endorsing (as opposed to merely discussing) conclusions that justify letting innocent people die.