This project and many participants on this forum this week also seem to be neglecting the positive utilitarian perspective. One human saved has the potential to make an immense positive impact on the world, whereas animals do not.
Possible effects like these are worth considering, but they’re not part of moral weights in the sense meant here. Moral weights are only one of the things you should consider when comparing possible interventions, not an all-things-considered score that includes every possible flow-through effect.
As such, what is at stake here is not “the positive utilitarian perspective”. Moral weights do include the positive welfare of individuals; they just don’t include the possible (positive or negative) side-effects of helping different individuals.
Thank you—that is helpful and does make more sense. I was under the false impression that moral weights were designed to be the only thing people ought to consider when comparing interventions, and I’m curious how many people on both sides of the argument have a similar misconception.
Well, to be fair, a human saved also has the potential to have an immense negative impact (for instance, actively supporting factory farming).
I understand the sentiment about ripple effects, but I find this way of phrasing it unfortunate. Saving the life has a positive impact; it’s the actions of the person afterwards that could be negative.
Weighing the possibility that the saved person’s future actions could be negative is a delicate matter.
One problem arises when we stack the deck by ascribing the positive effects of the saved person’s future actions to the lifesaving intervention while refusing to do the same with their negative actions. That makes the lifesaving intervention look more attractive vis-à-vis the other interventions on the table.
My own tentative approach to balancing this out, at least for GiveWell-style beneficiaries,[1] is to count both the positive and negative expected downstream effects of the person’s choices in the calculus, but with a floor of zero / neutral impact from them. I could try to come up with some philosophical rationale for this approach, but in the end counting a toddler’s expected future actions against them on net feels far too much like “playing God” to me. So the value I ascribe to saving a life (at least for GiveWell-style beneficiaries) won’t fall below the value I’d assign without considering the person’s expected future actions.
[1] I say this to bracket out the case of, e.g., an adult who has caused great harm through their deliberate actions and would be expected to continue causing that harm if I rescued them from a pond.
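A minimal sketch of the floor-at-zero rule described above, with purely illustrative numbers; the function name and values are placeholders, not anyone’s actual estimates:

```python
# Hypothetical sketch of the "floor at zero" rule: count expected downstream
# effects of the saved person's choices, but never let them drag the total
# below the direct value of saving the life.

def value_of_saving_life(direct_value: float, expected_downstream: float) -> float:
    """Direct value of the life saved, plus downstream effects of the
    person's future choices, floored at zero so they never count
    against the beneficiary on net."""
    return direct_value + max(0.0, expected_downstream)

# Net-negative downstream expectations are floored at zero...
print(value_of_saving_life(direct_value=10.0, expected_downstream=-3.0))  # 10.0
# ...while net-positive ones still add to the total.
print(value_of_saving_life(direct_value=10.0, expected_downstream=4.0))   # 14.0
```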