The possibility that the saved person’s future actions could be negative is a delicate one.
One problem arises when we stack the deck by crediting the positive effects of the saved person’s future actions to the lifesaving intervention while refusing to do the same for their negative actions. That makes the lifesaving intervention look artificially more attractive vis-à-vis the other interventions on the table.
My own tentative approach to balancing this out, at least for GiveWell-style beneficiaries,[1] is to count both the positive and negative expected downstream effects of the person’s choices in the calculus, but with a floor of zero / neutral net impact from them. I could try to come up with some sort of philosophical rationale for this approach, but in the end counting a toddler’s expected future actions against them on net feels way too much like “playing God” for me. So the value I ascribe to saving a life (at least for GiveWell-style beneficiaries) won’t fall below the value I’d assign without consideration of the person’s expected future actions.
I say this to bracket out the case of, e.g., an adult who has caused great harm through their deliberate actions and would be expected to continue causing that harm if I rescued them from a pond.
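To make the floor-of-zero rule concrete, here is a minimal sketch of the arithmetic I have in mind; the function name, units, and example numbers are hypothetical and not drawn from any actual cost-effectiveness model:

```python
def value_of_saving_life(direct_value: float, expected_downstream_value: float) -> float:
    """Value ascribed to a lifesaving intervention, counting the saved person's
    expected future actions (positive and negative) in the calculus, but never
    letting that term drag the total below the action-free baseline."""
    # Floor the downstream term at zero: negative expected effects
    # do not count against the beneficiary.
    return direct_value + max(0.0, expected_downstream_value)

# Hypothetical illustration, using 1.0 "unit" as the baseline value of a life saved:
print(value_of_saving_life(1.0, 0.3))   # 1.3 -- positive downstream effects are credited
print(value_of_saving_life(1.0, -0.3))  # 1.0 -- negative effects are floored at zero
```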