Well, to be fair, a human saved also has the potential to have an immense negative impact (for instance, actively supporting factory farming).
I understand the sentiment about ripple effects, but I find this way of phrasing it unfortunate. Saving the life has positive impact; it's the actions of the person afterward that could be negative.
The possibility that the saved person's future actions could be negative is a delicate issue.
One problem comes when we stack the deck by ascribing the positive effects of the saved person's future actions to the lifesaving intervention but refuse to do the same with the saved person's negative actions. That makes the lifesaving intervention look more attractive vis-à-vis other interventions that are on the table.
My own tentative thinking, at least for GiveWell-style beneficiaries,[1] is to balance this out by counting both the positive and negative expected downstream effects of the person's choices in the calculus, but with a floor of zero / neutral impact from them. I could try to come up with some sort of philosophical rationale for this approach, but in the end counting a toddler's expected future actions against them on net feels way too much like "playing God" to me. So the value I ascribe to saving a life (at least for GiveWell-style beneficiaries) won't fall below the value I'd assign without consideration of the person's expected future actions.
I say this to bracket out the case of, e.g., an adult who has caused great harm through their deliberate actions and would be expected to continue causing that harm if I rescued them from a pond.
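If it helps to see the floor-at-zero rule spelled out, here is a minimal sketch in Python. The function name and the numbers are made up purely for illustration; they are not GiveWell figures or anyone's actual moral weights.

```python
# Minimal sketch of the floor-at-zero heuristic described above.
# All names and numbers are hypothetical illustrations.

def value_of_saving_life(direct_value: float,
                         expected_downstream_effects: float) -> float:
    """Direct value of the life saved, plus the expected net value of the
    person's future choices, floored at zero so those choices can add to
    the case for saving them but never count against it."""
    return direct_value + max(0.0, expected_downstream_effects)

# The downstream term can only raise the intervention's value, never lower it:
print(value_of_saving_life(direct_value=1.0, expected_downstream_effects=0.4))   # 1.4
print(value_of_saving_life(direct_value=1.0, expected_downstream_effects=-0.4))  # 1.0
```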