I think it’s really hard to intuitively feel the same level of confidence that you would save a life 100 years from now compared to today.
If I were a time traveler and someone asked me “Would you rather save a life in 1800 or 1900?”, I could be confident that the life would actually be saved either way. But if a charity approaches me and offers to save 500 lives in 500 years for a small donation, that’s definitely a scam! So I think there are really good reasons why people’s intuitions on this don’t always match what mathematicians or philosophers might think.
I would note that the tradeoff question we asked didn’t ask about donating to a charity in order to save lives 500 years in the future, they asked whether it’s “morally better” to save 1 person now or x people in the future. I agree that degree of confidence in outcomes might influence people’s judgements about the charity cases though.
I know you didn’t ask about it, and people might not even consciously think about it; I just think people are bad at thought experiments.
I think I share your concern. I don’t know to what extent people are discounting people in the far future for epistemic reasons (“do we really know that those lives will be saved 500 years from now?”) and to what extent it’s for moral reasons (“I just think that people who haven’t been born yet and are in no way linked to me or my grandchildren shouldn’t be given much moral value compared to people who are alive today”).
Interestingly, this point didn’t come up in the qualitative research I mentioned in another comment, though perhaps it would have surfaced with more discussion and more participants.