The main problem I see with this thought experiment is that it assumes away replaceability. It is implausible that the 10 people involved in these love letters would have zero children in the absence of these love letters, but that is what we are asked to believe. I suspect that virtually all scenarios where an action causes someone to be born suffer from replaceability. Even abortion does: most people want children; they just don’t want them at the time of an unintended pregnancy. So I’m not sure how much we learn from this thought experiment.
Is this kind of replaceability compatible with current practices in Longtermism?
What is the consequence of the claim that, if I fail to take an action that preserves Future People, Other Future People will likely replace them?
Let’s say I give money to MIRI instead of saving current people, based on some calculation of the future people I might save. Are we discounting those Future People by the Other Future People who would exist in their place? Don’t we value Other Future People just as much as Future People? Of course we do.
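To make that concrete, here is a minimal sketch of what such a discount might look like (the replacement probability $p$ and the value $V$ are illustrative assumptions on my part, not anything stated in the thought experiment):

$$
V_{\text{adjusted}} = V(\text{Future People}) - p \cdot V(\text{Other Future People})
$$

If we value Other Future People exactly as much as Future People, both terms share the same $V$, and the expression collapses to $(1 - p)\,V$. As replaceability $p$ approaches 1, the marginal value of my action approaches zero, which is precisely the discount the question above is pointing at.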
Perhaps that is the point of this thought experiment. Perhaps “of course you don’t pull the switch” is the only right answer precisely because of replaceability.