I am generally not that familiar with the creating-more-persons arguments beyond what I’ve said so far, so it’s possible I’m about to say something that people with person-affecting views have a good rebuttal for. But to me, the basic problem with “only caring about people who will definitely exist” is that nobody will definitely exist. We care about the effects of our actions on people born in 2024 because there’s a very high chance that lots of people will be born then, but it’s possible that an asteroid, comet, gamma-ray burst, pandemic, rogue AI, or some other threat could wipe us out by then. We’re only, say, 99.9% sure these people will be born, but this doesn’t stop us from caring about them.
As we get further and further into the future, we get less confident that there will be people around to benefit or be harmed by our actions, and this seems like a perfectly good reason to discount these effects.
And if we’re okay with doing that across time, it seems like we should similarly be okay with doing it within a given time. The UN projects a global population of 8.5 billion by 2030, but this is again not a guarantee. Maybe there’s a 98% chance that 8 billion people will exist then, an 80% chance that another 300 million will exist, a 50% chance that another 200 million will exist (getting us to a median of 8.5 billion), a 20% chance for 200 million more, and a 2% chance that there will be another billion after that. I think it would be odd to count everybody who has a 50.01% chance of existing and nobody who’s at 49.99%. Instead, we should treat both as having a ~50% chance of being around to be benefited or harmed by our actions and do the moral accounting accordingly.
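To make that accounting concrete, here’s a rough sketch (in Python, using the made-up breakdown above; only the 8.5 billion headline is the actual UN projection) comparing a 50% cutoff with weighting every group by its chance of existing:

```python
# Hypothetical 2030 figures from the paragraph above (illustrative only).
# Each entry: (probability that this additional group of people exists, group size).
groups = [
    (0.98, 8_000_000_000),   # 98% chance the first 8 billion exist
    (0.80, 300_000_000),     # 80% chance of another 300 million
    (0.50, 200_000_000),     # 50% chance of another 200 million (median: 8.5 billion)
    (0.20, 200_000_000),     # 20% chance of 200 million more
    (0.02, 1_000_000_000),   # 2% chance of another billion after that
]

# Counting with a 50% cutoff: only groups at or above the threshold count.
cutoff_population = sum(size for p, size in groups if p >= 0.5)

# Counting without a cutoff: every group counts, weighted by its chance of existing.
weighted_population = sum(p * size for p, size in groups)

print(f"50% cutoff:     {cutoff_population:,}")        # 8,500,000,000
print(f"weighted count: {weighted_population:,.0f}")   # 8,240,000,000
```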
Then, as you get further into the future, the error bars get a lot wider, and you wind up counting people who only exist in something like 0.1% of scenarios. This is less intuitive, but I think it makes more sense to count their interests as 0.1% as important as those of people who definitely exist today, just as we count the interests of people born in 2024 as 99.9% as important, rather than drawing the line somewhere and saying we shouldn’t consider them at all.
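Here’s the same idea applied to individual interests: no cutoff, just scaling each person’s interests by the probability that they will exist (the benefit units and probabilities are invented for illustration):

```python
# A sketch of the no-cutoff weighting: an interest counts in proportion to the
# probability that its bearer will exist.
def weighted_interest(benefit: float, p_exists: float) -> float:
    """Scale a benefit by the chance that anyone is around to receive it."""
    return benefit * p_exists

cases = [
    ("someone alive today", 1.0),
    ("someone born in 2024", 0.999),
    ("someone in a 0.1% far-future scenario", 0.001),
]

for label, p in cases:
    # The same benefit of 1.0 gets counted at 1.0, 0.999, and 0.001 respectively,
    # rather than being counted in full or dropped entirely.
    print(f"{label}: counted at {weighted_interest(1.0, p):.3f}")
```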
The question of whether these people born in 0.1% of future worlds are made better off by existing (provided that they have net-positive experiences) rather than not existing just returns us to my first reply to your comment: I don’t have super robust philosophical arguments but I have those intuitions.
Thank you for the thorough answer.
To me it’s a practical matter: do I believe that some set of people will exist, or not?
To motivate that thinking, consider the possibility that ghosts exist and that their interests deserve consideration. I consider the probability non-zero because I can imagine plausible scenarios in which ghosts will exist, especially ones in which science invents them. However, I don’t factor those ghosts into my ethical calculations with any discount rate. Then there are travelers from parallel universes: again, a potentially huge population with a nonzero probability of existing (or appearing) in the future. They don’t get a discount rate either; in fact, I don’t consider them at all.
As for the large numbers of people in the far future, that future is not on the path humanity is walking right now. It’s still plausible, but I don’t believe in it, so no discount rate for trillions of future people. And if I do come to believe in those trillions, still no discount rate. Instead, those people are actual future people with full moral status.
Lukas Gloor’s description of contractualism and minimal morality, mentioned in a comment on your post, appeals to me and is similar to my intuitions about morality in context, but I am not sure my views on judging the altruistic value of actions match Gloor’s.
I have a few technical requirements before I will accept that I affect other people, currently alive or not. Also, I only see those effects as running from present to future, not from present to past. For example, I won’t concern myself with the moral impact of a cheeseburger, no matter what suffering its production caused, unless I somehow caused that production. However, I will concern myself with whatever suffering my eating of that burger will cause (not could cause, will cause) in the future. And I remain accountable for whatever I caused by eating cheeseburgers in the past.
Anyway, belief in a future is a binary thing to me. When I don’t know what the future holds, I just act as if I do. Being wrong in that scenario tends not to have much impact on the consequences I face, most of the time.