Thank you for the thorough answer.
To me it’s a practical matter: do I or don’t I believe that some set of people will exist?
To motivate that thinking, consider the possibility that ghosts exist and that their interests deserve to be taken into account. I consider that probability non-zero, because I can imagine plausible scenarios in which ghosts will exist, especially ones in which science invents them. However, I don’t factor those ghosts into my ethical calculations with any discount rate. Then there are travelers from parallel universes: again, a potentially huge population with a nonzero probability of existing (or appearing) in the future. They don’t get a discount rate either; in fact, I don’t consider them at all.
As for the large numbers of people in the far future, that future is not on the path humanity is walking right now. It’s still plausible, but I don’t believe in it. So: no discount rate for trillions of future people. And if I do come to believe in those trillions, there is still no discount rate; instead, those people are actual future people with full moral status.
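To make that contrast concrete, here is a rough sketch with made-up numbers, purely as an illustration of the two rules rather than any formal model I hold:

```python
# Illustration only: two ways of counting a possible future population.
# The probabilities and population sizes are invented for the example.

def discounted_weight(probability: float, population: int) -> float:
    """Expected-value style: weight the population by the probability it exists."""
    return probability * population

def binary_weight(believed: bool, population: int) -> int:
    """Binary-belief style: the population counts in full or not at all."""
    return population if believed else 0

# A merely conceivable population (ghosts, parallel-universe travelers,
# far-future trillions) gets zero weight unless I actually believe in it.
print(discounted_weight(0.01, 10**12))  # 1e10 "expected people" still count here
print(binary_weight(False, 10**12))     # 0: not believed, so not counted at all
print(binary_weight(True, 10**12))      # full moral status once believed
```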
Lukas Gloor’s description of contractualism and minimal morality, mentioned in a comment on your post, appeals to me and is close to my intuitions about morality in this context, but I am not sure my views on how to decide the altruistic value of actions match Gloor’s.
I have a few technical requirements before I will accept that I affect other people, currently alive or not. Also, I only see those effects as running from the present into the future, not from the present into the past. For example, I won’t concern myself with the moral impact of a cheeseburger, no matter what suffering its production caused, unless I somehow caused that production. However, I will concern myself with whatever suffering my eating of that burger will cause (not could cause, will cause) in the future. And I remain accountable for what I caused by the cheeseburgers I have already eaten.
Anyway, belief in a future is a binary thing to me. When I don’t know what the future holds, I just act as if I do. Being wrong in that scenario tends not to have much impact on the consequences I face, most of the time.