Should we discount future people in proportion to the probability of them not existing?

This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, and I haven’t checked everything. I was explicitly encouraged to post something unfinished!
Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong.

Inspired by Common-sense cases where “hypothetical future people” matter.

I agree with the general idea that discounting temporally distant people due to a pure time preference doesn’t make sense, in the same way that discounting geographically distant people due to a location preference doesn’t seem justified. This seems to be a common perspective in EA.

Does it make sense to discount future people in proportion to the probability that they will not exist? This seems ever-so-vaguely related to the idea of epistemic humility, and to recognizing that we cannot know with certainty what the future will be like. It also seems vaguely related to the idea of acting on values rather than focusing on specific causes, as in the example of ScotsCare. The farther into the future we project, the higher the uncertainty, and thus the more we should discount. So maybe, from where I stand in 2022, I should prioritize Alice (who was born in 2020) over Bob (who is expected to be born in 2030), who in turn is prioritized over Carl (who is expected to be born in 2040), simply because I know Alice exists, whereas Bob might end up never existing, and Carl has an even higher probability of never existing.
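To make that concrete, here is a minimal sketch of what "discounting in proportion to the probability of not existing" could look like. The probabilities for Bob and Carl are made up purely for illustration; the point is only that the discount is an expected-value calculation rather than a pure time preference.

```python
# A minimal sketch of the idea above, with made-up probabilities.
# Each person's "discounted weight" is their value to me if they exist,
# multiplied by my credence that they will in fact exist.

people = {
    "Alice (born 2020)": 1.00,    # she already exists
    "Bob (expected 2030)": 0.90,  # illustrative guess
    "Carl (expected 2040)": 0.75, # illustrative guess
}

value_if_exists = 1.0  # treat each life as equally valuable if it happens

for name, p_exists in people.items():
    expected_value = value_if_exists * p_exists
    print(f"{name}: discounted weight = {expected_value:.2f}")
```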

Short, vaguely related thought experiments/scenarios:

  • I do something to benefit a yet-to-be-born child, but then the mother has a miscarriage and that child never comes into being.

  • I invest money in a 529 plan[1] for my child, but when my child is 18 he/she decides not to go to college and to work instead.

  • You promise to pay me $X in ten years, and I fully and completely trust you… but maybe you will be robbed, or maybe you will die, or maybe something else will occur that prevents you from fulfilling your promise. So I should value this promise at less than $X (assuming we ignore the time value of money; see the sketch after this list).

  • If I’m trying to improve a currently existing national system in a particular country, I should keep in mind that countries don’t last forever.

  • I could set up an investment fund to pay out money for whatever health problem is the most severe in 200 years, but what if medical advances mean that there are no health problems left in 200 years?

  • I could focus on a project that will result in a lot of happiness on Earth in 4,000 years, but maybe Earth will be uninhabited then.
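For the promise scenario above, the same expected-value arithmetic applies. The amounts and the 5% failure probability here are invented purely for illustration.

```python
# Sketch of the promise example: value the promise at X times the
# probability it is actually fulfilled (ignoring the time value of money).
X = 100.0           # promised amount in dollars
p_fulfilled = 0.95  # illustrative: 5% chance of robbery, death, etc.

print(f"Expected value of the promise: ${X * p_fulfilled:.2f}")  # $95.00
```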

  1. ^

    A tax-advantaged financial account in the USA that can only be used for educational expenses.