All that’s required in all those cases is that you believe that some population will exist who benefits from your efforts.
It’s when the existence of those people is your choice that it no longer makes sense to consider them to have moral status pre-conception.
Or should I feel guilty that I deprived a number of beings of life by never conceiving children in situations that I could have?
It’s everyone else having children that creates the population I consider to have moral status. So long as they keep doing it, the population of beings with moral status grows.
The real questions are whether:
it is moral to sustain the existence of a species past the point at which doing so harms the species’ current members
the act of conceiving is a moral act
What do you think?
Let me see if I can build on this reasoning. Please tell me if I’ve misunderstood your position.
Since we’re pretty sure there will indeed be people living in Bangladesh in the future, you’re saying it’s reasonable to take into account the future lives saved by the seawalls when comparing the choice of whether to invest in climate resiliency vs immediately spend on bednets.
But, your position implies, we can only consider the lives saved by the seawalls, not future children who would be had by the people saved by seawalls, right? Suppose we have considered every possible way to save lives, and narrowed it down to two options: give bednets to elderly folks in areas with endemic malaria, or invest in seawalls in Bangladesh. Saving the elderly from malaria is the most cost-effective present-day intervention you’ve found, and it would have a benefit of X. Alternatively, we’ve done some demographic studies of Bangladesh, and some detailed climate forecasting, and concluded that the seawalls would directly save some people who haven’t been born yet but will be, and who would otherwise be killed by extreme weather, for a benefit of Y. Suppose further that we know those people will go on to have children, and the population will be counterfactually higher by an additional amount, for an additional benefit Z.
You’re saying that the correct comparison is not X vs 0 (which would be correct if you ignore all benefits to hypothetical future people), nor X vs Y+Z (which is if you include all benefits to hypothetical future people), but X vs Y (which is the appropriate comparison if you do include benefits to future people, but not ones who are brought about by your choosing between the options).
Is this indeed your position?
Yes, that’s right. (Edit: I’m thinking in terms of lives saved from populations here, not benefits accruing to those populations.) X vs Y. If Y is chosen (i.e., if the lives of Y are saved) and the seawall is built, then Y+Z (those populations) have moral status for me, assuming I am certain that population Y will conceive population Z. The details are below, but that’s my summary answer for you.
EDIT: Sorry, in the discussion below, my use of language conflates the original poster’s meaning of X as a benefit with the population receiving benefit X. Hopefully you can follow what I wrote. I address the spirit of the question, namely: is a choice between populations, one of which leads to an additional contingent population, bound by my belief that only people who will exist have moral status? As a further summary, and maybe to untangle benefits from populations at some point, I believe in:
mathematical comparison: comparing benefits by size and multiplying population by benefit per capita (if meaningful); a small sketch follows this list
actual people’s moral status: giving only actual (not potential) people moral status
smallness: preferring smaller future populations receiving those benefits, all other things equal
inclusive solutions: whenever feasible (for example, saving all at-risk populations)
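To make the first item concrete, here is a minimal sketch of what I mean by a mathematical comparison: multiply each population by its benefit per capita (where a per-capita benefit is meaningful) and compare the totals. The names and numbers are hypothetical illustrations, not figures from this discussion.

```python
# Minimal sketch of "mathematical comparison": total benefit as population
# size times benefit per capita. All names and numbers are hypothetical.

def total_benefit(population_size: int, benefit_per_capita: float) -> float:
    """Total benefit for one option, when a per-capita benefit is meaningful."""
    return population_size * benefit_per_capita

# Two hypothetical options with equal population and equal per-capita benefit
# compare as equal, which is the "all other things equal" case above.
option_a = total_benefit(population_size=10_000, benefit_per_capita=1.0)
option_b = total_benefit(population_size=10_000, benefit_per_capita=1.0)
print(option_a, option_b)  # 10000.0 10000.0
```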
And now, continuing on with my original discussion...
...
So, to use your example, I have to believe a few things:
1. bednets extend the lives of people who won’t have children.
2. seawalls extend the lives of people who will have children.
3. a life extended is an altruistic benefit to the person who lives longer.
4. a life created is an altruistic benefit to the person created.
I will try to steelman where this argument goes with a caveat:
as part of beliefs 3 and 4, an extended or created life does not reduce quality of life for any other human. NOTE: in a climate change context 20-30 years from now, I don’t actually believe 3, 4, or this caveat will hold for the majority of the human global population.
I think your question is:
how do I decide the benefit in terms of lives extended or created?
For me, that is roughly the same as asking what consequences the actions of installing seawalls and providing bednets each have. In the case where each action is an exclusive alternative and is mine to take, I might for altruistic reasons choose the action with the greater altruistic consequences.
So, your scenario goes:
X = total years of lives saved by bednets.
Y = total years of lives saved by seawalls.
Z = total years of lives lived for children born behind seawalls if seawalls are built.
EDIT: below I will refer to X, Y, and Z as populations, not the benefits accrued by the populations, since my discussion makes no mention of differences in benefits, just differences in populations and counts of lives saved.
Let’s assume further that:
in your scenario, X and Y are based on the same number of people.
lives in population X and population Y are extended by the same amount of time.
people in populations X and Y each value their lives equally.
people in populations X and Y experience the same amount of happiness.
My answer comes down to whether I believe it is my choice to cause (the saving of the lives of) X or Y+Z. If it is my choice, then I would choose X over Y+Z out of personal preference and beliefs, because:
Z is a hypothetical population, while X and Y are not. Choosing against Z only means that Z are never conceived.
The numbers of people and the years given to them are the same for Y as they are for X. My impact on each population if I save them is the same.
Humans have less impact on the natural world and its creatures with a smaller population, and a future of X is smaller than a future of Y+Z.
A smaller human population makes it easier to pursue altruistic ends for existing lives, for example if I want to continue being altruistic after saving one of the populations.
Aside from this scenario, however, what I calculate as altruistically beneficial is that both X and Y are saved and children Z are never conceived, because family planning and common sense allow population Y to not have children Z. Returning to this scenario, though, I can only save one of X or Y, and if I save Y, they will have children Z. Then, for the reasons I listed, I would choose X over Y.
I just went through your scenario in a context where I choose whether to save population X or population Y, but not both. Now I will go through the same scenario, but in a context where I do not choose between population X or population Y.
If:
it is not my choice whether X or Y+Z is chosen.
other existing people chose to save population Y with a seawall.
population Y, once it has a seawall, will have children Z.
population Y has or will get a seawall.
Then:
population Y will have children Z.
the population Z is no longer hypothetical.
Y+Z have moral status even though Z are not conceived yet.
However, there is no way of choosing between X and Y+Z that ignores that the future occurrence of Z is contingent on the choice and thus hypothetical. Accordingly, population Z has no moral status unless population Y is saved by seawalls.
Notice that, even then, I must believe that population Y will go on to have children Z. This is not a question of whether children Z could be conceived, or of whether I suspect that population Y will have children Z, or of whether I believe it is Y’s option to have children Z. I really have to know that Y will have children Z.
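Here is a minimal sketch, with hypothetical numbers of my own, of the two contexts just described: population Z only counts for me when its existence is not contingent on my own choice between the options.

```python
# Sketch of the rule described above. All figures are hypothetical.

def lives_with_moral_status(option: str, choice_is_mine: bool,
                            x_lives: int, y_lives: int, z_lives: int) -> int:
    """Lives I count as having moral status under a given option."""
    if option == "bednets":
        return x_lives
    if option == "seawalls":
        # Z is contingent on the seawall. If building it is my choice, Z stays
        # hypothetical and is excluded; if others have already committed to the
        # seawall and I know Y will have children Z, then Z counts as well.
        return y_lives if choice_is_mine else y_lives + z_lives
    raise ValueError(f"unknown option: {option}")

# My choice: compare X vs Y, so Z is excluded.
print(lives_with_moral_status("seawalls", choice_is_mine=True,
                              x_lives=1000, y_lives=1000, z_lives=500))   # 1000
# Not my choice, and the seawall will be built: Y+Z have moral status.
print(lives_with_moral_status("seawalls", choice_is_mine=False,
                              x_lives=1000, y_lives=1000, z_lives=500))   # 1500
```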
Also notice that, even if I remove beliefs 3 and 4, that does not mean that X or Y populations lose their moral status. A person stuck in suffering has moral status. However, decisions about how to help them will be different.
For example, if putting up seawalls saves Bangladesh from floods but not from drought and famine, I would say that the lives saved and the people’s happiness while alive are in doubt. Similarly in the case of saving the elderly from malaria: if you save them from malaria but they now face conditions worse than suffering malaria, then whether you have extended their lives or their happiness while alive is in doubt. Well, in doubt from my perspective.
However, I see nothing wrong with adding to potential for a good life, all other things equal. I’d say that the “all other things equal” only applies when you know very little about the consequences of your actions and your choices are not driven by resource constraints that force difficult decisions.
If:
you are altruistically minded
you have plenty of resources (for example, bednets and cement) so you don’t have to worry about triage
you don’t have beliefs about what else will happen when you save someone’s life
then it makes sense to help that person (or population). So yeah, supply the bednets and build the seawalls because why not? Who knows who will have children or eat meat or cause others harm or suffer a worse disease or die from famine? Maybe everything turns out better, and even if it doesn’t, you’ve done no harm by preventing a disease or stopping flooding from sea level rise.
I basically just sidestep these issues in the post except for alluding to the “transitivity problems” with views that are neutral to the creation of people whose experiences are good. That is, the questions of whether future people matter and whether more future people are better than fewer are indeed distinct, so these examples do not fully justify longtermism or total utilitarianism.
Borrowing this point from Joe Carlsmith: I do think that like, my own existence has been pretty good, and I feel some gratitude towards the people who took actions to make it more likely and personal anger towards those who made it less likely (e.g. nuclear brinksmanship). To me, it does seem like if there are people who might or might not exist in the future who would be glad to exist (though of course would be neutral to nonexistence), it’s good to make them exist.
I also think the linked “transitivity problems” are pretty convincing.
I basically think the stuff about personally conceiving/raising children brings in lots of counterproductive baggage to the question, related to the other effects of these actions on others’ lives and my own. I think pretty much everything is a “moral act” in the sense that its good or bad foreseeable effects have moral significance, including like eating a cheeseburger, and conception isn’t an exception; I just don’t want to wade into the waters of whether particular decisions to conceive or not conceive are good or bad, which would depend on lots of context.
About MacAskill’s Longtermism
Levin, let me reassure you that, regardless of how far in the future they exist, future people that I believe will exist do have moral status to me, or should.
However, above a population in the lower millions, I see no reason to find more humans alive in the far future morally preferable to fewer.
Am I wrong to suspect that MacAskill’s idea of longtermism includes that a far future containing more people is morally preferable to a far future containing fewer people?
A listing of context-aware vs money-pump conditions
The money pump seems to demonstrate that maximizing moral value inside a particular person-affecting theory of moral value (one that is indifferent toward the existence of nonconceived future people) harms one’s own interests.
In context, I am indifferent to the moral status of nonconceived future people that I do not believe will ever exist. However, in the money pump, there is no distinction between people who could someday exist and people who will someday exist.
In context, making people is morally dangerous. However, in the money pump, it is morally neutral.
In context, increasing the welfare of an individual is not purely altruistic (for example, with respect to everyone else). However, in the money pump, it is purely altruistic.
In context, the harm of preventing conception of additional life is only what it causes those who will live, just as in the money pump.
The resource that you linked on transitivity problems includes a tree of valuable links for me to explore. The conceptual background information should be interesting, thank you.
About moral status meaning outside the context of existent beings
Levin, which nonconceived humans (for example, humans that you believe will never be conceived) do not have moral status in your ethical calculations?
Are there any conditions in which you do not believe that future beings will exist but you give them moral status anyway?
I am trying to answer whether my understanding of what moral status allows or requires is flawed.
For me, another being having moral status requires me to include effects on that being in my calculations of the altruism of my actions. A being that will never exist will not experience the consequences of my actions and so should be excluded from my moral calculations. However, I might use a different definition of moral status than you do.
Thank you.
I am generally not that familiar with the creating-more-persons arguments beyond what I’ve said so far, so it’s possible I’m about to say something that holders of person-affecting views have a good rebuttal for, but to me the basic problem with “only caring about people who will definitely exist” is that nobody will definitely exist. We care about the effects of our actions on people born in 2024 because there’s a very high chance that lots of people will be born then, but it’s possible that an asteroid, comet, gamma-ray burst, pandemic, rogue AI, or some other threat could wipe us out by then. We’re only, say, 99.9% sure these people will be born, but this doesn’t stop us from caring about them.
As we get further and further into the future, we get less confident that there will be people around to benefit or be harmed by our actions, and this seems like a perfectly good reason to discount these effects.
And if we’re okay with doing that across time, it seems like we should similarly be okay with doing it within a given time. The UN projects a global population of 8.5 billion by 2030, but this is again not a guarantee. Maybe there’s a 98% chance that 8 billion people will exist then, an 80% chance that another 300 million will exist, a 50% chance that another 200 million will exist (getting us to a median of 8.5 billion), a 20% chance for 200 million more, and a 2% chance that there will be another billion after that. I think it would be odd to count everybody who has a 50.01% chance of existing and nobody who’s at 49.99%. Instead, we should take both as having a ~50% chance of being around to be benefited/harmed by our actions and do the moral accounting accordingly.
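To put that accounting in concrete terms, here is a minimal sketch using the illustrative numbers above (made-up probabilities, not a forecast): each tier of possible people is weighted by its chance of existing rather than counted fully above 50% and ignored below it.

```python
# Probability-weighted moral accounting: weight each tier of possible people
# by its chance of existing, instead of using a hard 50% cutoff. Illustrative only.

tiers = [
    (0.98, 8_000_000_000),  # 98% chance the first 8 billion exist
    (0.80,   300_000_000),  # 80% chance of another 300 million
    (0.50,   200_000_000),  # 50% chance of another 200 million (median ~8.5 billion)
    (0.20,   200_000_000),  # 20% chance of 200 million more
    (0.02, 1_000_000_000),  # 2% chance of another billion after that
]

expected_population = sum(p * n for p, n in tiers)
print(f"{expected_population:,.0f}")  # 8,240,000,000
```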
Then, as you get further into the future, the error bars get a lot wider and you wind up starting to count people who only exist in like 0.1% of scenarios. This is less intuitive, but I think it makes more sense to count their interests as 0.1% as important as people who definitely exist today, just as we count the interests of people born in 2024 as 99.9% as important, rather than drawing the line somewhere and saying we shouldn’t consider them at all.
The question of whether these people born in 0.1% of future worlds are made better off by existing (provided that they have net-positive experiences) rather than not existing just returns us to my first reply to your comment: I don’t have super robust philosophical arguments but I have those intuitions.
Thank you for the thorough answer.
To me it’s a practical matter. Do I believe or not that some set of people will exist?
To motivate that thinking, consider the possibility that ghosts exist and that their interests deserve to be taken into account. I consider its probability non-zero because I can imagine plausible scenarios in which ghosts will exist, especially ones in which science invents them. However, I don’t factor those ghosts into my ethical calculations with any discount rate. Then there are travelers from parallel universes: again, a potentially huge population with a nonzero probability of existing (or appearing) in the future. They don’t get a discount rate either; in fact, I don’t consider them at all.
As for the large numbers of people in the far future, that future is not on the path that humanity walks right now. It’s still plausible, but I don’t believe in it. So: no discount rate for trillions of future people. And if I do come to believe in those trillions, still no discount rate. Instead, those people are actual future people with full moral status.
Lukas Gloor’s description of contractualism and minimal morality that is mentioned in a comment on your post appeals to me, and is similar to my intuitions about morality in context, but I am not sure my views on deciding altruistic value of actions match Gloor’s views.
I have a few technical requirements before I will accept that I affect other people, currently alive or not. Also, I only see those effects as running from present to future, not present to past. For example, I won’t concern myself with the moral impacts of a cheeseburger, no matter what suffering was caused by its production, unless I somehow caused that production. However, I will concern myself with what suffering my eating of that burger will cause (not could cause, will cause) in the future. And I remain accountable for whatever I caused by the cheeseburgers I ate before.
Anyway, belief in a future is a binary thing to me. When I don’t know what the future holds, I just act as if I do. Being wrong in that scenario tends not to have much impact on my consequences, most of the time.