I’m not following the reasoning for most of your claims, so I’ll just address the main claims I understand and disagree with.
If longtermists truly believe that the future will contain a lot of people, then they consider that future inevitable.
This doesn’t follow. There’s a difference between saying “X will probably happen” and “X will inevitably happen.”
Compare: Joe will probably get into a car accident in the next 10 years, so he should buy car insurance.
This is analogous to the longtermist position: There will probably be events that test the resilience of humanity in the next 100 years, so we should take actions to prepare for them.
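To put rough numbers on the insurance analogy (the probability and cost here are illustrative assumptions I’m making up, not actuarial data): suppose Joe faces a 30% chance over the decade of an accident that would cost him $20,000 out of pocket. His expected loss is then

\[
\mathbb{E}[\text{loss}] = 0.30 \times \$20{,}000 = \$6{,}000,
\]

so an insurance policy priced well below that figure is worth buying even though the accident is far from inevitable. The longtermist argument has the same shape: a high probability, not certainty, is what justifies preparing.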
For me the action of conception (procreation), fun though it can be, has no moral weight.
Although some longtermists think that it’s good to bring additional people into the world, this is not something that longtermists need to commit to. It’s possible to say, “Given that many billions of people will (probably) exist in the future, it’s important to make sure they don’t live in poverty/under a totalitarian regime/at risk of deadly pandemics.” In other words, we don’t have an obligation to create more people, but we do have an obligation to ensure the wellbeing of the people who live in the future.
Moreover, there are actions we can take that would not require any sacrifice of present people’s wellbeing (e.g. pandemic prevention, reducing carbon emissions, etc.). In fact, these would benefit both present and future generations.
For a defense of why it’s good to make happy people, I’d just refer to the chapter in MacAskill’s book.
It is not contradictory for you or for longtermists to work against the extinction of the human race while you believe that the human race will continue, provided you think that those actions to prevent extinction are a cause of the continuation of the human race and that those actions will be performed (not merely could be performed). A separate question is whether those actions should be performed.
I believe that longtermists believe that the future should contain many billions of people in a few hundred years, and that those hypothetical future people have moral status to longtermists. But why do longtermists think that the future should contain many billions of people and that it is our task to make those people’s lives happier?
I think the normal response is “But it is good to continue the human race. I mean, our survival is good, the survival of the species is good, procreating is good, we’re good to have in the universe. Taking action toward saving our species is good in the face of uncertainty even if the actions could fail; maybe some people would have to sacrifice so that our species continues, but our species is worth it. Eventually there can be trillions of us, and more of us is better, provided humans are all doing well by then,” but those are not my morals.
I want to be clear: we current humans could all live long happy lives, existing children could grow up and also live long happy lives, and existing fetuses could mature to term, be born, and live long and happily to a ripe old human age. So long as no one had any more children, because we all used contraception, our species would die out. I am morally OK with that scenario. I see no moral contradiction in it. If you do, let me know.
What is worrisome to me is that the above scenario, if it occurred in the context of hypothetical future people having moral status, would include the implication that those people who chose to live well but die childless were all immoral. I worry that longtermists would claim that those childless humans ended the human species and prevented a huge number of people from coming into existence, people who have moral status. I don’t believe those childless humans were immoral, but my belief is that longtermists do, in some contexts.
There is the thought experiment about making people happy vs making happy people. Well, first off, I am not morally or personally neutral toward the making of future people. And why would someone concerned about improving the welfare of existing people consider a future of more people a neutral possibility? It’s fairly obvious that anyone interested in the happiness of the world’s population would prefer that the population were smaller because that population would be easier to help.
In the far future, a small but steady population of a few million is one that altruists within that far-future population would find reasonable. That’s my belief right now, but I haven’t explored the numbers in enough detail.
In practice, many scenarios of altruism do not satisfy standards of selfish interest. Serving an ever-growing population is one of those scenarios. You don’t have to like or prefer your moral standards or their requirements. A bigger population is therefore a scary thing to altruists who give each person moral status because they can’t decide to develop moral uncertainty or shift their moral standards whenever that’s more convenient.
I still have to read MacAskill’s book though, and will carefully read the chapter you referenced.
But why do longtermists think that the future should contain many billions of people and that it is our task to make those people’s lives happier?
Different longtermists will have different answers to this. For example, many people think they have an obligation to make sure their grandchildren’s lives go well. It’s a small step from there to say that other people in the future besides one’s grandchildren are worth helping.
Or consider someone who buries a bomb in a park and sets the timer to go off in 200 years. It seems like that’s wrong even though no one currently alive will be affected by that bomb. If you accept that, you might also accept that there are good things we can do to help future generations who don’t yet exist.
What is worrisome to me is that the above scenario, if it occurred in the context of hypothetical future people having moral status, would include the implication that those people who chose to live well but die childless were all immoral.
No, this doesn’t follow. The mere fact that it’s good to do X doesn’t entail that anyone who doesn’t do X is immoral. Example: I think it’s good to grow crops to feed other people. But I don’t think everyone is morally obligated to be a farmer.
And again, longtermists are not committed to the claim that it’s good or necessary to create future people. It’s possible to be a longtermist and just say that it’s good to help the people who will be alive in the future, for example, by stopping the bomb that was placed in the park.
It’s fairly obvious that anyone interested in the happiness of the world’s population would prefer that the population were smaller because that population would be easier to help.
It seems like an important crux for you is that you think the world is overpopulated. I disagree. I think there are plenty of resources on Earth to support many billions more people, and the world would be better with a larger population. A larger population means more people to be friends with, more potential romantic partners, more taxpayers, more people generating new ideas, more people helping others.
OK, thanks for the response.
Yes, well, perhaps it’s true that longtermists expect that the future will contain lots of future people, many billions or trillions.
I do not believe:
that such a future is a good or moral outcome.
that such a future is a certain outcome.
I’m still wondering:
whether you believe that the future will contain future people.
whether the people you regard as hypothetical or possible future people have moral status.
I think I’ve said this a few times already, but the implication of a possible future person having moral status is that the person has moral status comparable to people who are actually alive and people who will definitely be alive. Do you believe that a possible future person has moral status?
Yes, I do expect the future to contain future people. And I think it’s important to make sure their lives go well.
Another crux seems to be that you think helping future people will involve some kind of radical sacrifice of people currently alive. This also doesn’t follow.
Consider: People who are currently alive in Asia have moral status. People who are currently alive in Africa have moral status. It doesn’t follow that there’s any realistic scenario where we should sacrifice all the people in Asia for the sake of Africans or vice versa.
Likewise, there are actions we can take to help future generations without the kind of dramatic sacrifice of the present that you’re envisioning.
Yes, I do expect the future to contain future people. And I think it’s important to make sure their lives go well.
OK then! If you believe that the future will contain future people, then I have no argument with you giving those future people moral status equivalent to that of those alive today. I disagree with the certainty you express, I’m not so sure, but that’s a separate discussion, maybe for another time.
I do appreciate what you’ve offered here, and I applaud your optimistic certainty. That is what I call belief in a future.
I assume then that you feel assured that whatever steps you take to prevent human extinction are also steps that you feel certain will work, am I right? EDIT: Or you feel assured that one of the following holds:
whatever steps someone takes will prevent human extinction,
humanity will survive catastrophic events, no matter the events, or
existential risks will not actually cause human extinction, maybe because they are not as threatening as some think.
I disagree with the certainty you express, I’m not so sure, but that’s a separate discussion, maybe for another time.
I haven’t expressed certainty. It’s possible to expect X to happen without being certain X will happen. Example: I expect there to be another pandemic in the next century, but I’m not certain about it.
I assume then that you feel assured that whatever steps you take to prevent human extinction are also steps that you feel certain will work, am I right?
No, this is incorrect for the same reason as above.
The whole point of working on existential risk reduction is to decrease the probability of humanity’s extinction. If there were already a 0% chance of humanity dying out, then there would be no point in that work.
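To spell out that logic with a sketch (the symbols are just an illustration I’m introducing here, not anything taken from MacAskill or a standard model): if an intervention lowers the probability of extinction from p_before to p_after, and V stands for the value of humanity’s continued survival, then its expected benefit is roughly

\[
\Delta \mathbb{E}[\text{value}] \approx \left(p_{\text{before}} - p_{\text{after}}\right) \times V.
\]

If p_before were already 0, no reduction would be possible and the benefit would vanish, which is exactly why the work presupposes a real, but not certain, risk.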
OK, so you aren’t so sure that lots of humans will live in the future, but those possible humans still have moral status, is that right?
I think they will have moral status once they exist, and that’s enough to justify acting for the sake of their welfare.
Do you believe that:
1. possible future people have moral status once they exist
2. the mere possibility of future people with moral status is enough to justify acting on their behalf
I believe point 1.
If you believe point 2, is that because you believe that possible future people have moral status now?
No, it’s because future moral status also matters.
Huh, “future moral status.” Is that comparable to present moral status in any way?
Longtermists think we should help those who do (or will) have moral status.
Oh, I agree with that, but is “future moral status” comparable to or the same as “present moral status”?
If you agree we should help those who will have moral status, that’s it. That’s one of the main pillars of longtermism. Whether or not present and future moral status are “comparable” in some sense is beside the point. The important point of comparison is whether they both deserve to be helped, and they do.
I agree that we should help those who have moral status now, whether those people exist already or just will exist someday. People who will exist someday are people who, according to our beliefs about the pathway into the future that we are on, will exist.
There is a set of hypothetical future people on pathways into the future that we are not on. Those pathways are of two types:
pathways that we are too late to start down (impossible future people)
pathways that we could still start down (possible future people or plausible future people)
If you contextualize something with respect to a past time point, then it is trivial to make it impossible. For example, “The child I had when I was 30 is an impossible future person.” With that statement, I describe an impossible person: I contextualized its birth as occurring when I was 30, but I didn’t have a child when I was 30, and I am now almost two decades older than 30. Therefore, that hypothetical future person is impossible.
Then there’s the other kind of hypothetical future person, for example, a person that I could still father. My question to you is whether that person should have moral status to me now, even though I don’t believe that the future will be welcoming and beneficial for a child of mine.
If you believe that a hypothetical future child does have moral status now, then you believe that I am behaving immorally by denying it opportunities for life because in your belief, the future is positive and my kid’s life will be a good one, if I have the kid. I don’t like to be seen as immoral in the estimation of others who use flawed reasoning.
The flaw in your reasoning is the claim that the hypothetical future child that I won’t have has moral status and that I should act on its behalf even though I won’t conceive it. You could be right that the future is positive. You are wrong that the hypothetical future child has any moral status by virtue of its future existence when you agree that the child might not ever exist.
If I had plans to have a child, then that future child would immediately take on moral status, contingent on those plans and on my beliefs about my influence over the future. However, I have no such plans. And, in fact, not much in the way of beliefs about my influence over the future.
I think you keep misinterpreting me, even when I make things explicit. For example, the mere fact that X is good doesn’t entail that people are immoral for not doing X.
Maybe it would be more productive to address arguments step by step.
Do you think it would be bad to hide a bomb in a populated area and set it to go off in 200 years?
I would like to avoid examples or discussion of the bomb example in this thread, thanks. I don’t like it.
We have talked about these issues quite a bit, so let me lay out a few points to mark off directions this conversation could go:
I think that a belief can be established at any level of rational certainty about the belief’s assertion or with any amount of confidence in the cogency of arguments that support the belief.
I use the word belief as broadly as I can. Some people talk about the strength of beliefs, and I think there are certainly differences between types of belief and their evidence. I am not referencing subtypes of beliefs when I use the word belief.
I’m not a solipsist. I think the world is real and people are real and outside my mind and exist when I’m not perceiving them, etc. At the same time, I don’t think possible future people currently exist outside of any belief that I have in their future existence.
As I have said several times, I do not think people have to actually exist now in order to have moral status. They can exist in the future as well and still have moral status now, to me, provided that I believe that they will exist in the future.
An entity having moral status is not the only reason to have a concern about its (hypothetical) well-being. For me, moral status is not the only measure of concern a person can have for [or about] a possible future entity.
I take some actions using a heuristic: I sometimes take actions just because I don’t like being incorrect and then regretting it (in advance of its possible consequences). But I go on believing I’m correct in the meantime. If I thought a future entity could exist, and that if it did exist it would have moral status (meaning I would care that it’s there), then I might still act on its behalf now, just in case it comes into being, even though I don’t believe that it will come into being.
Yes, I could act on behalf of people that might exist in the far future, despite not believing that they will exist. To a longtermist, it would seem like I heavily discounted the moral status of those future people, but what they would be seeing is how much effort I feel like putting into my heuristic (of avoiding failing to prevent regrettable possible consequences) at the time.
The heuristic is not based on a rational argument for possible future people’s moral status. However, I do use it to choose all kinds of actions where I think that if I were wrong and didn’t take action, it would matter to me. To convince me of possible future people’s moral status, all I would really need to think is that those people actually will exist. I believe people will be born in the next 60-70 years. They have moral status to me now.
In general, I care about people. At the same time, I qualify the level of care that I feel for others with thoughts about the character or behavior of those people.
If you are wondering whether I can uncover my beliefs about people in the future, people that will exist, the answer is yes, I can. Those people have moral status. There will be large numbers of people born over the next 30 years, for example, and all those people have moral status in my thinking.
I can qualify assertions about the future as contingent, and then talk about counterfactuals to what I believe. For example, contingent on people doing a better job of taking care of our only home world and ourselves, Earth could be home to humans X centuries from now.
I still haven’t read MacAskill’s book, and hope to get to that soon. Should we hold off further conversation until then?
Yes, I think it would be best to hold off. I think you’ll find MacAskill addresses most of your concerns in his book.