I think you keep misinterpreting me, even when I make things explicit. For example, the mere fact that X is good doesn’t entail that people are immoral for not doing X.
Maybe it would be more productive to address arguments step by step.
Do you think it would be bad to hide a bomb in a populated area and set it to go off in 200 years?
I would rather avoid the bomb example, and discussion of it, in this thread, thanks. I don't like it.
We have talked about these issues quite a bit, so let me lay out some points to mark off the directions this conversation could go:
I think that a belief can be held at any level of rational certainty about its content, and with any amount of confidence in the cogency of the arguments that support it.
I use the word belief as broadly as I can. Some people talk about the strength of beliefs, and I think there are certainly differences between types of belief and their evidence. I am not referencing subtypes of beliefs when I use the word belief.
I’m not a solipsist. I think the world is real and people are real and outside my mind and exist when I’m not perceiving them, etc. At the same time, I don’t think possible future people currently exist outside of any belief that I have in their future existence.
As I have said several times, I do not think people have to actually exist now in order to have moral status. They can exist only in the future and still have moral status now, to me, provided that I believe they will exist.
An entity's having moral status is not the only reason to be concerned about its (hypothetical) well-being. For me, moral status is not the only measure of the concern a person can have for, or about, a possible future entity.
I take some actions using a heuristic: sometimes I act just because I don't like being incorrect and then regretting it, hedging in advance against possible consequences, while going on believing I'm correct in the meantime. If I thought a future entity could exist, and that it would have moral status if it did (meaning I would care that it's there), then I might still act on its behalf now, just in case it comes into being, even though I don't believe that it will.
Yes, I could act on behalf of people who might exist in the far future, despite not believing that they will exist. To a longtermist, it would seem as though I heavily discounted the moral status of those future people, but what they would really be seeing is how much effort I feel like putting into my heuristic (of avoiding regrettable consequences I could have prevented) at the time.
The heuristic is not based on a rational argument for possible future people’s moral status. However, I do use it to choose all kinds of actions where I think that if I were wrong and didn’t take action, it would matter to me. To convince me of possible future people’s moral status, all I would really need to think is that those people actually will exist. I believe people will be born in the next 60-70 years. They have moral status to me now.
In general, I care about people. At the same time, I qualify the level of care I feel for others based on my thoughts about their character or behavior.
If you are wondering whether I can uncover my beliefs about people in the future, people who will exist, the answer is yes, I can. Those people have moral status. There will be large numbers of people born over the next 30 years, for example, and all of those people have moral status in my thinking.
I can qualify assertions about the future as contingent, and then talk about counterfactuals to what I believe. For example, contingent on people doing a better job of taking care of our only home world and of ourselves, Earth could be home to humans X centuries from now.
I still haven’t read MacAskill’s book, and hope to get to that soon. Should we hold off further conversation until then?
Yes, I think it would be best to hold off. I think you’ll find MacAskill addresses most of your concerns in his book.