You say that you care more about people's preferences than about total wellbeing, and that it would change your mind if it turned out that people today prefer longtermist causes.
What do you think about the preferences of future people? You seem to take the "make people happy rather than make happy people" view of population ethics, but future people's preferences extend beyond their preference to exist. Since you also aren't interested in a world where trillions of people watch Netflix all day, I take it you don't consider their preferences that important.
What do you mean by this?
OP said, “I also care about people’s wellbeing regardless of when it happens.” Are you interpreting this concern about future people’s wellbeing as not including concern about their preferences? I think the bit about a Netflix world is consistent with caring about future people’s preferences contingent on future people existing. If we accept this kind of view in population ethics, we don’t have welfare-related reasons to ensure a future for humanity. But still, we might have quasi-aesthetic desires to create the sort of future that we find appealing. I think OP might just be saying that they lack such quasi-aesthetic desires.
(As an aside, I suspect that quasi-aesthetic desires motivate at least some of the focus on x-risks. We would expect that people who find futurology interesting would want the world to continue, even if they were indifferent to welfare-related reasons. I think this is basically what motivates a lot of environmentalism. People have a quasi-aesthetic desire for nature, purity, etc., so they care about the environment even if they never ground this in the effects of the environment on conscious beings.)
Perhaps you are referring to the value of creating and satisfying these future people’s preferences? If this is what you meant, a standard line for preference utilitarians is that preferences only matter once they are created. So the preferences of future people only matter contingent on the existence of these people (and their preferences).
There are several ways to motivate this, one of which is the following: would it be a good thing for me to create in you entirely new preferences just so I can satisfy them? We might think not.
This idea is captured in Singer’s Practical Ethics (from back when he espoused preference utilitarianism):
The creation of preferences which we then satisfy gains us nothing. We can think of the creation of the unsatisfied preferences as putting a debit in the moral ledger which satisfying them merely cancels out… Preference Utilitarians have grounds for seeking to satisfy their wishes, but they cannot say that the universe would have been a worse place if we had never come into existence at all.
Good points, thanks :) I agree with everything here.
One way to think about how we impact the future is to ask how we would want to construct it, assuming we had direct control over it. I think this framing lends more support to the points you make, and it's where population ethics feels much murkier to me.
However, there are some things we can put some credence on future people valuing. For example, I think it's more likely than not that future people will value their own welfare. While that isn't an argument for preventing x-risk (which runs into the same population-ethics problems), it is an argument for other types of possible longtermist interventions, and it definitely points at where a potentially enormous amount of value lies. For instance, I expect working on moral circle expansion to be very important from this perspective (although I'm not sure how promising the actual interventions there are).
Regarding quasi-aesthetic desires, I agree, and I think this is very important to understand further. Personally, I'm confused about whether I should value these kinds of desires (even at the expense of something welfare-based), or whether I should treat them as a bias to overcome. As you say, I also suspect this might be behind some of the differing stances on cause prioritization.