Thanks for this clear write-up in an important discussion :)
I’m not sure where exactly my own views lie, but let me engage with some of your points in the hope of clarifying my own views (and hopefully also helping you or other readers).
You say that you care more about people’s preferences than about total wellbeing, and that it would change your mind if it turned out that people today prefer longtermist causes.
What do you think about the preferences of future people? You seem to take the “rather make people happy than make happy people” view on population ethics, but future people’s preferences extend beyond their preference to exist. Since you also aren’t interested in a world where trillions of people watch Netflix all day, I take it that you don’t consider their preferences that important.
That said, you clearly do care about the shape of humanity’s future: whether people have freedom, whether they suffer, whether they are morally righteous, and so on. In fact, you seem to be pretty pessimistic about humanity’s future in those respects. It also seems that you aren’t interested in transhumanist futures, at least not as they are usually depicted.
Some thoughts on that below. But first, please let me know if (and where) I was off in any of the above. Sorry if I’ve misinterpreted your views.
I think that the length of the long-term future might be a strong double crux here. If you expect the future to be mostly devoid of value, or not many orders of magnitude more valuable than the near future, then I’d find it very hard to justify working on longtermist causes (mostly due to tractability). Instead of addressing that, I’ll respond to your other points conditional on there being a likely long-term future with lots of valuable life.
I feel some uneasiness about not weighting future people’s preferences roughly equally with those of people alive today. The way I feel about it is somewhat like child-rearing: I’d want some balance between directing my children to become “better people” and giving them the freedom to make their own choices and binge on Netflix. Furthermore, I can already predict many of their preferences and prepare for them (say, save money or buy an apartment in a child-friendly area). Another analogy is colonialism, where one entity acts to shape the future of another (weaker) entity. Overall, I feel that we have a lot of responsibility towards future people and should take care not to impose our own worldview too much.
Very relevant here is the question of whether moral growth is possible (or even expected). I’m not sure of my own views, but I definitely think that advancing moral progress could be a very important cause.
I think that some sort of transhumanist future is inevitable. It’s hard for me to imagine economic and intellectual progress stopping, or slowing drastically forever, without a major catastrophe, and it’s hard for me to imagine non-transhumanist futures with consistent exponential growth. Holden Karnofsky makes this case here in his recent The Most Important Century series.
Now, since you seem to disvalue transhumanist futures, I think this might be where our opinions differ the most, but also where they are most malleable. I can imagine many potential futures in which sentient beings live in abundance and have meaningful lives. I don’t think that paperclip maximizers and ruthless dictatorships are the most likely futures (although I do think these kinds of futures are important risks). For one thing, our values aren’t that weird. Beyond that, a likely scenario is one of gradual moral change rather than a lock-in to some malign or random set of values. I think some discussions of utopias are very relevant here, though they may be misleading. This is something I want to think more about, as I’m easily biased into believing weird futuristic scenarios.
> You say that you care more about people’s preferences than about total wellbeing, and that it would change your mind if it turned out that people today prefer longtermist causes.
>
> What do you think about the preferences of future people? You seem to take the “rather make people happy than make happy people” view on population ethics, but future people’s preferences extend beyond their preference to exist. Since you also aren’t interested in a world where trillions of people watch Netflix all day, I take it that you don’t consider their preferences that important.
What do you mean by this?
OP said, “I also care about people’s wellbeing regardless of when it happens.” Are you interpreting this concern about future people’s wellbeing as not including concern about their preferences? I think the bit about a Netflix world is consistent with caring about future people’s preferences contingent on future people existing. If we accept this kind of view in population ethics, we don’t have welfare-related reasons to ensure a future for humanity. But still, we might have quasi-aesthetic desires to create the sort of future that we find appealing. I think OP might just be saying that they lack such quasi-aesthetic desires.
(As an aside, I suspect that quasi-aesthetic desires motivate at least some of the focus on x-risks. We would expect that people who find futurology interesting would want the world to continue, even if they were indifferent to welfare-related reasons. I think this is basically what motivates a lot of environmentalism. People have a quasi-aesthetic desire for nature, purity, etc., so they care about the environment even if they never ground this in the effects of the environment on conscious beings.)
Perhaps you are referring to the value of creating and satisfying these future people’s preferences? If this is what you meant, a standard line for preference utilitarians is that preferences only matter once they are created. So the preferences of future people only matter contingent on the existence of these people (and their preferences).
There are several ways to motivate this, one of which is the following: would it be a good thing for me to create in you entirely new preferences just so I can satisfy them? We might think not.
This idea is captured in Singer’s Practical Ethics (from back when he espoused preference utilitarianism):
> The creation of preferences which we then satisfy gains us nothing. We can think of the creation of the unsatisfied preferences as putting a debit in the moral ledger which satisfying them merely cancels out… Preference Utilitarians have grounds for seeking to satisfy their wishes, but they cannot say that the universe would have been a worse place if we had never come into existence at all.
Good points, thanks :) I agree with everything here.
One way to think about how we impact the future is to ask how we would want to construct it, assuming we had direct control over it. I think this framing lends more support to the points you make, and it is where population ethics feels much murkier to me.
However, there are some things we can put some credence on future people valuing. For example, I think it’s more likely than not that future people will value their own welfare. So while this isn’t an argument for preventing x-risk (as that runs into the same population ethics problems), it is still an argument for other types of possible longtermist interventions, and it definitely points at where a potentially enormous amount of value lies. For example, I expect working on moral circle expansion to be very important from this perspective (although I’m not sure how promising interventions there actually are).
Regarding quasi-aesthetic desires, I agree and think this is very important to understand further. Personally, I’m confused as to whether I should value these kinds of desires (even at the expense of welfarist considerations), or whether I should treat them as a bias to overcome. As you say, I also suspect this lies behind some of the differing stances on cause prioritization.