Curious why you think this first part? Seems plausible but not obvious to me.
I think, for example, it's silly to create more people just so that we can instantiate autonomy/freedom in more people, and I doubt many people think of autonomy/freedom this way. I think the same is true for truth/discovery (and my own example of justice). I wouldn't be surprised if it were fairly common for people to want more people to be born for the sake of having more love or beauty in the world, although I still think it's more natural to think of these things as only mattering conditional on existence, not as a reason to bring them into existence (compared to non-existence, not necessarily compared to another person being born, if we give up the independence of irrelevant alternatives or transitivity).
I also think a view of preference satisfaction that assigns positive value to the creation and satisfaction of new preferences is perverse in a way, since it allows you to ignore a person's existing preferences if you can create and satisfy a sufficiently strong preference in them, even against their wishes.
I have trouble seeing how this is a meaningful claim. (Maybe it's technically right if we assume that any claim about the elements of an empty set is true, but then it's also true that, in an empty future, everyone is oppressed and miserable. So non-empty flourishing futures remain the only futures in which there is flourishing without misery.)
Sorry, I should have been more explicit. You wrote "In the absence of a long, flourishing future, a wide range of values (not just happiness) would go for a very long time unfulfilled", but we also have values that would be frustrated for a very long time if we don't go extinct, even in a future that looks mostly utopian. I also think it's likely the future will contain misery.
More people find extinction uniquely bad when [...] they are explicitly prompted to consider long-term consequences of the catastrophes. [...] Finally, we find that (d) laypeople, in line with prominent philosophical arguments, think that the quality of the future is relevant: they do find extinction uniquely bad when this means forgoing a utopian future.
That's fair. From the paper:
(Recall that the first difference was the difference between no catastrophe and a catastrophe killing 80%, and the second difference the difference between a catastrophe killing 80% and a catastrophe killing 100%.) We therefore asked participants who gave the expected ranking (but not the other participants) which difference they judged to be greater. We found that most people did not find extinction uniquely bad: only a relatively small minority (23.47%, 50/213 participants) judged the second difference to be greater than the first difference.
It is worth noting that this still doesn't tell us how much greater the difference between total extinction and a utopian future is than the difference between an 80% loss of life and a utopian future. Furthermore, people are being asked to assume the future will be utopian ("a future which is better than today in every conceivable way. There are no longer any wars, any crimes, or any people experiencing depression or sadness. Human suffering is massively reduced, and people are much happier than they are today."), which we may have reason to doubt.
When they were just asked to consider the very long-term consequences, only about 50% of the UK sample thought extinction was uniquely bad, and under 40% of the US sample did. This is the salience condition:
When you do so, please remember to consider the long-term consequences each scenario will have for humanity. If humanity does not go extinct, it could go on to a long future. This is true even if many (but not all) humans die in a catastrophe, since that leaves open the possibility of recovery. However, if humanity goes extinct (if 100% are killed), there will be no future for humanity.
They were also not asked their views on futures that could be worse than now for the average person (or moral patients generally).
Fair points. Your first paragraph seems like a good reason for me to take back the example of freedom/autonomy, although I think the other examples remain relevant, at least for nontrivial minority views. (I imagine, for example, that many people wouldn't be too concerned about adding more people to a loving future, but they would be sad about a future having no love at all, e.g. due to extinction.)
(Maybe there's some asymmetry in people's views toward autonomy? I share your intuition that most people would see it as silly to create people so they can have autonomy. But I also imagine that many people would see extinction as an affront to the autonomy that future people otherwise would have had, since extinction would be choosing for them that their lives aren't worthwhile.)
only about 50% in the UK sample thought extinction was uniquely bad
This seems like more than enough to support the claim that a wide variety of groups disvalue extinction, on (some) reflection.
I think you're generally right that a significant fraction of non-utilitarian views wouldn't be extremely concerned by extinction, especially under pessimistic empirical assumptions about the future. (I'd be more hesitant to say that many would see it as an actively good thing, at least since many common views seem like they'd strongly disapprove of the harm that would be involved in many plausible extinction scenarios.) So I'd weaken my original claim to something like: a significant fraction of non-utilitarian views would see extinction as very bad, especially under somewhat optimistic assumptions about the future (much weaker assumptions than e.g. "humanity is inherently super awesome").