Yeah, I think the issue (for me) is not just about fanaticism. Offer me a choice between Common-sense Eutopia and a gamble with a 90% chance of extinction and a 10% chance of a Common-sense Eutopia 20 times the size, and it seems problematic to choose the gamble.
(To be clear—other views, on which value is diminishing, are also really problematic. We’re in impossibility theorem territory, and I see the whole thing as a mess; I don’t have a positive view I’m excited about.)
Re WWOTF: You can (and should) think that there are huge amounts of value at stake in the future, and even think that there’s much more value at stake in the future than in the present century, without thinking that value is linear in the number of happy people. Dropping linearity diminishes the case a bit, but nowhere near enough for longtermism not to go through.
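To make the arithmetic behind that gamble explicit (a minimal sketch, assuming value is linear in the number of happy people, extinction counts as zero, and Common-sense Eutopia has value $V$):

$$\mathbb{E}[\text{gamble}] \;=\; 0.9\cdot 0 \;+\; 0.1\cdot 20V \;=\; 2V \;>\; V \;=\; \mathbb{E}[\text{Eutopia}],$$

so an expected-value maximizer with the linear view is committed to taking the gamble, which is exactly the verdict that seems problematic.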
As you say, you can block the obligation to gamble and risk Common-sense Eutopia for something better in different ways/for different reasons.
For me, Common-sense Eutopia sounds pretty appealing because it ensures continuity for existing people. Given that many people don’t have particularly resource-hungry life goals, Common-sense Eutopia would score pretty high on a perspective where what existing people want for their own and their loved ones’ future matters.
Even if we say that other considerations besides existing people also matter morally, we may not want those other considerations to just totally swamp/outweigh how good Common-sense Eutopia is from the perspective of existing people.
Sure, you could have a view on which it’s great to have 10^12 people but no more than that, though that seems like a really weird thing to have written in the stars. Or a view on which all that matters is creating the Machine God, so we haven’t attained any value yet. But that doesn’t seem great either.
Do you have a gloss on a kind of view that threads the needle nicely without being too crazy, even if it doesn’t ultimately withstand scrutiny?
How much do you think having lots of mostly or entirely identical future lives differs in value from having vastly different positive lives? (I ask because that would allow a reasonable view on which a more limited number of future people can saturate the possible future value.)
Bostrom discusses things like this in Deep Utopia, under the label of ‘interestingness’ (where even if we edit post-humans to never be subjectively bored, maybe they run out of ‘objectively interesting’ things to do, and value ends up not being nearly as high as it could otherwise be). I don’t think he takes a stance on whether or how much interestingness actually matters, but I’m only ~halfway through the book so far.
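One way to make the saturation idea in that question concrete (purely illustrative; the functional form and the scale parameter $n_0$ are assumptions, not anything proposed above): let total value be a bounded, increasing function of the number of sufficiently different good lives,

$$V(n) \;=\; V_{\max}\left(1 - e^{-n/n_0}\right),$$

so extra lives always add something, but value approaches a ceiling $V_{\max}$ rather than cutting off sharply at some particular population size like 10^12.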