If the true/best/my subjective axiology is linear in resources (e.g. total utilitarianism), lots of ‘good’ futures will probably capture a very small fraction of how good the optimal future could have been. Conversely, if axiology is not linear in resources (e.g. intuitive morality, average utilitarianism), good futures seem more likely to be nearly optimal. Therefore whether axiology is linear in resources is one of the cruxes for the debate week question.

Discuss.
The easiest way, in my view, to make a near-optimal future very likely, conditional on non-extinction, is if value is bounded above.
There’s an argument that this is the common sense view. E.g. consider:
Common-sense Eutopia: In the future, there is a very large population with very high well-being; those people are able to do almost anything they want as long as they don’t harm others. They have complete scientific and technological understanding. War and conflict are things of the past. Environmental destruction has been wholly reversed; Earth is now a natural paradise. However, society is limited only to the solar system, and will come to an end once the Sun has exited its red giant phase, in about five billion years.
Does this seem to capture less than one part in 10^22 of all possible value? (Because there are ~10^22 affectable stars, so civilisation could be over 10^22 times as big.) On my common-sense moral intuitions, no.
Making this argument stronger: Normally, quantities of value are defined in terms of the value of risky gambles. So what it means to say that Common-sense Eutopia captures less than one part in 10^22 of all possible value is that a gamble with a one in 10^22 chance of producing an ideal society across all the stars, and a (1 − 1/10^22) chance of near-term extinction, is better than producing Common-sense Eutopia for certain.
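To spell out the arithmetic (writing \(V_{\mathrm{CSE}}\) for the value of Common-sense Eutopia and \(V_{\max}\) for the value of the ideal society across all the stars; these labels are just illustrative, and extinction is set to 0 for simplicity), the gamble’s expected value is
\[
\mathrm{EV}(\text{gamble}) = \frac{1}{10^{22}}\,V_{\max} + \left(1 - \frac{1}{10^{22}}\right)\cdot 0 = \frac{V_{\max}}{10^{22}},
\]
so the gamble beats Common-sense Eutopia for certain exactly when \(V_{\max} > 10^{22}\,V_{\mathrm{CSE}}\), i.e. exactly when Common-sense Eutopia captures less than one part in 10^22 of all possible value.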
But that seems wild. Of all the issues facing classical utilitarianism, this seems the most problematic to me.
“The easiest way, in my view, to make a near-optimal future very likely, conditional on non-extinction, is if value is bounded above.”
Yes.
“Of all the issues facing classical utilitarianism, this seems the most problematic to me.”
So you doubt fanaticism, the view that a tiny chance of an astronomically good outcome can be more valuable than a certainty of a decent outcome. What about in the case of certainty? Do you doubt the utilitarian’s objection to Common-sense Eutopia? This kind of aggregation seems important for the case for longtermism. (See the pages of paper dolls in What We Owe the Future.)
Yeah, I think the issue (for me) is not just about fanaticism. Give me Common-sense Eutopia or a gamble with a 90% chance of extinction and a 10% chance of Common-sense Eutopia 20 times the size, and it seems problematic to choose the gamble.
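For concreteness, on a view where value is linear in the size of the Eutopia (writing \(V\) for the value of Common-sense Eutopia, an illustrative label, and setting extinction to 0):
\[
\mathrm{EV}(\text{gamble}) = 0.9 \cdot 0 + 0.1 \cdot 20V = 2V > V,
\]
so the linear view has to prefer the gamble, despite the 90% chance of extinction.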
(To be clear: other views, on which value is diminishing, are also really problematic. We’re in impossibility theorem territory, and I see the whole thing as a mess; I don’t have a positive view I’m excited about.)
Re WWOTF: You can (and should) think that there’s huge amounts of value at stake in the future, and even think that there’s much more value at stake in the future than there is in the present century, without thinking that value is linear in number of happy people. It diminishes the case a bit, but nowhere near enough for longtermism to not go through.
Sure, you could have a view on which it’s great to have 10^12 people but no more than that; but that seems like a really weird thing to have written in the stars. Or a view on which all that matters is creating the Machine God, so we haven’t attained any value yet. But that doesn’t seem great either.
Do you have a gloss on a kind of view that threads the needle nicely without being too crazy, even if it doesn’t ultimately withstand scrutiny?
How differently valuable do you think having lots of mostly or entirely identical future lives is, compared with having vastly different positive lives? (Because if identical lives add much less value, that would create a reasonable view on which a more limited number of future people can saturate the possible future value.)
Bostrom discusses things like this in Deep Utopia under the label of ‘interestingness’ (where, even if we edit post-humans to never be subjectively bored, maybe they run out of ‘objectively interesting’ things to do, and this leads to value not being nearly as high as it could otherwise be). I don’t think he takes a stance on whether or how much interestingness actually matters, but I’m only about halfway through the book so far.
As you say, you can block the obligation to gamble (risking Common-sense Eutopia for something better) in different ways and for different reasons.
For me, Common-sense Eutopia sounds pretty appealing because it ensures continuity for existing people. Considering many people don’t have particularly resource-hungry life goals, Common-sense Eutopia would score pretty high on a perspective where it matters what existing people want for the future of themselves and their loved ones.
Even if we say that other considerations besides existing people also matter morally, we may not want those other considerations to just totally swamp/outweigh how good Common-sense Eutopia is from the perspective of existing people.
Unpacking this: on linear-in-resources (LIR) views, we could lose out on most value if we (i) capture only a small fraction of the resources we could have captured, and/or (ii) use resources less efficiently than we could have. (On a LIR view, there is some use of resources that produces the highest value per unit of resources, and everything should be used in that way.)
Plausibly, only a tiny percentage of possible ways of using resources come close, in value per unit of resources, to that best possible use. So, the thinking goes, merely avoiding extinction doesn’t yet get you close to a near-best future; you really need to get from a non-extinction future to that optimally-used-resources future, and if you don’t, you lose out on almost all value.
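One way to write down that decomposition (with \(c\) for the fraction of attainable resources captured and \(e\) for the realized value per unit of resources as a fraction of the best possible value per unit, both illustrative symbols between 0 and 1):
\[
\frac{V_{\text{realized}}}{V_{\text{optimal}}} = c \times e,
\]
so on a LIR view, capturing 1% of attainable resources and using them at 1% of the best possible efficiency, for example, leaves only one ten-thousandth of the attainable value.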
Average utilitarianism is approximately linear in resources as long as at least one possible individual’s wellbeing is linear in resources.
I.e. we create Mr Utility Monster, who has wellbeing that is linear in resources, and give all resources to benefiting Mr Monster. Total value is the same as it would be under total utilitarianism, just divided by a constant (namely, the number of people who’ve ever lived).
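Sketching that (with \(N\) for the number of people who’ve ever lived, \(R\) for total resources, and \(k\) for Mr Monster’s wellbeing per unit of resources; illustrative symbols only):
\[
V_{\text{avg}} = \frac{\sum_i w_i}{N} \approx \frac{kR}{N},
\]
which is linear in \(R\): the same shape as under total utilitarianism, just scaled down by the constant \(N\).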
I wasn’t sure if it’s really useful to think about value being linear in resources on some views. If you have a fixed population and imagine increasing the resources they have available, I assume that the value of the outcome is a strictly concave function of the resource base. Doubling the population might double the value of the outcome, although it’s not clear that this constitutes a doubling of resources. And why should it matter if the relationship between value and resources is strictly concave? Isn’t the key question something like whether there are potentially realizable futures that are many orders of magnitude more valuable than the default or where we are now? Answering yes seems compatible with thinking that the function relating resources to value is strictly concave and asymptotes, so long as it asymptotes somewhere suitably high up on the scale of value.
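As an illustration of that last possibility, a toy value function (purely illustrative, not anything proposed in the discussion) that is strictly concave in resources, bounded above, and still leaves enormous headroom:
\[
V(R) = V_{\max}\left(1 - e^{-R/R_0}\right),
\]
which is strictly concave and asymptotes to \(V_{\max}\). If \(R_0\) is, say, a million times current resources, then at current resource levels \(V(R) \approx V_{\max} R / R_0\), so the asymptote sits roughly six orders of magnitude above where we are now, even though value is bounded.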
“If you have a fixed population and imagine increasing the resources they have available, I assume that the value of the outcome is a strictly concave function of the resource base.”
Certainly given current levels of technology, but perhaps not given future technology (e.g. indefinite life-extension technology), at least if individual wellbeing is proportional to number of happy years lived.
“Doubling the population might double the value of the outcome, although it’s not clear that this constitutes a doubling of resources.”
I was thinking you’d need twice as many resources to have twice as many people?
“And why should it matter if the relationship between value and resources is strictly concave? Isn’t the key question something like whether there are potentially realizable futures that are many orders of magnitude more valuable than the default or where we are now? Answering yes seems compatible with thinking that the function relating resources to value is strictly concave and asymptotes, so long as it asymptotes somewhere suitably high up on the scale of value.”
Yes, in principle; but I think that if you hold the upper-bound view, you do so on the basis of common-sense intuition. And if so, then I think the upper bound is probably really low on cosmic scales: if we already have a Common-sense Eutopia within the solar system, I think we’d be more than 50% of the way from 0 to the upper bound.
Another reason you might have an upper bound is that the axioms of expected utility theory require your utility function to be bounded, given the most natural generalization to the case of countably infinite gambles.

Agreed!
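Roughly, the standard construction behind that bounded-utility result (a sketch, not a full statement of the theorem): if utility were unbounded above, you could pick outcomes \(x_1, x_2, \dots\) with \(u(x_n) \ge 2^n\) and form the countably infinite gamble that yields \(x_n\) with probability \(2^{-n}\). Its expected utility,
\[
\sum_{n=1}^{\infty} 2^{-n}\, u(x_n) \;\ge\; \sum_{n=1}^{\infty} 2^{-n}\, 2^{n} \;=\; \infty,
\]
diverges, and gambles like this generate conflicts with the usual axioms (e.g. continuity and dominance principles) once countably infinite gambles are allowed; bounding the utility function blocks the construction.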
There are funky axiologies where value is superlinear in resources — basically any moral worldview that embraces holism. If you think that the whole, arranged a particular way, is more valuable than the parts, then you will be even more precious about how precisely the world should be arranged than the total utilitarian.
I worry that non-linear axiologies[1] end up endorsing egoism, helping only those whose moral patienthood you are most confident in, or otherwise prioritizing them far too much over those of less certain moral patienthood. See Oesterheld, 2017 and Tarsney, 2023.

[1] Assuming completeness, transitivity, and the independence of irrelevant alternatives.
I also think average utilitarianism doesn’t seem very plausible. I was just using it as an example of a non-linear theory (though, as Will notes, if any possible individual’s wellbeing is linear in resources, then so is value as a whole, just with a smaller derivative).