If the true/best/my subjective axiology is linear in resources (e.g. total utilitarianism), lots of 'good' futures will probably capture only a very small fraction of how good the optimal future could have been. Conversely, if axiology is not linear in resources (e.g. intuitive morality, average utilitarianism), good futures seem more likely to be nearly optimal. Therefore, whether axiology is linear in resources is one of the cruxes for the debate week question.
Discuss.
The easiest way, in my view, to make a near-optimal future very likely, conditional on non-extinction, is if value is bounded above.
There's an argument that this is the common-sense view. E.g. consider:
Common-sense Eutopia: In the future, there is a very large population with very high well-being; those people are able to do almost anything they want as long as they don't harm others. They have complete scientific and technological understanding. War and conflict are things of the past. Environmental destruction has been wholly reversed; Earth is now a natural paradise. However, society is limited only to the solar system, and will come to an end once the Sun has exited its red giant phase, in about five billion years.
Does this seem to capture less than one 10^22nd of all possible value? (Because there are ~10^22 affectable stars, civilisation could be over 10^22 times as big.) On my common-sense moral intuitions, no.
Making this argument stronger: normally, quantities of value are defined in terms of the value of risky gambles. So what it means to say that Common-sense Eutopia is less than one 10^22nd of all possible value is that a gamble with a one in 10^22 chance of producing an ideal-society-across-all-the-stars, and a (1 - 1/10^22) chance of near-term extinction, is better than producing Common-sense Eutopia for certain.
But that seems wild. Of all the issues facing classical utilitarianism, this seems the most problematic to me.
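To spell out the arithmetic behind that comparison, here is a minimal sketch of how a linear-in-resources, expected-value axiology scores the gamble. The variable names, units, and the assumption that extinction is worth zero are mine, purely for illustration.

```python
from fractions import Fraction

# Minimal sketch (illustrative numbers only) of the gamble comparison above,
# assuming value is linear in resources and that extinction is worth zero.

V_EUTOPIA = Fraction(1)            # value of Common-sense Eutopia (one solar system)
N_STARS = 10**22                   # ~number of affectable stars, as in the post
V_IDEAL = N_STARS * V_EUTOPIA      # linear view: the ideal future is ~10^22x as good
V_EXTINCTION = Fraction(0)         # assumed value of near-term extinction

p_ideal = Fraction(1, N_STARS)     # one-in-10^22 chance of the ideal future
ev_gamble = p_ideal * V_IDEAL + (1 - p_ideal) * V_EXTINCTION

print(ev_gamble)                   # 1: the gamble exactly ties with certain Eutopia,
                                   # so if Eutopia is worth even slightly less than a
                                   # 10^22nd of the ideal, the gamble comes out ahead.
```

On a bounded or diminishing view, by contrast, V_IDEAL would be far less than N_STARS * V_EUTOPIA, and the certain option wins easily.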
"The easiest way, in my view, to make a near-optimal future very likely, conditional on non-extinction, is if value is bounded above."
Yes.
"Of all the issues facing classical utilitarianism, this seems the most problematic to me."
So you doubt fanaticism, the view that a tiny chance of an astronomically good outcome can be more valuable than a certainty of a decent outcome. What about in the case of certainty? Do you doubt the utilitarian's objection to Common-sense Eutopia? This kind of aggregation seems important for the case for longtermism. (See the pages of paper dolls in What We Owe the Future.)
Yeah, I think the issue (for me) is not just about fanaticism. Offer me Common-sense Eutopia for certain, or a gamble with a 90% chance of extinction and a 10% chance of a Common-sense Eutopia 20 times the size, and it seems problematic to choose the gamble.
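For concreteness, the same kind of expected-value arithmetic (with units I am stipulating, not anything from the comment) is what makes the linear view prefer that gamble:

```python
# Illustrative only: 90% extinction vs 10% chance of a Eutopia 20x the size,
# assuming value is linear in size and extinction is worth zero.
v_certain_eutopia = 1.0
ev_gamble = 0.9 * 0.0 + 0.1 * (20 * v_certain_eutopia)   # = 2.0
print(ev_gamble > v_certain_eutopia)                     # True: the linear view takes the gamble
```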
(To be clear: other views, on which value is diminishing, are also really problematic. We're in impossibility-theorem territory, and I see the whole thing as a mess; I don't have a positive view I'm excited about.)
Re WWOTF: You can (and should) think that there's a huge amount of value at stake in the future, and even think that there's much more value at stake in the future than there is in the present century, without thinking that value is linear in the number of happy people. It diminishes the case a bit, but nowhere near enough for longtermism to not go through.
As you say, you can block the obligation to gamble and risk Common-sense Eutopia for something better in different ways/for different reasons.
For me, Common-sense Eutopia sounds pretty appealing because it ensures continuity for existing people. Considering many people don't have particularly resource-hungry life goals, Common-sense Eutopia would score pretty high on a perspective where it matters what existing people want for the future of themselves and their loved ones.
Even if we say that other considerations besides existing people also matter morally, we may not want those other considerations to just totally swamp/outweigh how good Common-sense Eutopia is from the perspective of existing people.
"Re WWOTF: You can (and should) think that there's a huge amount of value at stake in the future, and even think that there's much more value at stake in the future than there is in the present century, without thinking that value is linear in the number of happy people. It diminishes the case a bit, but nowhere near enough for longtermism to not go through."
Sure, you could have a view that it's great to have 10^12 people but no more than that, but that seems like a really weird thing to have written in the stars. Or that all that matters is creating the Machine God, so we haven't attained any value yet. But that doesn't seem great.
Do you have a gloss on a kind of view that threads the needle nicely without being too crazy, even if it doesn't ultimately withstand scrutiny?
How much do you think having lots of mostly or entirely identical future lives differs in value from having vastly different positive lives? (Because that would create a reasonable view on which a more limited number of future people can saturate the possible future value.)
Bostrom discusses things like this in Deep Utopia, under the label of 'interestingness' (where even if we edit post-humans to never be subjectively bored, maybe they run out of 'objectively interesting' things to do, and this leads to value not being nearly as high as it could otherwise be). I don't think he takes a stance on whether or how much interestingness actually matters, but I am only ~halfway through the book so far.
This seems almost exactly like the repugnant conclusion. Taken to extremes, intuition disagrees with logic. When that happens, it's usually so much the worse for intuition.
I'm not a utilitarian, but I find the repugnant conclusion impossible to reject if you are.
If you want to choose what is good for everyone, there's little argument about what that is in those cases.
And if we're talking about what's good for everyone, that's got to be a linear sum of what's good for each someone. If the sum is nonlinear, who exactly is worth less than the others? This leads to the repugnant conclusion and your conclusion here.
Other definitions of 'good for everyone' seem to always mean 'what I idiosyncratically prefer for everyone else but me'.
There are funky axiologies where value is superlinear in resources: basically, any moral worldview that embraces holism. If you think that the whole, arranged a particular way, is more valuable than the parts, then you will be even more precious about how precisely the world should be arranged than the total utilitarian.
Since others have discussed the implications, I want to push a bit on the assumptions.
I worry that non-linear axiologies[1] end up endorsing egoism, helping only those whose moral patienthood you are most confident in, or otherwise prioritizing them far too much over those of less certain moral patienthood. See Oesterheld (2017) and Tarsney (2023).
(I also think average utilitarianism in particular is pretty bad, because it would imply that if the average welfare is negative (even torturous), adding bad lives can be good, as long as they're even slightly less bad than average.)
Maybe you can get around this with non-aggregative or partially aggregative views. EDIT: Or, if you're worried about fanaticism, difference-making views.
[1] Assuming completeness, transitivity, and the independence of irrelevant alternatives, and that each marginal moral patient matters less.
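A quick numeric illustration of the parenthetical point about average utilitarianism (the numbers are mine): adding a badly-off life that is slightly less bad than a negative average raises the average, so average utilitarianism counts it as an improvement.

```python
# Illustrative only: a population of 100 people at welfare -10 each.
population, avg_welfare = 100, -10.0
new_life_welfare = -9.0                      # a bad, but slightly-less-bad-than-average, life

new_avg = (population * avg_welfare + new_life_welfare) / (population + 1)
print(new_avg)                               # ~-9.99: the average went up, so average
                                             # utilitarianism says adding this life was good
```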
I also think average utilitarianism doesn't seem very plausible. I was just using it as an example of a non-linear theory (though, as Will notes, if any individual's wellbeing is linear in resources, so is the world as a whole, just with a smaller derivative).
Unpacking this: on linear-in-resources (LIR) views, we could lose out on most value if we (i) capture only a small fraction of the resources that we could have, and/or (ii) use resources less efficiently than we could have. (Where, on a LIR view, there is some use of resources that has the highest value per unit of resources, and everything should be used in that way.)
Plausibly at least, only a tiny % of possible ways of using resources come close to the value produced by the highest value-per-unit use. So, the thinking goes, merely avoiding extinction isn't yet getting you close to a near-best future; instead you really need to get from a non-extinction future to that optimally-used-resources future, and if you don't then you lose out on almost all value.
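A rough sketch of how those two shortfalls compound on a LIR view (both numbers below are made up purely for illustration):

```python
# Hypothetical illustration: on a linear-in-resources view, the fraction of
# attainable value realised is (share of resources captured) x (efficiency of use
# relative to the best possible use per unit of resources).

fraction_of_resources_captured = 0.01    # (i) capture only 1% of affectable resources
relative_efficiency_of_use = 0.001       # (ii) value per unit is 0.1% of the optimum

fraction_of_optimal_value = fraction_of_resources_captured * relative_efficiency_of_use
print(fraction_of_optimal_value)         # 1e-05: non-extinction alone can still miss
                                         # ~99.999% of the attainable value
```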
Average utilitarianism is approximately linear in resources as long as at least one possible individual's wellbeing is linear in resources.
I.e., we create Mr Utility Monster, who has wellbeing that is linear in resources, and give all resources to benefiting Mr Monster. Total value is the same as it would be under total utilitarianism, just divided by a constant (namely, the number of people who've ever lived).
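A minimal sketch of the Mr Utility Monster point; the functional form and the assumption that everyone else's wellbeing is zero are mine, for illustration only.

```python
# With a fixed population N and one individual (the "utility monster") whose
# wellbeing is k * r in resources r, average utilitarian value is still linear
# in r, just scaled down by the constant 1/N.

def total_value(resources, k=1.0):
    return k * resources                          # monster gets everything; others at ~0

def average_value(resources, n_people=10**11, k=1.0):
    return total_value(resources, k) / n_people   # same function of r, divided by a constant

for r in (1.0, 2.0, 4.0):
    print(r, total_value(r), average_value(r))    # both columns double as r doubles
```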
I wasn't sure if it's really useful to think about value being linear in resources on some views. If you have a fixed population and imagine increasing the resources they have available, I assume that the value of the outcome is a strictly concave function of the resource base. Doubling the population might double the value of the outcome, although it's not clear that this constitutes a doubling of resources. And why should it matter if the relationship between value and resources is strictly concave? Isn't the key question something like whether there are potentially realizable futures that are many orders of magnitude more valuable than the default or where we are now? Answering yes seems compatible with thinking that the function relating resources to value is strictly concave and asymptotes, so long as it asymptotes somewhere suitably high up on the scale of value.
"If you have a fixed population and imagine increasing the resources they have available, I assume that the value of the outcome is a strictly concave function of the resource base."
Certainly given current levels of technology, but perhaps not given future technology (e.g. indefinite life-extension technology), at least if individual wellbeing is proportional to number of happy years lived.
"Doubling the population might double the value of the outcome, although it's not clear that this constitutes a doubling of resources."
I was thinking you'd need twice as many resources to have twice as many people?
"And why should it matter if the relationship between value and resources is strictly concave? Isn't the key question something like whether there are potentially realizable futures that are many orders of magnitude more valuable than the default or where we are now? Answering yes seems compatible with thinking that the function relating resources to value is strictly concave and asymptotes, so long as it asymptotes somewhere suitably high up on the scale of value."
Yes, in principle, but I think that if you have the upper-bound view, then you hold it on the basis of common-sense intuition. But if so, then I think the upper bound is probably really low by cosmic standards: if we already have a Common-sense Eutopia within the solar system, I think we'd be more than 50% of the way from 0 to the upper bound.
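To illustrate that point with a concrete (and entirely made-up) bounded value function: if the scale at which value saturates is low by cosmic standards, a single solar-system Eutopia already gets most of the way to the bound. The functional form, V_MAX and SCALE below are my assumptions, not anything from the thread.

```python
import math

# Illustrative bounded, strictly concave value function:
#   V(r) = V_MAX * (1 - exp(-r / SCALE)),  with r in "solar systems of resources".

V_MAX = 1.0      # upper bound on value
SCALE = 1.0      # resources at which ~63% of the bound is already reached

def value(resources):
    return V_MAX * (1.0 - math.exp(-resources / SCALE))

print(value(1.0) / V_MAX)     # ~0.63: Common-sense Eutopia is most of the way there
print(value(1e22) / V_MAX)    # ~1.0: 10^22x more resources adds relatively little
```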
Another reason you might have an upper bound is that the axioms of expected utility theory require your utility function to be bounded given the most natural generalization to the case of countably infinite gambles.
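A standard St. Petersburg-style illustration of why (my example, not from the comment): if utility is unbounded, one can construct a countable gamble whose expected utility diverges, which is what the boundedness requirement rules out.

```python
# Outcome n (for n = 1, 2, 3, ...) has probability 2**-n and utility 2**n.
# Each term contributes 1 to the expected utility, so partial sums grow without bound.

def partial_expected_utility(n_terms):
    return sum((0.5 ** n) * (2 ** n) for n in range(1, n_terms + 1))

for n in (10, 100, 1000):
    print(n, partial_expected_utility(n))   # 10.0, 100.0, 1000.0, ... diverging
```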
Agreed!