> In my understanding, [a confident focus on extinction risk] relies crucially on the assumption that the utility of the future cannot have exponential growth in the long term
I wanted to say thanks for spelling that out. It seems that this implicitly underlies some important disagreements. By contrast, I think this addition is somewhat counterproductive:
> and will instead essentially reach a plateau.
The idea of a plateau brings to mind images of sub-linear growth, but all that is required is sub-exponential growth, a much weaker claim. I think this will cause confusion.
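To illustrate the gap between the two claims with a toy curve (purely hypothetical, chosen only for its shape):

\[
U(t) = t^{100} \;\;\text{satisfies}\;\; \lim_{t\to\infty} U(t) = \infty \;\;\text{(no plateau), yet}\;\; \frac{U(t)}{e^{gt}} \to 0 \;\text{ for every } g > 0,
\]

so utility could grow without bound forever while still being sub-exponential, which (as I read it) is all the assumption above needs.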
I also appreciated that the piece is consistently accurate. Several times while writing this comment, I considered writing some response, then saw that the piece already had a caveat for exactly the problem I was going to point out, or a footnote explaining what I had been confused about.
A particular kind of accuracy is representing the views of others well. I don’t think the piece is always as charitable as it could be, but details like footnote 15 make it much easier to understand what exactly other people’s views are. Also, the simple absence of gross mischaracterisations of other people’s views made this piece much more useful to me than many critiques.
Here are a few thoughts on how the model or framing could be more useful:
‘Growth rate’
The concept of a ‘growth rate’ seems useful in many contexts. However, applying the concept to a long-run process locks the model of the process into the framework of an exponential curve, because only exponential curves have a meaningful long-run growth rate (as defined in this piece). The position that utility will grow like an exponential is just one of many possibilities. As such, it seems preferable to simply talk directly in terms of the shape of long-run utility.
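For concreteness, here is the notion as I understand it (the notation is mine, not the piece's):

\[
g(t) := \frac{U'(t)}{U(t)}, \qquad g_\infty := \lim_{t\to\infty} g(t).
\]

If \(g_\infty\) exists and is positive, then \(U(t) = e^{(g_\infty + o(1))\,t}\), so the exponential framework is already baked in; meanwhile every polynomial \(U(t) = t^k\) has \(g_\infty = 0\) regardless of \(k\), so the long-run growth rate distinguishes exponentials from one another but collapses all sub-exponential shapes together.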
Model decomposition
When discussing the shape of long-run utility, it might be easier to decompose total utility into population size and utility per capita. In particular, the ‘utility = log(GDP)’ model is really ‘in a perfectly equal world, utility per capita = log(GDP per capita)’; that is, in a perfectly equal world, utility = population size × log(GDP per capita).[1]
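In symbols, writing \(N\) for population size and \(y\) for GDP per capita (under the perfectly-equal-world assumption above), the claim is

\[
U = N \cdot \log y, \qquad\text{not}\qquad U = \log(N \cdot y).
\]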
For example, this resolves the objection that
> if we duplicate our world and create an identical copy of it, I would find it bizarre if our utility function only increases by a constant amount, and find it more reasonable if it is multiplied by some factor.
The proposed duplication doubles population size while keeping utility per capita fixed, so it is a doubling[2] of utility in a model of this form, as expected.
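Sketching the arithmetic in that model: duplication sends \(N \mapsto 2N\) while leaving \(y\) unchanged, so

\[
U_{\text{after}} = (2N)\,\log y = 2\,\bigl(N \log y\bigr) = 2\,U_{\text{before}}.
\]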
More broadly, I suspect that the feasibility of ways to gain at-least-exponentially greater resources over time (analogous to population size, e.g. baby-universes, reversible computation[3]) and ways to use those resources at-least-exponentially better (analogous to utility per capita, no known proposals?) might be debated quite separately.
How things relate to utility
Where I disagreed or thought the piece was less clear, it was usually because something seemed at risk of being confused for utility. For example, explosive growth in ‘the space of possible patterns of matter we can potentially explore’ is used as an argument for possible greater-than-exponential growth in utility, but the connection between these two things seems tenuous. Sharpening the argument there could make it more convincing.
More broadly, any concrete proposal for how utility per capita might rise exponentially over very long timescales would make it much easier to take the idea seriously. For example, if the piece on reversible computation by Christiano that Max Daniel links to turns out to be accurate, that naively seems more compelling.
Switching costs
My take is that these parts don’t get at the heart of any disagreements.
It already seems fairly common that, when faced with two approaches which look optimal under different answers to intractable questions, Effective Altruism-related teams and communities take both approaches simultaneously. For example, this is ongoing at the level of cause prioritisation and in how the AI alignment community works on multiple agendas simultaneously. It seems that the true disagreements are mostly around whether or not growth interventions are sufficiently plausible to add to the portfolio, rather than whether diversification can be valuable.
The piece also ties some concerns about community health to switching costs. I particularly agree that we would not want to lose informed critics. However, as above, I don’t think this is a real point of disagreement. The piece discusses together the risk of being ‘surrounded by people who think that what I intend to do is of negligible importance’ and the risk of people ‘being reminded that their work is of negligible importance’. This conflates what people believe with whether they treat those around them with respect, which I think are largely independent problems. It seems fairly clear that we should attempt to form accurate beliefs about what is best, and simultaneously be kind and supportive to other people trying to help others using evidence and reason.
---
[1] The standard log model is surely wrong, but the point stands with any decomposition into population size multiplied by a function of GDP per capita.
[2] I think the part about creating identical copies is not the main point of the thought experiment, and it would be better separated out (by stipulating that a very similar but not identical population is created). That said, if we are actually creating identical people, I guess we can handle whatever extra moral relevance we think this creates through the population factor.
[3] I guess it might be worth making super clear that these are hypothetical examples rather than things for which I have views on whether they are real.
Thanks for your detailed and kind comments! It’s true that calling this a “plateau” is not very accurate. It was my attempt to make the reader’s life a bit easier by using a notion that is relatively easy to grasp in the main text (with the mathematical details in a footnote for those who want more precision). Regarding the growth rate: mathematically, a function is fully described by its growth rate together with an initial condition, and the crux here is whether the growth rate goes to zero relatively quickly, so it seems like a useful concept to me.
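To spell that out (a sketch, assuming utility \(U\) is positive, differentiable, and non-decreasing; the notation is mine):

\[
U(t) = U(0)\,\exp\!\left(\int_0^t g(s)\,ds\right), \qquad g(t) := \frac{U'(t)}{U(t)},
\]

so \(U\) stays bounded, i.e. ‘plateaus’ in the loose sense, exactly when \(\int_0^\infty g(s)\,ds\) is finite, which requires \(g(t)\) to go to zero (roughly speaking, faster than \(1/t\)).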
(When you refer to footnote 15, that could make sense, but I wonder whether you meant footnote 5 instead.)
I agree with everything else you say. I may be overly worried about our community becoming more and more focused on one particular cause area, possibly because of a handful of disappointing personal experiences. One of the main goals of this post was to make people more aware that current recommendations rest in an important way on a certain belief about the trajectory of the far future, and maybe I should have focused on that goal alone instead of trying to do several things at once and not doing them all very well :-)
Thanks!