Advancements
I broadly agree with the upshots you draw, but here are three points that make things a little more complicated:
Continued exponential growth
As you note: (i) if v(.) continues to grow exponentially, then advancements can compete with existential risk reduction; (ii) such continued exponential growth seems very unlikely.
However, there is a nonzero probability that exponential growth in v(.) continues forever, right up to the end point (perhaps even at a very fast rate, like doubling every year). And, if so, the total value of the future would be dramatically greater than if v(.) grows cubically and/or eventually plateaus. So, one might argue, this is where most of the expected value is, and advancements, in expectation, are therefore competitive with existential risk reduction.
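To get a feel for the magnitudes driving this argument, here is a toy calculation (a sketch with entirely made-up numbers; the horizon and the probability are illustrative assumptions, not figures from the post):

```python
import math

# Toy numbers (illustrative assumptions, not figures from the post):
T = 10_000            # horizon in years
log10_p = -1000       # log10 of an absurdly small probability assigned to
                      # the perpetual-doubling scenario

log10_v_cubic = 3 * math.log10(T)      # v ~ T^3: ~12 orders of magnitude
log10_v_exp = T * math.log10(2)        # v ~ 2^T: ~3,010 orders of magnitude
log10_ev_exp = log10_p + log10_v_exp   # expected value p * 2^T, in log10

print(f"cubic value (log10):      {log10_v_cubic:.0f}")
print(f"exponential value (log10): {log10_v_exp:.0f}")
print(f"exponential EV (log10):    {log10_ev_exp:.0f}")
# Even after down-weighting by p = 10^-1000, the exponential branch exceeds
# the cubic one by roughly 2,000 orders of magnitude.
```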
Now, I hate this argument: it seems like it’s falling prey to “fanaticism” in the technical sense, letting our expected value calculations be driven by extremely small probabilities.
But it at least shows that, when thinking about long-term impact, we need to make some tough judgment calls about which possibilities to ignore on the grounds that they drive “fanatical”-seeming conclusions, even while considering only finite amounts of value.
Aliens
Elsewhere, you note the loss of galaxies due to the expansion of the universe, which means that ~one five-billionth of the universe per year becomes inaccessible.
But if the “grabby aliens” model is correct, then that number is too low. By my calculation, if we meet grabby alien civilisations in, for example, one billion years (which I think is about the median estimate from the grabby aliens model), then we “lose” approximately 1 millionth of accessible resources to alien civilisations every year. This is still very small, but over three orders of magnitude (a factor of roughly 5,000) higher than what we get by just looking at the expansion of the universe.
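Taking both per-year figures at face value, a quick sanity check of the comparison (my sketch, not part of the original calculation):

```python
import math

loss_expansion = 1 / 5e9   # ~one five-billionth per year (expansion alone)
loss_grabby = 1e-6         # ~one millionth per year (grabby-aliens figure)

ratio = loss_grabby / loss_expansion
print(f"{ratio:,.0f}x, i.e. ~{math.log10(ratio):.1f} orders of magnitude")
# -> 5,000x, i.e. ~3.7 orders of magnitude
```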
(Then there’s a hard and relevant question about the value of alien civilisation versus the value of human-originating civilisation.)
Length of advancements / delays
“An advancement of an entire year would be very difficult to achieve: it may require something comparable to the entire effort of all currently existing humans working for a year.”
This is true when considering “normal” economic trajectories. But I think there are some things we could do that could cause much greater advancements or delays. A few examples:
- Preventing the collapse of civilisation. If there were a catastrophe so severe that civilisation fell back to pre-industrial or even pre-agricultural levels of technology, this could result in a delay, in expectation, of thousands of years.
- Preventing long-term stagnation. Rapid developments in AI are making this seem increasingly unlikely, but it’s at least possible that the world enters a period of long-term stagnation. Combined, potentially, with other catastrophes, this could lead to a delay of hundreds or thousands of years.
- The world institutes something like a Long Reflection, delaying space settlement by a thousand years.
Combining this with the “grabby aliens” point, there is potentially 0.1% of the value of the future that could be gained by preventing delays (1000 years * 1 millionth loss per year). This is still much lower than the loss of value from anthropogenic existential risks, but higher than from non-anthropogenic risks. It’s low enough that I don’t think it’s really action-relevant, but at the same time it’s not totally negligible.
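Spelled out, the arithmetic is just (a trivial sketch):

```python
delay_years = 1_000      # e.g. a Long Reflection delaying settlement
loss_per_year = 1e-6     # grabby-aliens loss rate from above

fraction_lost = delay_years * loss_per_year
print(f"{fraction_lost:.1%}")   # -> 0.1%
```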
Good point that I was focusing on a fairly normal kind of economic trajectory when assessing the difficulty of advancements and delays. Your examples are good, as is MichaelStJules’ comment about how changing the timing of transformative AI might act as an advancement.
>Aliens
You are right that the presence or absence of alien civilisations (especially those that expand to settle very large regions) can change things. I didn’t address this explicitly because (1) I think it is more likely that we are alone in the affectable universe, and (2) there are many different possible dynamics for multiple interacting civilisations and it is not clear which model is best. But it is still quite a plausible possibility, and some of the possible dynamics are likely enough and simple enough that they are worth analysing.
I’m not sure about the details of your calculation, but I have thought a bit about this in terms of Jay Olson’s model of cosmological expanding civilisations (which is roughly how Anders and I think of it, and similar to the model Hanson et al. independently came up with). On this model, if civilisations expand at a constant fraction of c (which we can call f), the average distance between independently arising civilisations is D light years, and civilisations permanently hold all locations they reach first, then delaying by 1 year loses roughly 3f/D of the resources they could have reached. So if D were 1 billion light years, and f were close to 1, then a year’s delay would lose roughly 1 part in 300 million of the resources. On my calculation, the average distance would need to be about 3 million light years or less to get the fraction lost up to 1 part in 1 million. And at that point, the arrangement of galaxies makes a big difference. But this was off-the-cuff and I could be overlooking something.
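For concreteness, a minimal sketch of that arithmetic (using the 3f/D approximation stated above, with the paragraph’s illustrative values for f and D):

```python
def fraction_lost_per_year(f: float, d_ly: float) -> float:
    """Fraction of reachable resources lost per year of delay: ~3f/D."""
    return 3 * f / d_ly

# The worked numbers from the paragraph above:
print(fraction_lost_per_year(f=1.0, d_ly=1e9))  # ~3e-9 (~1 in 300 million)

# Separation needed to push the loss up to 1 part in 1 million per year:
print(3 * 1.0 / 1e-6)                           # 3,000,000 light years
```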
>Continued exponential growth
I agree that there is a kind of Pascalian possibility of very small probabilities of exponential growth in value continuing for extremely long times. If so, then advancements scale in value with v-bar and with τ. This isn’t enough to make them competitive with existential risk reduction ex ante, as they are still down-weighted by the very small probability, but it is perhaps enough to cause some issues. Worse, there is a possibility of growth in value that is faster than exponential, and this can more than offset the very small probability. This feels very much like Pascal’s Mugging, and I’m not inclined to bite the bullet and seek out or focus on outcomes like this. But nor do I have a principled answer as to why not. I agree that it is probably useful to put this under the label of ‘fanaticism’.
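For what it’s worth, here is one way to see that scaling claim, under simplifying assumptions of my own (total value as the integral of v(.) up to an end time τ, an advancement as shifting the whole trajectory earlier by a small time a, and v̄ as the time-averaged value; these may not exactly match the paper’s definitions):

```latex
\begin{align*}
\Delta V &= \int_0^{\tau} v(t+a)\,dt - \int_0^{\tau} v(t)\,dt
          = \int_{\tau}^{\tau+a} v(t)\,dt - \int_0^{a} v(t)\,dt
          \approx a\,v(\tau),\\
\bar{v} &= \frac{1}{\tau}\int_0^{\tau} v_0 e^{gt}\,dt
         \approx \frac{v(\tau)}{g\tau} \quad (\text{for } g\tau \gg 1),\\
\Delta V &\approx a\,g\,\bar{v}\,\tau.
\end{align*}
```

So under sustained exponential growth, the value of an advancement grows with v̄ and τ together, i.e. with the value of the whole future, much as existential risk reduction does; and super-exponential growth in v(.) makes the gain larger still relative to the probability discount.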