Fwiw I commented on Thorstad’s linkpost for the paper when he first posted about it here. My impression is that he’s broadly sympathetic to my claim about multiplanetary resilience, but either doesn’t believe we’ll get that far or thinks that the AI counterconsideration dominates it.
In this light, I think the claim that an annual x-risk lower than 10^-9 is ‘implausible’ is much too strong if it’s being used to undermine EV reasoning. Like I said: if we become interstellar and no universe-ending doomsday technologies exist, then the multiplicativity of risk gets you there pretty fast. If each planet has, say, a 1/(10^5) annual chance of extinction, then n planets have a 1/(10^(5n)) chance of all independently going extinct in a given year. For n=2 that’s already one in ten billion.
Obviously there’s a) a much higher chance that they could go extinct in different years and b) some chance that they could all go extinct in a given period from non-independent events such as war. But even so, it’s hard to believe that increasing n, say to double digits, doesn’t rapidly outweigh such considerations, especially given that an advanced civilisation could probably create new self-sustaining settlements in a matter of years. (The sketch below illustrates the independence arithmetic.)
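To make that concrete, here’s a minimal sketch of the independence arithmetic (Python; the 1/(10^5) per-planet annual risk and the full-independence assumption are the illustrative placeholders from above, not estimates):

```python
# Sketch: joint annual extinction probability across n independent
# self-sustaining settlements. Per-planet risk and independence are
# illustrative assumptions, not estimates.

PER_PLANET_ANNUAL_RISK = 1e-5  # assumed 1/(10^5) annual chance per planet

for n in range(1, 6):
    # all n settlements must independently fail in the same year
    joint_risk = PER_PLANET_ANNUAL_RISK ** n
    print(f"n={n}: joint annual extinction risk ~ {joint_risk:.0e}")

# n=2 already gives 1e-10, i.e. one in ten billion per year
```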
> I feel it is highly speculative on the difficulties of making comebacks and on the likelihood of extreme climate change
I don’t understand how you think climate change is more speculative than AI risk. I think it’s reasonable to have higher credence in human extinction from the latter, but those scenarios are entirely speculative. Extreme climate change is possible if a couple of parameters turn out to have been mismeasured.
As for the probability of making comebacks, I’d like to write a post about this, but the narrative goes something like this:
- to ‘flourish’ (in an Ordian sense), we need to reach a state of sufficiently low x-risk
- per above, by far the most mathematically plausible way of doing this is simply increasing our number of self-sustaining settlements
- you could theoretically do it with an exceptionally stable political/social system, but I’m with Thorstad that the level of political stability this requires seems implausible
- to reach that state, we have to develop advanced technologies well beyond what we have now. So the question about ‘comebacks’ is misplaced; the real question is about our prospects of getting from the beginning to (a good) end of at least one time of perils without a catastrophe
- Dating our current time of perils to 1945, it looks like we’re on course, barring global catastrophes, to develop a self-sustaining offworld civilisation in maybe 120-200 years
- Suppose there’s a constant annual probability k of a catastrophe that regresses us to pre-time-of-perils technology. Then our outlook in 1945 was, approximately, a (1-k)^160 chance of getting to a multiplanetary state. Having made it 80 years in, we now face a substantially better ~(1-k)^80.
- If we restart from pre-1945 levels of technology, we will do so with at most 10% of the fossil fuel energy we had, as well as many other depleted resources (fertiliser, uranium, etc.). This will slow any kind of reboot substantially. See e.g. comparisons of coal to photovoltaics here.
- There’s huge uncertainty here, but when you multiply out the friction from all the different depleted resources, I think progress the second time around will optimistically be 1/2 the speed, and pessimistically 1/10 the speed or worse. (Based on the above link, if photovoltaics had to entirely substitute for fossil fuels, that drag alone would be around a 30/5.5 ≈ 5.5x multiplier on the cost of generating energy, which seems like it could easily slow economic development by a comparable amount.)
- That means in a reboot we have, optimistically, a (1-k)^320 chance of getting to a good outcome, and pessimistically (1-k)^1600.
- During that reboot, we can expect the new civilisation to preferentially use up the most efficient resources just as we do (it doesn’t have to destroy them, just move them to much higher-entropy states, as with our current practice of flushing fertiliser into the ocean), but it will spend 2x, 10x, or however much longer doing so.
- That means civilisation 3 would have as much of a disadvantage relative to civilisation 2 as civilisation 2 would relative to us, giving it optimistically a (1-k)^640 chance of a good outcome, and pessimistically a (1-k)^16000 chance.
- If we plug in k=0.001, which seems to be a vaguely representative estimate among x-risk experts, then in 1945 we would have had an 85% chance, today we have a 92% chance, after one backslide we would have optimistically 73% and pessimistically 20%, and after a second backslide optimistically 53% and pessimistically basically 0 (the sketch after this list reproduces these numbers).
- We can roughly convert these to units of ‘extinction’ by dividing the loss of probability by our original 1945 prospects. So dropping to a 53% chance means losing 32 percentage points, which is 32/85 ≈ 38% as bad in the long term as extinction.
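As referenced above, here’s a minimal sketch reproducing this arithmetic (Python; k=0.001, the 160-year time-of-perils length, and the slowdown multipliers are the same illustrative placeholders used above, not serious estimates):

```python
# Sketch of the back-of-envelope survival arithmetic above. All inputs
# are illustrative placeholders, not serious estimates.

K = 0.001           # assumed constant annual risk of a regressing catastrophe
PERILS_YEARS = 160  # assumed length of the time of perils from 1945

def survival(years, k=K):
    """Probability of no catastrophe across the given number of years."""
    return (1 - k) ** years

print(f"1945 outlook:            {survival(PERILS_YEARS):.0%}")  # ~85%
print(f"today, 80 years in:      {survival(80):.0%}")            # ~92%
print(f"reboot at 1/2 speed:     {survival(320):.0%}")           # ~73%
print(f"reboot at 1/10 speed:    {survival(1600):.0%}")          # ~20%
print(f"2nd reboot, optimistic:  {survival(640):.0%}")           # ~53%
print(f"2nd reboot, pessimistic: {survival(16000):.2%}")         # basically 0

# Converting the optimistic double-backslide into 'extinction units':
# the fraction of the 1945 prospects that would be lost.
loss = (survival(160) - survival(640)) / survival(160)
print(f"optimistic double backslide ~ {loss:.0%} as bad as extinction")  # ~38%
```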
This is missing a lot of nuance, obviously, which I’ve written about in this sequence, so we certainly shouldn’t take these numbers very seriously. But I think they paint a pretty reasonable overall picture of a ‘minor’ catastrophe being, in long-run expectation and aside from any short-term suffering or change in human morality, perhaps in the range of 15-75% as bad as extinction. There’s lots of room for discussing particulars, but this isn’t something we can dismiss on the grounds that extinction is ‘much worse’; in particular, a lesser catastrophe is not sufficiently less bad that we can in practice afford to ignore the relative probabilities of extinction vs lesser global catastrophe.