I don’t know if they’re making a mistake—my question wasn’t meant to be rhetorical.
I take your point about capacity constraints, but if no-one else is stepping up, it seems like it might be worth OP expanding its capacity.
I continue to think the EA movement systematically underestimates the x-riskiness of non-extinction events in general, and nuclear risk in particular, by ignoring much of the increased difficulty of becoming interstellar once key resources have been destroyed or exploited. I gave some example scenarios of this here (see also David’s results). They’re not intended to be taken too seriously, but they nonetheless incorporate what I think are significant factors that other longtermist work omits. E.g. in The Precipice, Ord defines x-risk very broadly, but when he comes to estimate the x-riskiness of ‘conventional’ GCRs, he discusses them almost entirely in terms of their probability of making humans immediately go extinct, which I suspect constitutes a tiny fraction of their EV loss.
For what it’s worth, my working assumption for many risks (e.g. nuclear, supervolcanic eruption) was that their contribution to existential risk via ‘direct’ extinction was of a similar level to their contribution via civilisation collapse: e.g. that a civilisation collapse event was something like 10 times as likely, but that there was also a 90% chance of recovery. So in total, the consideration of non-direct pathways roughly doubled my estimates for a number of risks.

One thing I didn’t do was to include their roles as risk factors, e.g. the effect that being on the brink of nuclear war has on overall existential risk even if the nuclear war doesn’t occur.
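To spell out the arithmetic in that working assumption, here is a minimal sketch; the 10x and 90% figures are the illustrative ones from the comment above, and the units are arbitrary:

```python
# Minimal sketch of the doubling arithmetic above (arbitrary units).
p_direct = 1.0              # existential risk via direct extinction
p_collapse = 10 * p_direct  # a collapse event is ~10 times as likely
p_no_recovery = 0.1         # i.e. a 90% chance of recovering from collapse

# Collapse contributes to existential risk only when there is no recovery,
# which matches the direct-extinction contribution:
p_collapse_existential = p_collapse * p_no_recovery  # = 1.0

# So counting the non-direct pathway roughly doubles the total:
total = p_direct + p_collapse_existential
print(total / p_direct)  # 2.0
```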
Thanks for the context, Toby!
For what it’s worth, my working assumption for many risks (e.g. nuclear, supervolcanic eruption) was that their contribution to existential risk via ‘direct’ extinction was of a similar level to their contribution via civilisation collapse

I was guessing you agreed the direct extinction risk from nuclear war and volcanoes was astronomically low, so I am very surprised by the above. I think it implies your annual extinction risk from the following (a short arithmetic check follows these two points):
Nuclear war is around 5*10^-6 (= 0.5*10^-3/100), which is 843 k (= 5*10^-6/(5.93*10^-12)) times mine.
Volcanoes is around 5*10^-7 (= 0.5*10^-4/100), which is 14.8 M (= 5*10^-7/(3.38*10^-14)) times mine.
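A quick arithmetic check of the two figures above (a minimal sketch: the 10^-3 and 10^-4 per-century existential risks are Toby’s estimates in The Precipice, the even split between direct extinction and collapse follows his comment above, and the 5.93*10^-12 and 3.38*10^-14 annual figures are my own from the linked analyses):

```python
# Annual direct-extinction risk implied by The Precipice's per-century
# existential risk estimates, assuming half comes via direct extinction
# (per Toby's comment) and spreading it uniformly over 100 years.
precipice_per_century = {"nuclear war": 1e-3, "volcanoes": 1e-4}
my_annual_estimates = {"nuclear war": 5.93e-12, "volcanoes": 3.38e-14}

for risk, p_century in precipice_per_century.items():
    annual_direct = 0.5 * p_century / 100
    ratio = annual_direct / my_annual_estimates[risk]
    print(f"{risk}: {annual_direct:.0e} per year, {ratio:.3g} times mine")

# nuclear war: 5e-06 per year, 8.43e+05 times mine (843 k)
# volcanoes: 5e-07 per year, 1.48e+07 times mine (14.8 M)
```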
I would be curious to know your thoughts on my estimates. Feel free to follow up in the comments on the posts containing them (which I had also emailed to you around 3 and 2 months ago). In general, I think it would be great if you could explain how you arrived at the existential risk estimates shared in The Precipice (e.g. by decomposing them into various factors as I did in my analyses, if that is how you got them).
Your comment above seems to imply that direct extinction would be an existential risk, but I actually think human extinction would be very unlikely to be an existential catastrophe if it were caused by nuclear war or volcanoes. For example, I think there would only be a 0.0513 % (= e^(-10^9/(132*10^6))) chance that a repetition of the last mass extinction, the Cretaceous–Paleogene extinction event 66 M years ago, would be existential (the sketch after the list below reproduces this number). I got my estimate assuming:
An exponential distribution with a mean of 132 M years (= 66*10^6*2) represents the time between i) human extinction in such a catastrophe and ii) the evolution of an intelligent sentient species after such a catastrophe. I supposed this on the basis that:
An exponential distribution with a mean of 66 M years describes the time between:
2 consecutive such catastrophes.
i) and ii) if there are no such catastrophes.
Given the above, the next such catastrophe and ii) are equally likely to come first. So the probability of an intelligent sentient species evolving before the next such catastrophe is 50 % (= 1/2).
Consequently, one should expect the time between i) and ii) to be 2 times (= 1/0.50) as long as it would be if there were no such catastrophes.
An intelligent sentient species has 1 billion years to evolve before the Earth becomes uninhabitable.
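A minimal sketch reproducing the headline 0.0513 % from these assumptions (only the figures stated above are used):

```python
import math

# Probability that no intelligent sentient species evolves within the
# ~1 billion year window, given an exponential waiting time between
# i) and ii) with mean 132 My (= 2 * 66 My, per the doubling argument above).
mean_wait_years = 2 * 66e6   # 132 M years
window_years = 1e9           # time left before the Earth becomes uninhabitable
p_existential = math.exp(-window_years / mean_wait_years)
print(f"{p_existential:.4%}")  # 0.0513%
```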
Thanks Toby, that’s good to know. As I recall, your discussion (much of which was in footnotes) focussed very strongly on effects that might lead to extinction, though, so I would be inclined to put more weight on your estimates of the probability of extinction than on your estimates of indirect effects.
E.g. a scenario you didn’t discuss that seems plausible to me is approximately “reduced resource availability slows future civilisations’ technical development enough that they have to spend a much greater period in the time of perils, and in practice become much less likely to ever successfully navigate through it”—even if we survive as a semitechnological species for hundreds of millions of years.
I discuss something similar to that a bit on page 41, but mainly focusing on whether depletion could make it harder for civilisation to re-emerge. Ultimately, it still looks to me like it would be easier and faster the second time around.
I’d be interested to reread that, but in my version p. 41 has the beginning of the ‘civilisational virtues’ section and the end of ‘looking to our past’, and I can’t see anything relevant.
I may have forgotten something you said, but as I recall, the claim is largely that there’ll be leftover knowledge and technology which will speed up the process. If so, I think it’s highly optimistic to say it would be faster:
1) The blueprints left over by the previous civilisation will at best get us as far as they did, but to succeed we’ll necessarily need to develop substantially more advanced technology than they had.
2) In practice they won’t get us that far—a lot of modern technology is highly contingent on the exigencies of currently available resources. E.g. computers would presumably need a very different design in a world without access to cheap plastics.
3) The second time around isn’t the end of the story. We might need to do this multiple times, creating a multiplicative drain on resources (e.g. if development is slowed by the absence of fossil fuels, we’ll spend that much longer using up rock phosphorus), whereas the lessons available from previous civilisations will be at best additive, and likely not even that good: we’ll probably lose most of the technology of earlier civilisations when dissecting it to build the current one. So even if the second time were faster, it would move us one civilisation closer to a state where recovery is impossibly slow. A toy sketch of this asymmetry follows.
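To illustrate the asymmetry in 3), here is a toy sketch; the numbers (depletion fraction, work units, knowledge bonus) are entirely made up, and the inverse scaling of development time with resources is just one hypothetical functional form:

```python
# Toy model only: every reboot must cover the same technological distance,
# but a key non-renewable resource shrinks multiplicatively each cycle,
# while inherited knowledge gives only a fixed additive saving.
resources = 1.0         # remaining stock of a key non-renewable input
depletion = 0.5         # fraction of the stock each civilisation consumes
base_work = 100.0       # effort to industrialise from scratch
knowledge_bonus = 20.0  # fixed saving from leftover knowledge and artefacts

for cycle in range(1, 6):
    # assume development time scales inversely with resource availability
    time_needed = (base_work - knowledge_bonus) / resources
    print(f"civilisation {cycle}: time ~ {time_needed:.0f}")
    resources *= 1 - depletion

# The additive knowledge bonus cannot keep pace with the multiplicative
# resource drain, so each cycle takes roughly twice as long as the last.
```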