I would like to see more about ‘minor’ GCRs and our chance of actually becoming an interstellar civilisation given various forms of backslide. In practice, the EA movement seems to treat the probability as 1.
I don’t think this is remotely justified. The arguments I’ve seen are generally of the form ‘we’ll still be able to salvage enough resources to theoretically recreate any given technology’, which doesn’t mean we could get anywhere near the economies of scale needed to rebuild global industry at today’s scale, let alone that we actually would given realistic political development. And that industry would need to go well beyond today’s technology, to the point where we’re a reliably spacefaring civilisation, in order to avoid meeting the usual definition of an existential catastrophe (drastic curtailment of life’s potential).
If the chance of recovery from any given backslide is 99%, then that’s only two orders of magnitude between its expected badness and the badness of outright extinction, even ignoring other negative effects. And given the uncertainty around various GCRs, a couple of orders of magnitude isn’t that big a deal (Toby Ord’s The Precipice puts an order of magnitude or two between the probability of many of the existential risks we’re typically concerned with).
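To sketch the arithmetic (with $B$ standing, purely for illustration, for the badness of outright extinction, and treating a permanent failure to recover as roughly as bad as extinction): the expected badness of the backslide is approximately $(1 - 0.99)\cdot B = 0.01B$, i.e. only a factor of 100 below $B$.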
Things I would like to see more discussion of in this area:
General principles for assessing the probability of reaching interstellar travel given specific backslide parameters, and then, with reference to those principles:
Kessler syndrome
Solar storm disruption
CO2 emissions from fossil fuels and other climate change rendering the atmosphere unbreathable (this would be a good old-fashioned X-risk, but seems like one that no-one has discussed; in Toby’s book he details some extreme scenarios in which a lot of CO2 could be released without necessarily causing human extinction through warming, but some of my back-of-the-envelope maths based on his figures seemed consistent with the atmosphere becoming unbreathable)
CO2 emissions from fossil fuels and other climate change substantially reducing IQs
Various ‘normal’ concerns: antibiotic-resistant bacteria; peak oil; peak phosphorus; substantial agricultural collapse; moderate climate change; major wars; reverse Flynn effect; supporting interplanetary colonisation; zombie apocalypse
Other concerns that I don’t know of, or that no-one has yet thought of, that might otherwise be dismissed by committed X-riskers as ‘not a big deal’
Congratulations on winning the comment award! I definitely agree we should broaden the range of scenarios we look at. You can see some work on the long-term future impact of lesser catastrophes here and here.
Solar storm disruption
Yes, and other catastrophes that could disrupt electricity or industry may also be important to work on, such as a high-altitude nuclear detonation causing an electromagnetic pulse, a coordinated cyber attack on the electrical grid (perhaps enabled by narrow AI), or an extreme pandemic causing the desertion of critical jobs.
CO2 emissions from fossil fuels and other climate change rendering the atmosphere unbreathable (this would be a good old-fashioned X-risk, but seems like one that no-one has discussed; in Toby’s book he details some extreme scenarios in which a lot of CO2 could be released without necessarily causing human extinction through warming, but some of my back-of-the-envelope maths based on his figures seemed consistent with the atmosphere becoming unbreathable)
CO2 emissions from fossil fuels and other climate change substantially reducing IQs
Even 7000 ppm (0.7%) CO2 only has mild effects, and this is much higher than is plausible for Earth’s atmosphere in the next few centuries.
Various ‘normal’ concerns: antibiotic-resistant bacteria; peak oil; peak phosphorus;
It is possible that overreaction to these could cause large enough price increases to make the world’s poor significantly worse off, which could cause political instability and eventually lead to something like nuclear war. But I think this is much lower probability than catastrophes that could directly and abruptly reduce the food supply by on the order of 10%.
With moderate climate change, perhaps 2°C over a century, I think it is difficult to find a direct route to collapse. However, it would make a 10% food production shortfall from extreme weather more likely. And there are many other catastrophes that could plausibly produce a 10% food production shortfall, such as:
1. Abrupt climate change (a 10°C temperature drop over a continent within a decade, which has happened before)
2. Extreme climate change that is slow (~10°C over a century)
3. A volcanic eruption like Tambora (which caused the year without a summer in 1816 and famine in Europe)
4. A super weed that out-competes crops, if part of a coordinated attack
5. A super crop disease, if part of a coordinated attack
6. A super crop pest (animal), if part of a coordinated attack
7. Abruptly losing beneficial bacteria
8. Abrupt loss of bees
9. A gamma-ray burst, which could disrupt the ozone layer
major wars;
This could mean the destruction of 10% of infrastructure, so I think it could be destabilizing. Disruption of the Internet for an extended period globally could also cut off a lot of essential services.
reverse Flynn effect;
Even if the Flynn effect has stalled in developed countries (has it?), I still think globally over this century we are going to have a massive positive Flynn effect as education levels rise.
Other concerns that I don’t know of, or that no-one has yet thought of
Agreed, which is a reason that resilience and response are also important.
I agree that I’d like to see more research on topics like these, but would flag that they seem arguably harder to do well than more standard X-risk research.
I think that, from where I’m standing, the impact of direct, "normal" X-risk work is relatively easy to understand; a 0.01% reduction in an X-risk is a pretty simple thing. When you get into more detailed models it can be more difficult to estimate the total importance or impact, even though more detailed models are often better overall. I think there’s a decent chance that 10-30 years from now the space will look quite different (in ways similar to those you mention), given more understanding (and propagation of that understanding) of more detailed models.
One issue regarding a Big List is figuring out what specifically should be proposed. I’d encourage you to write up a short blog post on this and we could see about adding it to this list or the next one :)
Why would research on ‘minor’ GCRs like the ones mentioned by Arepo be harder than, e.g., AI alignment?
My impression is that there is plenty of good research on, e.g., the effects of CO2 on health, the Flynn effect and Kessler syndrome, and I would say it’s of much higher quality than extant X-risk research.
Is the argument that they are less neglected?
My point was just that understanding the expected impact seems more challenging. I’d agree that the short-term impacts of those kinds of things are much easier to understand, but it’s tricky to tell how they will affect things 200+ years from now.
Write a post on which aspect? You mean basically fleshing out the whole comment?
Yes, fleshing out the whole comment, basically.