The long-term significance of reducing global catastrophic risks [link]
On the GiveWell blog, Nick Beckstead lays out the arguments for thinking about extinction risks and non-extinction risks together, with the possible exception of the case of AI.
This is a comment from Jim Terry, reposted with permission (none of it mine)
Copied from my comment on a Facebook post:
I especially liked Nick’s sapling analogy and found it fitting. I worry that EAs are drawn from subgroups with a tendency to believe that relatively simple, formalistic, mechanistic models essentially describe complex processes, perhaps with some loss of accuracy relative to more complex models, but without changing the general sign and magnitude of the result. This seems really dangerous.
“Imagine a Level 1 event that disproportionately affected people in areas that are strong in innovative science (of which we believe there are a fairly contained number). Possible consequences of such an event might include a decades-long stall in scientific progress or even an end to scientific culture or institutions and a return to rates of scientific progress comparable to what we see in areas with weaker scientific institutions today or saw in pre-industrial civilization.”
It seems likely that any Level 1 event will have disproportionate effects on certain groups (possibly ones that would be especially useful for bringing civilization back from a Level 1 event), and this seems like a pretty under-investigated consideration. For example: a pandemic that was extremely virulent but only contagious enough to spread fully in big cities, or extreme climate change or geoengineering gone awry that knocked out mostly the global north, mostly equatorial regions, or mostly coastal regions.
He doesn’t really discuss the possibility of a Level 1 event immediately provoking a Level 2 event, but that also seems possible: for example, one catastrophic use of biowarfare could incentivize another country to develop even more powerful bioweapons or some sort of militaristic AI for defense, or catastrophic climate change could prompt the use of extreme and ill-tested geoengineering. This actually seems moderately likely, and I wonder why he didn’t discuss it.
From the spreadsheet linked there (https://docs.google.com/spreadsheets/d/1b7ohoyAi2MlyBOzgarvJ-bOE2v9mJ8a9YDfQYGNk9vk/edit#gid=1273928110):
Does anybody find the row on anthropogenic climate change (other than geoengineering) puzzling, in the sense that it seems not to be given sufficient priority?
It lists “Not many suitable remaining funding opportunities”, with the explanation that “R&D on clean tech, adaptation preparations, and working toward carbon pricing are all possibilities but all generally highly funded already.”
The likelihood of the highest-damage scenario over the next 100 years is categorized at the same level as AI risk (“Highly uncertain, somewhat conjunctive, but plausible”).
Climate change is a very crowded space, and AFAIK geoengineering is the only cost-effective climate change intervention (I haven’t really researched this, but that’s my impression). There’s already a ton of research going into e.g. clean energy, so marginal research is not very valuable. Geoengineering, by contrast, is a lot less crowded and potentially much more cost-effective.
I was puzzled!