I previously asked this question in an AMA, but I’d like to pose it to the broader EA Forum community:
Do you believe that AGI poses a greater existential risk than other proposed x-risk hazards, such as engineered pandemics? Why or why not?
For background, it seems to me like the longtermist movement spends lots of resources (including money and talent) on AI safety and biosecurity as opposed to working to either discover or mitigate other potential x-risks such as disinformation and great power war. It also seems to be a widely held belief that transformative AI poses a greater threat to humanity’s future than all other existential hazards, and I am skeptical of this. At most, we have arguments that TAI poses a big x-risk, but not arguments that it is bigger than every other x-risk (although I appreciate Nate’s argument that it does outweigh engineered pandemics).
Sorry, I totally meant to answer this on the AMA, but ended up missing it.
I don’t have particularly strong views on which x-risk is largest, mostly because I haven’t thought very much about x-risks other than AI.
(That being said, for many x-risks such as nuclear war you can get some rough bounds from the fact that the hazard has existed for decades without an existential catastrophe occurring, whereas AGI doesn't exist yet, so no analogous track-record argument applies to it. Anyone who thinks that AI poses, say, >50% x-risk could use such bounds to argue that AI x-risk is larger than most other risks.)
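To make the shape of that kind of bound concrete, here is a minimal sketch (my own illustration, not a calculation from this thread) using Laplace's rule of succession on roughly 77 catastrophe-free years since nuclear weapons were built. Both the 77-year figure and the modelling choices (independent years, constant annual probability, uniform prior) are assumptions, and the resulting numbers are very sensitive to them.

```python
# Illustrative only: a Laplace rule-of-succession bound on an "it hasn't
# happened yet" risk such as existential nuclear war. All inputs are assumptions.

years_observed = 77   # assumed: ~77 catastrophe-free years since nuclear weapons were built
horizon = 100         # assumed forecasting horizon in years

# Rule of succession: with n catastrophe-free years and 0 catastrophes,
# the posterior mean annual probability is 1 / (n + 2).
annual_p = 1 / (years_observed + 2)

# Chance of at least one catastrophe over the horizon, crudely assuming
# independent years with a constant annual probability.
cumulative_p = 1 - (1 - annual_p) ** horizon

print(f"annual probability estimate: {annual_p:.4f}")
print(f"cumulative over {horizon} years: {cumulative_p:.2f}")
```

The point is only that a track record of non-occurrence gives you something to condition on; AGI has no such track record, so the analogous bound isn't available for it.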
I do think that “all else equal”, AI alignment is the most impactful object-level x-risk to work on, because:
The absolute level of risk is not small (and is at least comparable to other risks)
It’s a “single problem”, i.e. there is a specific, technical-ish problem description, and it is plausible that there is a “single solution” we could find that entirely handles it. (This cashes out as higher tractability in the ITN framework, though there are also reasons to expect lower tractability.)
It’s extremely neglected relative to most other risks.
I am not sure whether, “all else equal” (by which I think you mean we don’t have good likelihood estimates), “AI alignment is the most impactful object-level x-risk to work on” applies to people without relevant technical skills.
If there is some sense in which “all risks are equal”, then I would direct people with policy skills to focus their attention right now on pandemics (or on general risk management), which is much more politically tractable and where it is much clearer what kinds of policy changes are needed.
By “all else equal” I meant to ignore questions of personal fit (including e.g. whether or not people have the relevant technical skills). I was not imagining that the likelihoods were similar.
I agree that in practice personal fit will be a huge factor in determining what any individual should do.
Ah, sorry, I misunderstood. Thank you for the explanation :-)
I think the answer depends on the timeframe you are asking over. Below are some example timeframes you might want to ask the question over, with a plausible answer for the biggest x-risk in each.
1-3 years: nuclear war
Reasoning: we are not close enough to building TAI that it will happen in the next few years. Nuclear war this year seems possible.
4-20 years: TAI
Reasoning: Firstly, you could say we are a bit closer to TAI than to building x-risk-level viruses (I am very unsure about that). Secondly, the TAI threat is most worrying in scenarios where it happens very quickly and we lose control (a fast risk), whereas the pandemic threat is most worrying in scenarios where we gradually gain more and more ability to produce homebrew viruses (a slow risk).
21-50 years: TAI or man-made pandemics (unclear)
Reasoning: As above, except that TAI becomes less worrying if we have lots of time to work on alignment, so it is unclear which dominates.
51-100 years: unknown unknown risks
Reasoning: Imagine trying to predict the biggest x-risks today from 50 years ago. The world is changing too fast. There are so many technologies that could be transformative and potentially pose x-risk level threats. To think that the risks we think are biggest today will still be biggest in 50+ years is hubris.
I think as a community we could do more to map out the likelihood of different risks on different timeframes, and to consider strategies for addressing unknown unknown risks.
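As one way to make that mapping concrete, here is a toy sketch (entirely my own, with made-up placeholder probabilities rather than anyone's actual estimates) that converts assumed annual risk levels into cumulative risk over each timeframe discussed above. A serious version would need time-varying probabilities, e.g. TAI risk concentrated around when TAI arrives rather than spread evenly across years.

```python
# Toy sketch of a "which risk is biggest over which timeframe" table.
# Every number below is a made-up placeholder, not an estimate from this thread.

timeframes = {"1-3 years": 3, "4-20 years": 20, "21-50 years": 50, "51-100 years": 100}

# Assumed constant annual probabilities of an existential catastrophe from each source.
annual_risk = {"nuclear war": 0.001, "TAI": 0.004, "engineered pandemic": 0.002}

for label, years in timeframes.items():
    # Probability of at least one catastrophe within the window, assuming independent years.
    cumulative = {risk: 1 - (1 - p) ** years for risk, p in annual_risk.items()}
    biggest = max(cumulative, key=cumulative.get)
    rounded = {risk: round(p, 3) for risk, p in cumulative.items()}
    print(f"{label:>12}: {rounded}  -> biggest: {biggest}")
```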