Sorry, I totally meant to answer this on the AMA, but ended up missing it.
I don’t have particularly strong views on which x-risk is largest, mostly because I haven’t thought very much about x-risks other than AI.
(That being said, for many x-risks such as nuclear war you can get some rough upper bounds by noticing that they haven’t occurred yet, whereas we don’t have AGI yet so we can’t make a similar argument. Anyone who thinks that AI poses, say, >50% x-risk could use these bounds to argue that AI x-risk is larger than most other risks.)
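(To make the “rough upper bounds” idea concrete, here is a minimal sketch using Laplace’s rule of succession; the specific numbers are illustrative assumptions of mine, not estimates anyone in this thread gave.)

```python
# Toy illustration: bounding an annual risk by its non-occurrence so far.
# Assumption (mine, for illustration only): roughly 78 years of "opportunity"
# for nuclear war since 1945, with zero occurrences.

def laplace_rule(occurrences: int, trials: int) -> float:
    """Rule-of-succession estimate of the per-trial probability."""
    return (occurrences + 1) / (trials + 2)

years_without_nuclear_war = 78
annual_bound = laplace_rule(0, years_without_nuclear_war)
print(f"Rough annual probability bound: {annual_bound:.4f}")  # ≈ 0.0125

# No analogous bound exists for AGI risk: we have zero years of experience
# living with AGI, which is exactly the asymmetry noted above.
```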
I do think that “all else equal”, AI alignment is the most impactful object-level x-risk to work on, because:
1. The absolute level of risk is not small (and is at least comparable to other risks).
2. It’s a “single problem”, i.e. there is a specific technical-ish problem description, and it is plausible that there is a “single solution” that, once found, entirely handles it. (This cashes out as higher tractability in the ITN framework, though there are also reasons to expect lower tractability; see the toy sketch after this list.)
3. It’s extremely neglected relative to most other risks.
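(For readers unfamiliar with the ITN framework: it scores a cause roughly as the product of Importance, Tractability, and Neglectedness. A minimal sketch of that arithmetic follows; all scores are made-up placeholders, not my actual estimates.)

```python
# Minimal sketch of the ITN (Importance, Tractability, Neglectedness)
# framework: the marginal impact of work on a cause is often modeled as
# the product of the three factors. Every score below is a made-up
# placeholder on a 0-10 scale, purely to show the arithmetic.

causes = {
    # cause: (importance, tractability, neglectedness)
    "AI alignment": (9, 5, 8),
    "nuclear war": (8, 4, 4),
    "pandemics": (8, 6, 5),
}

for cause, (i, t, n) in causes.items():
    print(f"{cause}: ITN score = {i * t * n}")
```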
I am not sure that “all else equal” (by which I think you mean that we don’t have good likelihood estimates) implies “AI alignment is the most impactful object-level x-risk to work on” for people without the relevant technical skills.

If there is some sense in which “all risks are equal”, then I would direct people with policy skills to focus their attention right now on pandemics (or on general risk management), where things are much more politically tractable and it is much clearer what kinds of policy changes are needed.
By “all else equal” I meant to ignore questions of personal fit (including e.g. whether or not people have the relevant technical skills). I was not imagining that the likelihoods were similar.
I agree that in practice personal fit will be a huge factor in determining what any individual should do.
Ah, sorry, I misunderstood. Thank you for the explanation :-)