I am not sure that “all else equal” (by which I think you mean we don’t have good likelihood estimates) implies “AI alignment is the most impactful object-level x-risk to work on” for people without the relevant technical skills.
If there is some sense in which “all risks are equal”, then for people with policy skills I would direct their attention right now to pandemics (or to general risk management), which is much more politically tractable, and where it is much clearer what kinds of policy changes are needed.
By “all else equal” I meant to ignore questions of personal fit (including e.g. whether or not people have the relevant technical skills). I was not imagining that the likelihoods were similar.
I agree that in practice personal fit will be a huge factor in determining what any individual should do.
Ah, sorry, I misunderstood. Thank you for the explanation :-)