Secondly, prioritizing competence. Ultimately, humanity is mostly in the same boat: we’re the incumbents who face displacement by AGI. Right now, many people are making predictable mistakes because they don’t yet take AGI very seriously. We should expect this effect to decrease over time, as AGI capabilities and risks become less speculative. This consideration makes it less important that decision-makers are currently concerned about AI risk, and more important that they’re broadly competent, and capable of responding sensibly to confusing and stressful situations, which will become increasingly common as the AI revolution speeds up.
I think this is a good point.
At the same time, I think you can infer that people who don’t take AI risk seriously are somewhat likely to lack important forms of competence. This inference is only probabilistic, but it’s IMO pretty strong already (it’s a lot stronger now than it was four years ago) and it’ll get stronger still.
It also depends on how much a specific person has been interacting with the technology: the inference probably applies a lot less to DC policy people, and more to ML scientists or people at AI labs.
you can infer that people who don’t take AI risk seriously are somewhat likely to lack important forms of competence
This seems true, but I’d also say that the people who do take AI risk seriously typically lack different important forms of competence. I don’t think this is coincidental; instead I’d say that there’s (usually) a tradeoff between “good at taking very abstract ideas seriously” and “good at operating in complex fast-moving environments”. The former typically requires a thinking-first orientation to the world, the latter an action-first orientation. It’s possible to cultivate both, but I’d say most people are naturally inclined to one or the other (or neither).