you can infer that people who don’t take AI risk seriously are somewhat likely to lack important forms of competence
This seems true, but I’d also say that the people who do take AI risk seriously typically lack different important forms of competence. I don’t think this is coincidental; instead I’d say there’s (usually) a tradeoff between being “good at taking very abstract ideas seriously” and being “good at operating in complex fast-moving environments”. The former typically requires a thinking-first orientation to the world, the latter an action-first orientation. It’s possible to cultivate both, but I’d say most people are naturally inclined toward one or the other (or neither).