People working on AI capabilities don’t psychologically feel that AI threatens their selfish interests (assuming they even understand why it is a threat), because humans weigh short-term gain more heavily than long-term danger (time discounting). Selfish actors therefore have incentives to keep working on capabilities.
Great point that framing the problem in terms of externalities might mislead people into thinking “ah yes, another instance where we need government regulation to stop greedy companies; government regulation will solve the problem.” (Although government intervention to slow down capabilities and speed up safety research would indeed be quite helpful.)
Not sure how externalities “dilute” the claims. It’s a serious problem that there are huge economic incentives for capabilities and minuscule economic incentives for safety.
I don’t think it’s very hard to make AI x-risk sound non-weird: (1) intelligence is a meaningful concept; (2) it might be possible to build AI systems with more intelligence than humans; (3) it might be possible to build such AIs within this century; (4) if built, such AIs would have a massive impact on society; (5) this impact might be extremely negative. These core ideas are reasonable-sounding propositions that someone with good social skills could bring up in conversation.