The Biden-Harris administration has already done actually useful things on AI safety, like the AI executive order (which Trump promises to repeal).
My understanding is this was largely despite Kamala, who for most of the Biden administration was viewed as a liability and given little influence, rather than because of her? For public commentary on this, see for example her here ‘rebuking’ Rishi for focusing on existential risks:
Harris also urged the international community to focus on the “full spectrum” of artificial intelligence risks, including existing threats like bias and discrimination. It was a gentle rebuke to Sunak’s summit, which has courted controversy due to its laser focus on the unrealized existential risks of the tech.
“Let us be clear, there are additional threats that also demand our action. Threats that are currently causing harm, and which to many people also feel existential,” Harris said.
Or here, where she shows at best ignorance of the meaning of ‘existential’:
“When a senior is kicked off his healthcare plan because of a faulty AI algorithm, is that not existential for him?” Harris told a crowd in London last November. “When a woman is threatened by an abusive partner with explicit deepfake photographs, is that not existential for her?”
Scary. :/
What’s so scary? I actually like that she talks about “full spectrum” and “additional risks”, i.e. it’s not dismissive of existential risk.
But anyway, it’s a bit like reading tea leaves at this point
Basically it seems like evidence she doesn’t take x-risk seriously, if she considers those problems on par with the ending of human civilization.
That might be right. Another explanation is that even if she takes x-risk seriously, she thinks it’s easier to build political support for regulating AI by highlighting existing problems.
eh, I agree it’s possible but in the examples I’m aware of, it looks like other people around her (e.g. Biden, Sunak) were more pro-regulation for x-risk reasons.