What’s so scary? I actually like that she talks about “full spectrum” and “additional risks”, i.e. it’s not dismissive of existential risk.
But anyway, it's a bit like reading tea leaves at this point.
Basically it seems like evidence she doesn’t take x-risk seriously, if she considers those problems on par with the ending of human civilization.
That might be right. Another explanation is that even if she takes x-risk seriously, she thinks it's easier to build political support around regulating AI by highlighting existing problems.
Eh, I agree it's possible, but in the examples I'm aware of, it looks like other people around her (e.g. Biden, Sunak) were more pro-regulation for x-risk reasons.