It would probably be good if AI Safety orgs explicitly and prominently endorsed the CAIS statement on AI Risk: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."