Related question to the one posed by Yadav:
Does the fact that OpenAI and DeepMind have AI Safety teams factor significantly into AI x-risk estimates?
My independent impression is that their existence is a significant positive, but I haven’t seen this factor taken explicitly into account in risk estimates.