I think my argument will be even clearer if I talk about mitigating AI risk. Imagine if all AI safety orgs operated independently, even competing with each other. It would be (or arguably already is) a mess! There would be no ‘open letter’, just different people shouting separately. And surely AI safety could be advanced further if the existing orgs worked together more closely.
So yes, choice is good, but to some degree we are and should be working towards common goals.