I think AI safety needs to be promoted as a cause, not as a community. If you have personal moral uncertainty about whether to focus on animal suffering or AI risk, it might make sense to be a vegan AI researcher. But if the uncertainty is about what the overall priority should be, you shouldn’t try to mix the two.
People in machine learning are increasingly of the opinion that there is a risk, and it would be much better to educate them than to try to bring them into a community whose goals they don’t, and don’t need to, care about.
But if we were to eliminate the EA community, an AI safety community would quickly replace it, since people tend to become attached to what they do. This is even more likely when there is a moral connotation: people working at a charity, for example, tend to build an identity around it.
I’d suggest that we need multiple paths for drawing in talent, and general EA community building has been surprisingly successful so far.