It seems like many active AI safety researchers, perhaps even a majority, are aware of Yudkowsky's and Bostrom's views but agree with only parts of what they have to say (e.g. Russell, Amodei, Christiano, and the teams at DeepMind, OpenAI, etc.).
There may still not be enough intellectual diversity in the field, but sharing Bostrom's or Yudkowsky's perspective isn't a prerequisite for involvement.