To me, given the proportion of resources EA stakes on AI safety, it would be worth trying to understand why people (particularly knowledgeable ML researchers) have a different set of priorities from many in EA. It seems suspicious how little intellectual credit ML/AI people outside of EA are given.
I don’t see this as suspicious, because I suspect EAs and AI researchers are driven by different goals. The disagreement doesn’t surprise me: even if AI risk is high, someone with a selfish worldview can still rationally choose to work on AI research.