Hmm, I’m not quite sure I agree that there’s such a clear division into two camps. For example, I think Seth is actually not that far from Timnit’s perspective on AI Safety/EA. Perhaps a bit less extreme and hostile, but I see that as a difference in degree rather than a difference in kind.
I also disagree that people in your second camp are going to be fruitful collaborators, as they don’t just have technical objections but, I think, core philosophical objections to EA (or to what they view as EA).
I guess overall I’m not sure. It’d be interesting to see some mapping of AI researchers in some kind of belief-space plot so that different groups could be distinguished. I think it’s very easy to extrapolate from a few small examples and miss what’s actually going on, and I admit I might very well be doing that with my pessimism here. But I sadly think it’s telling that I see so few counterexamples of collaboration, while I can easily find examples of AI researchers who are dismissive of or hostile to the AI Safety/xRisk perspective.
I don’t think you have to agree on deep philosophical stuff to collaborate on specific projects. I do think it’ll be hard to collaborate if one or both sides are frequently publicly claiming the other is malign and sinister, or idiotic and incompetent, or incredibly ideologically rigid and driven by emotion rather than reason (etc.).