A thought about some of the bad dynamics on social media that occurred to me:
Some well-known researchers in the AI Ethics camp have been critical of the AI Safety camp (or associated ideas like longtermism). By contrast, AI Safety researchers seem to be neutral-to-positive on AI Ethics, so there is some asymmetry.
However, there are certainly mainstream non-safety ML researchers who are harshly (typically unfairly) critical of AI Ethics. And there are also AI-Safety/EA-adjacent popular voices (like Scott Alexander) who criticize AI Ethics. Then on top of this there are fairly vicious anonymous trolls on Twitter.
So some AI Ethics researchers reasonably feel they are being unfairly attacked, and that people socially connected to EA/AI Safety are in the mix, which may naturally lead to hostility even if it isn't precisely directed.
The vibe I usually get from posts by AI safety people is that fairness research is somewhere between useless and negligibly positive.
That’s the average online vibe maybe, but plenty of AGI risk people are going for detente.