A thought that occurred to me about some of the bad dynamics on social media:
Some well-known researchers in the AI Ethics camp have been critical of the AI Safety camp (or of associated ideas like longtermism). By contrast, AI Safety researchers seem to be neutral-to-positive on AI Ethics, so there is some asymmetry.
However, there are certainly mainstream non-safety ML researchers who are harshly (and typically unfairly) critical of AI Ethics. There are also AI-Safety/EA-adjacent popular voices (like Scott Alexander) who criticize AI Ethics. Then, on top of this, there are fairly vicious anonymous trolls on Twitter.
So some AI Ethics researchers reasonably feel that they’re being unfairly attacked, and that people socially connected to EA/AI Safety are among the attackers. That may naturally lead to hostility, even if it isn’t always well-directed.
The vibe I usually get from posts by AI safety people is that fairness research is somewhere between useless and negligibly positive.
That may be the average online vibe, but plenty of AGI risk people are going for détente.
These are excellent answers, thanks so much!
As more and more students get interested in AI safety while AI-safety-specific research positions fail to open up proportionally, I expect that many of them (like me) will end up as graduate students in mainstream ethical-AI research groups. Resources like these are helping me get my bearings.
Good luck!
(BTW, there’s been a big spurt of alignment jobs lately, including serious spots in academia, e.g. here, here, here. Probably not quite up to demand, but it’s better than you’d think.)