EA started pulling additional mixed or negative reactions after moving into AI safety, such as the Dylan Matthews article or all the people who had prior familiarity with LessWrong and thought the whole thing was kooky.
Also, people’s reactions to wild animal suffering proposals seem to be substantially more negative than reactions to AI safety work (dataset: comment replies to McMahan’s and MacAskill’s articles, comment replies to AI safety editorials, and several thousand Reddit comments).
I see more negative reactions to AI safety. I don’t believe either of us has strong enough evidence to make a solid claim that one attracts substantially more negative PR than the other.
No one is opposed to the basic idea of researching AI safety; some people just think it’s silly. But people genuinely believe that intervening in nature is ethically wrong. The issue also ties into debates over meat consumption, where people are already primed to be irrational. For these reasons you see people condemn the idea in stronger terms than they use when talking about AI.
People react more erratically and strongly to AI safety if they are already involved in computer science and AI. But that’s not a representative reference class.
Which McMahan and MacAskill articles?
McMahan: http://opinionator.blogs.nytimes.com/2010/09/19/the-meat-eaters/
MacAskill: http://qz.com/497675/to-truly-end-animal-suffering-the-most-ethical-choice-is-to-kill-all-predators-especially-cecil-the-lion/