I’m interested in two ideas. First, Brian Tomasik has suggested paying farmers to use humane insecticides (which is honestly not about WAS); calculations suggest this could prevent 250,000 painful deaths per dollar. Second, Open Phil might consider lumping WAS under its farm animal welfare program. Both look like paths to circumvent the biggest red flag, which is the profoundly negative reaction that most people have to calm discussions of wild animal suffering. Intuitively, it seems like an idea that is still ahead of its time for the general population. I think most people would agree that it’s disliked to a greater extent than perhaps any other issue on the table.
I don’t know how big a problem it is for the EA movement if lots of people notice what Open Phil is doing. It might be a problem. But doing something like the above would not be very controversial, would begin to shift priorities, and would create a foundation of work that blurs the line between traditional animal welfare and WAS work.
AI safety gets a similar negative reaction to WAS, but it’s Open Phil’s top priority for 2016. So I don’t think this is a major concern.
I definitely don’t think WAS should be part of the farm animal welfare program—it will almost certainly end up underfunded and won’t do as much good as it would as a separate cause area with dedicated staff.
EA started drawing additional mixed or negative reactions after moving into AI safety: see the Dylan Matthews article, or consider all the people who had prior familiarity with LessWrong and thought the whole thing was kooky.
Also, people’s reactions to wild animal suffering proposals seem to be substantially more negative than reactions to AI safety work (dataset: comment replies to McMahan’s and MacAskill’s articles, comment replies to AI safety editorials, and several thousand Reddit comments).
I see more negative reactions to AI safety. I don’t believe either of us has strong enough evidence to make a solid claim that one attracts substantially more negative PR than the other.
No one is actually opposed to the basic idea of researching AI safety; some people just think it’s silly. But people genuinely think that intervening in nature is ethically wrong. The issue also links to debates over meat consumption, where people are already wired to be irrational. For these reasons you see people call out the idea in stronger terms than they use for AI.
People react more erratically and strongly to AI safety if they are already involved in computer science and AI. But that’s not a representative reference class.
What would you call that kind of suffering if not WAS?
Farm insect suffering? It’s insects being deliberately killed on farms. That’s very different from the idea of intervening in natural ecosystems.
Which McMahan and MacAskill articles?
McMahan: http://opinionator.blogs.nytimes.com/2010/09/19/the-meat-eaters/
MacAskill: http://qz.com/497675/to-truly-end-animal-suffering-the-most-ethical-choice-is-to-kill-all-predators-especially-cecil-the-lion/