Thanks a lot! I’ve made the correction you pointed out.
Scenarios for cellular agriculture
I’m not objecting to having moral uncertainty about animals. I’m objecting to treating animal ethics as if it were a matter of taste. EAs have rigorous standards of argument when it comes to valuations of humans, but when it comes to animals they often seem to shrug and say “It depends on how much you value them” rather than discussing how much we should value them.
I didn’t intend to actually debate what the proper valuation should be. But FWIW, the attitude that talking about how we should value animals “is likely to be emotionally charged and counterproductive”—an attitude I think is widespread given how little I’ve seen this issue discussed—strikes me as another example of EAs’ inconsistency when it comes to animals. No EA hesitates to debate, say, someone’s preference for Christians over Muslims. So why are we afraid to debate preference among species?
I take issue with the statement “it depends greatly on how much you value a human compared to a nonhuman animal”. Similar things are often said by EAs in discussions of animal welfare. This makes it seem as if the value one places on nonhumans is a matter of taste, rather than a claim subject to rational argument. The statement should read “it depends greatly on how much we ought to value a human compared to a nonhuman”.
Imagine if EAs went around saying “it depends on how much you value an African relative to an American”. Maybe there is more reasonable uncertainty about between- as opposed to within-species comparisons, but we still demand good reasons for the value we assign to different kinds of humans. This idea is at the core of Effective Altruism. We ought to do the same with non-human sentients.
I’m not saying any experiment is necessarily useless, but if MFA is going to spend a bunch of resources on another study, they should use methods that won’t exaggerate effectiveness.
And it’s not only that “one should attend to priors in interpretation”—one should specify priors beforehand and explicitly update conditional on the data.
Confidence intervals still don’t incorporate prior information and so give undue weight to large effects.
I would be especially wary of conducting more studies if we plan on trying to “prove” or “disprove” the effectiveness of ads with so dubious a tool as null hypothesis significance testing.
Even if in a new study we were to reject the null hypothesis of no effect, this would arguably still be pretty weak evidence in favor of the effectiveness of ads.
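For concreteness, here is a minimal sketch of what specifying a prior in advance and explicitly updating on the data could look like. Everything in it is hypothetical: the group sizes, the counts, and the prior width are made-up numbers for illustration, not MFA’s actual results or analysis.

```python
import numpy as np
from scipy import stats

# Hypothetical ad study: treatment and control groups report whether they
# reduced meat consumption. All numbers are made up for illustration.
n_treat, k_treat = 1000, 120   # saw the ad; 120 reported reducing
n_ctrl,  k_ctrl  = 1000, 100   # control group; 100 reported reducing

# Skeptical prior on the difference in reduction rates, chosen *before*
# seeing the data: centered at zero, most mass on effects under ~2 points.
prior_mean, prior_sd = 0.0, 0.01

# Normal approximation to the likelihood of the observed difference.
p_t, p_c = k_treat / n_treat, k_ctrl / n_ctrl
diff = p_t - p_c
se = np.sqrt(p_t * (1 - p_t) / n_treat + p_c * (1 - p_c) / n_ctrl)

# Conjugate normal-normal update: precisions add, means are precision-weighted.
post_var = 1 / (1 / prior_sd**2 + 1 / se**2)
post_mean = post_var * (prior_mean / prior_sd**2 + diff / se**2)
post_sd = np.sqrt(post_var)

print(f"Observed difference: {diff:.3f} (SE {se:.3f})")
print(f"Posterior for true difference: {post_mean:.3f} +/- {post_sd:.3f}")
print(f"P(effect > 0) = {1 - stats.norm.cdf(0, post_mean, post_sd):.2f}")
```

The point is just that a prior concentrated near zero pulls the estimate toward zero, whereas a bare significance test or confidence interval reports only the noisier observed difference.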
As prohibitions on methods of animal exploitation—rather than just regulations which allow those forms of exploitation to persist if they’re more “humane”—I think these are different from typical welfare reforms. As I say in the post, this is the position taken by abolitionist-in-chief Gary Francione in Rain Without Thunder.
Of course the line between welfare reform and prohibition is murky. You could argue that these are not, in fact, prohibitions on the relevant form of exploitation—namely, raising animals to be killed for food. But in trying to figure out whether welfare reforms delay progress, we have to go on what evidence we have...and the fact that we do have these prohibitions on certain practices, in many cases based on the explicit recognition of animal interests that shouldn’t be violated (e.g. the Five Freedoms), seems to be about as good as it gets in terms of historical evidence bearing on the debate over welfarism.
I haven’t seen much on welfare reforms in these industries in particular. In the 90s Sweden required that foxes on fur farms be able to express their natural behaviors, but this made fur farming economically unviable and it ended altogether...so I’m not sure what that tells us. Other than that, animals used in fur farming and cosmetics testing are/were subject to general EU animal welfare laws, and laws concerning farm and experimental animals, respectively.
I think welfare reform having no effect on abolition is a reasonable conclusion. I just want to argue that it isn’t obviously counterproductive on the basis of this historical evidence.
Thanks for the comments!
“...we have evidence that welfare reforms lead to more welfare reforms, which might suggest someday they will get us to something close to animal rights, but I think Gary Francione’s historical argument that we have had welfare reforms for two centuries without significant actual improvements is a bit stronger....”
My point is that welfare reforms have led not only to more welfare reforms, but prohibitions as well. Even if we disqualify bans on battery cages, veal crates, and gestation crates as prohibitions, there are still bans on fur farming and cosmetics testing. There are also what might be considered proto-rights in the Five Freedoms.
“...many movements historically have come to a similar conclusion that seeking a more dramatic shift (abolition or desegregation) was more valuable than improved conditions (slavery reform or improved segregated black schools).”
I think the success of incrementalist vs. abolitionist strategies is highly context dependent. A society may simply not be ready to even consider the abolition of a particular institution. This seems to have been the case with abolitionist anti-vivisectionism.
And there is bias in looking at cases like slavery and civil rights in which dramatic shifts were actually achieved. Of course it looks, in retrospect, like pursuing a dramatic shift was the best choice! But history is littered with people whose calls for dramatic change were not realized: socialists, libertarians, anarchists, fascists, adherents of all religions, radical environmentalists, anti-globalizationists, anti-nuclearists, pacifists, Bernie Bros, and 19th century anti-vivisectionists. Arguably, however, each of these groups has been able to advance some of their goals through small changes.
My point is not to advocate for welfarism over abolitionism, but to say we can’t predict what will work in a given time and place, and therefore we should diversify our strategic portfolio. And I do think recognizing that welfarism does not seem to have prevented progress towards abolition is especially important in the case of developing countries, which seem particularly far from being receptive to animal liberation, but where animal welfare reforms could reduce the suffering of a lot of animals in the meantime.
In the EU, prohibitions on battery cages, gestation crates, veal crates, and cosmetics testing, and the adoption of the Five Freedoms as a basis for animal welfare policy. In the UK, Austria, the Netherlands, Croatia, and Bosnia and Herzegovina, bans on fur farming.
Lessons from the history of animal rights
Echo what Issa said. I’ve been working with Vipul to create articles on animal welfare and rights topics, and it’s been a valuable experience. I’ve learned about Wikipedia, and more importantly I have learned a ton about the animal welfare/rights movement that will inform my own activism. I have already drawn on what I’ve learned and written about in conversations with other activists about what’s effective. I think it’s really good that now anyone will be able to easily access this information. Plus Vipul’s great to work with.
Seems like you ought to conduct the analysis with all of the reasonable priors to see how robust your conclusions are, huh?
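To make the suggestion concrete, here is a minimal sketch of that kind of robustness check, again with hypothetical numbers rather than anything from the actual study: rerun the same posterior calculation under several plausible prior widths and see whether the qualitative conclusion survives.

```python
import numpy as np
from scipy import stats

# Sensitivity check: redo the posterior calculation under several zero-centered
# priors of different widths. All numbers are hypothetical.
diff, se = 0.02, 0.014   # observed difference in rates and its standard error

for prior_sd in [0.005, 0.01, 0.02, 0.05]:
    post_var = 1 / (1 / prior_sd**2 + 1 / se**2)
    post_mean = post_var * (diff / se**2)        # prior mean is zero
    p_positive = 1 - stats.norm.cdf(0, post_mean, np.sqrt(post_var))
    print(f"prior sd {prior_sd:.3f}: posterior mean {post_mean:.4f}, "
          f"P(effect > 0) = {p_positive:.2f}")
```

If the conclusion flips depending on which of these priors you pick, the data are not doing much work and the result should be reported as fragile.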
“That’s not what’s happening here, because the case in question is an abstract discussion of a huge policy question regarding what stance we should take in the future, with little time pressure. These are precisely the areas where we should be consequentialist if ever we should be.”
Most people’s thinking is not nearly as targeted and consequentialist as this. On my model of human psychology, supporting the exploitation of animals in service of third-world development reinforces the belief that animals are for human benefit in general (rather than in this one instance where the benefits to all sentient beings were found to outweigh the harms). Given that speciesism is responsible for the vast majority of human-caused suffering, I think we should be extremely careful about supporting animal exploitation, even when it looks net-positive at first blush.
And I’m not concerned about EA looking “heartless and crazy” by endorsing livestock as a development tool; I was just pointing out that there are certain things EA should take off the table for signalling and memetic reasons.
“I doubt that we are well-advised to insist that people in the developing world cannot/should not own animals as assets (regardless of the balance of cost and benefits).”
There’s a difference between insisting that people in the developing world not own animals as assets, which I agree would be mistaken, and opposing the adoption of livestock ownership as a development strategy.
I think adopting and spreading some deontic heuristics regarding the exploitation of animals is good from a consequentialist perspective. Presumably, EAs don’t consider whether enslaving, murdering, and eating other humans “is for the greater good impartially considered”. Even putting that on the table would make EA look much more heartless and crazy than it already does, and risk spreading some very dangerous memes. Likewise, not taking a firm stand against animal exploitation as a development tool makes EA seem less serious about helping animals, and reinforces the idea that animals are here to benefit humans.
A few ideas-
-Consider getting people to think about improving the effectiveness of a cause they already care about first, rather than leading with cause prioritization.
-I see the point about the effectiveness of targeting secular people, but I worry about EA being excluded from mainstream thought in the long run due to this kind of strategy. Just something to think about more carefully.
-Perhaps there needs to be more discussion of effective advocacy as an individual. What is the importance of charisma and other “soft” attributes that are difficult to quantify? How can we best invest in our own advocacy abilities? Do we need to spend more time developing interests and social skills that allow us to persuade people outside of rationalist-type circles?
-On a related note...unless you know your audience really well...for Christ’s sake, don’t lead with killer robots
I agree that EA as a whole doesn’t have coherent goals (I think many EAs already acknowledge that it’s a shared set of tools rather than a shared set of values). But why are you so sure that “it’s going to cause much more suffering than it prevents”?