Many grantee organisations report the lessons they have learnt to their donors. Open Philanthropy must have accumulated a lot of information on best practices for animal welfare organisations. As far as I understand, grantmakers are wary of giving object-level advice and micromanaging grantees. On the other hand, many organisations already spend a lot of time trying to learn about the best (and worst) practices of other organisations. Could the Open Phil animal welfare team prepare an anonymised write-up on what their grantees report as the reasons for their successes and failures?
emre kaplan
I believe we’re in agreement that the official definition of veganism is vague, since you also use words like “ambiguity” or “unclear” when describing it. In my comment I’m stating that the vagueness of that definition isn’t that much of a problem.
I’m also curious why you think consuming animal-tested ingredients is a first-order harm whereas crop deaths are a second-order harm. I can see how tractors crushing animals might be accidental rather than intentional. But when I compare pesticides to animal testing, both seem to be instances of intentionally exposing animals to harmful chemicals to improve product quality.
I think vagueness isn’t that much of a problem. Many useful categories are vague. Even murder and rape are vague. People can say: “We don’t know the exact point at which harm to animals becomes unacceptable. But morality is very difficult; that’s to be expected. We know some actions (such as eating animal products) are definitely too bad, so we can confidently claim they are non-vegan.”
I think bigger problems with the consistency of veganism are:
-Some obviously vegan actions harm more animals than some obviously non-vegan actions.
-Some vegans cause more animal killings than some non-vegans.
-Veganism itself optimises for minimising animal product consumption. It doesn’t optimise for minimising killings caused or minimising harm caused or minimising the suffering in the world.
I think what happens is that the human brain finds it much easier to attach moral emotions like disgust and shame to physical objects. So our emotional reactions track “can I sustainably feel disgust towards this physical object?” rather than “is this action causing the least harm possible?”. If something can be completely eliminated, it gets tabooed. On the other hand, it’s unstable to wear clothes while also feeling disgusted when someone buys far too many clothes. So you can’t create a taboo around clothing or vehicle use. I wrote more about this topic here.
Thank you. The section with “It’s typically legal for children of any age to work for their parents’ business.” in particular is new to me. I will replace the examples.
My understanding is that there are often minimum age limits for minor employment, with a blanket ban below a certain age. When I use the expression “child labour”, I don’t mean 17-year-olds. But you’re right that my phrasing isn’t precise there. I also agree that people won’t mind children selling lemonade on their own. But in my conversations there was general agreement that you shouldn’t make your 10-year-old child work full-time, and you absolutely shouldn’t employ any kid of that age as an employer.
Even less controversially, since there is agreement that early children’s rights legislation was way below the acceptable standard, it serves as an example of “getting someone to do something less bad but still forbidden”.
If taking a salary cut is considered honest fulfilment of the GWWC pledge, I’m willing to take the pledge.
I work in an EA-funded non-profit. It seems inefficient to donate my income instead of taking a salary cut.
Thanks, many websites seem to report this without the qualifier “per quarter”, which confused me.
Where does the “$200/user/year” figure come from? They report $68.44 average revenue per user for the US and Canada in their 2023 Q4 report.
This was super informative for me, thank you.
I’m confused by this section in this interview:
“Well, the simplest toy example is just going to be, imagine that you have some assessment that says, I think chickens are in a really bad state in factory farms, and I think that if we move layer hens from battery cages into a cage-free environment, we make them 40% better off. And I think that after doing this whole project — whatever the details, we’re just going to make up the toy numbers — I think that chickens have one-tenth of the welfare range of humans.
So now we’ve got 40% change in the welfare for these chickens and we’ve got 10% of the welfare range, so we can multiply these through and say how much welfare you’d be getting in a human equivalent for that benefit to one individual. Then you multiply the number of individuals and you can figure out how much benefit in human units we would be getting.”
It doesn’t seem to me that this follows. Let’s assume the “typical” welfare range for chickens is −10 to 10, and that for humans it’s −100 to 100. This is how I interpret “chickens have 10% of the welfare range of humans”. Let’s also assume moving from caged to cage-free housing eliminates 50% of the suffering. We still don’t know whether that’s a move from −10 to −5 or from −6 to −3. We also don’t know how to place QALYs within this welfare range: when we save a human, should we assume their welfare is 100 throughout their life?
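The ambiguity above can be made concrete with a small sketch (all numbers are made up, mirroring the toy numbers in the comment):

```python
# Sketch of the ambiguity: assume the human welfare range is -100..+100 and
# chickens have 10% of it, i.e. -10..+10. These are hypothetical numbers.
CHICKEN_MIN, CHICKEN_MAX = -10, 10

def gain_from_halving_suffering(baseline):
    """Absolute welfare gain if a chicken at `baseline` (< 0) has 50% of its
    suffering eliminated, i.e. moves halfway toward 0."""
    assert CHICKEN_MIN <= baseline < 0
    return -baseline * 0.5

# "Eliminating 50% of the suffering" is compatible with very different gains:
print(gain_from_halving_suffering(-10))  # 5.0 (a move from -10 to -5)
print(gain_from_halving_suffering(-6))   # 3.0 (a move from -6 to -3)

# So knowing the welfare-range ratio (10%) and the percentage improvement (50%)
# does not pin down the absolute gain without knowing the baseline welfare level.
```

The same issue carries over when converting to “human units”: the multiplication in the interview only works if the percentage improvement is defined as a fraction of the whole welfare range rather than of the animal’s current suffering.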
This also makes it even more crucial to provide a tight technical definition for welfare range so that scientists can place certain experiences within that range.
Yeah, I think so.
Thank you for all your feedback Constance!
Assume that an agent A is doing something morally wrong, e.g. fighting in a violent, unjust war. You don’t have the power to stop the war altogether, but you can get the relevant state to sign an agreement against chemical weapons and at least prevent the most horrific forms of killing. What could be the deontological restrictions on negotiating with wrongdoers? My preliminary conclusion: it’s good to negotiate for outcomes that are ex-ante Pareto superior even if they don’t end the constraint violations.
What might be the deontological constraints violated by animal product consumption, and how serious are they? There is a meme among animal advocates that all animal product consumption is murder. For that reason, it’s morally forbidden to ask people to reduce their animal product consumption, since that would be akin to asking them to reduce the amount of murder they commit.
There are also some instances of doing harm where asking for reduction is totally permissible according to common-sense morality, such as asking people to reduce their carbon emissions or their consumption of products made with slave labour. I want to look more into whether the murder comparison is really apt.
Short thoughts on whether spreading the concept of veganism could be in tension with the principle “love the sinner, hate the sin”. Veganism might implicitly create the category of non-vegans, and might make activists see the world in terms of “sinners vs. innocents”.
Thoughts on the idea that “working for institutional change is far more effective than working for individual change in animal advocacy”. How strong is the evidence behind that statement, and in which contexts might it be wrong?
Thoughts on how to resolve the tension between being maximally honest and running a coalition that unites stakeholders with different positions on a topic. Being maximally honest requires saying what you believe as much as possible. Running a coalition requires acting within the common ground and not illegitimately seizing the platform to promote your specific position on the matter.
Empirical research I would like to see in animal advocacy. Primarily to (1) get more robust estimates of impact in animal advocacy, and (2) address the fact that most research in animal advocacy is understandably driven by donor needs. I would like to list some questions that would be more decision-relevant for animal advocacy organisations (when to campaign, how to campaign, which targets to select, etc.).
Thanks for this, I had a couple of things listed out. This looks like a nice way to prioritise them. I will list some ideas here.
Does requiring ex-ante Pareto superiority incentivise information suppression?
Assume I emit x kg of carbon dioxide. Later on, I donate to offset 2x kg of carbon dioxide emissions. The combination of these two actions seems to make everyone better off in expectation; it’s ex-ante Pareto superior. Even though we know that my act of emitting carbon and offsetting it will cause the deaths of different individuals through different extreme weather events compared to not emitting at all, climate scientists report that higher carbon emissions make climate change more severe overall. Since our forecasts are not granular enough and nobody is foreseeably made worse off by reducing emissions, it’s morally permissible to reduce the total amount of emissions.
This position seems to incentivise information suppression.
Assume a climate scientist creates a reliable, sophisticated climate model that can forecast the specific weather events caused by different levels of carbon emissions. Such a model would allow us to infer that reducing emissions by a specific amount would make a specific village in Argentina worse off. The villagers could then complain to a politician: “Your offsetting/reduction policy foreseeably causes a severe drought in my region, therefore it makes us foreseeably worse off.”
Policymakers who want to act permissibly would then have an incentive to prevent the creation of such a detailed climate model, if ex-ante Pareto superiority were a sound condition for permissibility.