I find it emotionally draining when heated topics become battlegrounds for social proofing through the mass use of agreement votes/karma. It makes me feel as though people are trying to manipulate me by illegitimate means and that I’m a target of aggression. I don’t have any good solutions here, but I wanted to offer feedback on my experience.
emre kaplan
How is HBD action-relevant for EA in a pre-AGI world? Do you think getting people to accept HBD is one of the top 50 interventions for making progress on AI safety and governance?
“there have been a bunch of radical-leftist animal rights people at various conferences that have been cited to me many times as something that made very promising young people substantially less likely to attend (I don’t want to dox the relevant attendees here, but would be happy to DM you some names if you want).”
I’m curious about the type of behaviour rather than the names of the people.
I have seen popular uses of the term “effective altruist” in a way that doesn’t require self-identification. In this example, Peter Singer refers to Bill Gates, Melinda French Gates and Warren Buffett as the most effective altruists in history.
My two cents:
I briefly looked into where wealthy Muslims in Türkiye donate their zakat. A few people mentioned that one common way businesspeople pay their zakat is by giving bonuses to their employees. I saw quite a lot of discussion of this on Islamic jurisprudence websites, but I couldn’t identify anyone explicitly doing it, as people are discouraged from talking about their donations.
“constraint on warranted hostility: the target must be ill-willed and/or unreasonable.”
Trying to apply this constraint seems to conflict with non-violent communication norms of not assuming intent and keeping the discussion focused on harms/benefits/specific behaviours.
Very interesting and exciting. Looking forward to learning from this.
Does requiring ex-ante Pareto superiority incentivise information suppression?
Assume I emit x kg of carbon dioxide. Later on, I donate to offset 2x kg of carbon dioxide emissions. The combination of these two actions seems to make everyone better off in expectation: it’s ex-ante Pareto superior. We know that emitting and then offsetting will cause the deaths of different individuals (via different extreme weather events) than not emitting at all would, but climate scientists can only tell us that higher carbon emissions make the severity of climate change worse overall. Since our forecasts are not granular enough to say who is affected, nobody is made foreseeably worse off by reducing emissions, and so it’s morally permissible to reduce the total amount of emissions.
This position seems to incentivise information suppression.
Assume a climate scientist creates a reliable and sophisticated climate model that can forecast the specific weather events caused by different levels of carbon emissions. Such a model would allow us to infer that reducing emissions by a specific amount would make a specific village in Argentina worse off. The villagers could then complain to a politician: “Your offsetting/reduction policy foreseeably causes severe drought in our region, therefore it makes us foreseeably worse off.”
Policy makers who want to act permissibly would therefore have an incentive to prevent such a detailed climate model from being built, if ex-ante Pareto superiority were a sound condition for permissibility.
Many grantee organisations report the lessons they learnt to their donors, so Open Philanthropy must have accumulated a lot of information on best practices for animal welfare organisations. As far as I understand, grant makers are wary of giving object-level advice and micromanaging grantees. On the other hand, many organisations already spend a lot of time trying to learn about the best (and worst) practices of other organisations. Could the Open Phil animal welfare team prepare an anonymised write-up of what their grantees report as the reasons for their successes and failures?
I believe we’re in agreement that the official definition of veganism is vague, since you also use words like “ambiguity” and “unclear” when describing it. In my comment I’m arguing that the vagueness of that definition isn’t much of a problem.
I’m also curious why you think consumption of animal-tested ingredients is a first-order harm whereas crop deaths are a second-order harm. I can see how tractors crushing animals might be accidental rather than intentional. But when I compare pesticides to animal testing, both seem to be instances of intentionally exposing animals to harmful chemicals to improve product quality.
I think vagueness isn’t that much of a problem. Many useful categories are vague; even murder and rape are vague. People can say: “We don’t know the exact point at which harm to animals becomes unacceptable. But morality is very difficult, and that’s to be expected. We know some actions (such as eating animal products) are definitely too bad, and for that reason we can confidently claim they are non-vegan.”
I think bigger problems with the consistency of veganism are:
-Some obviously vegan actions harm more animals than some obviously non-vegan actions.
-Some vegans cause more animal killings than some non-vegans.
-Veganism itself optimises for minimising animal product consumption. It doesn’t optimise for minimising the killings caused, the harm caused, or the suffering in the world.
I think what happens is that the human brain finds it much easier to attach moral emotions like disgust and shame to physical objects. So our emotional reactions track “can I sustainably feel disgust towards this physical object?” rather than “is this action causing the least harm possible?”. If something can be completely eliminated, it gets tabooed. On the other hand, it’s unstable to wear clothes yourself while feeling disgusted when someone buys far too many clothes, so you can’t create a taboo around clothing or vehicle use. I wrote more about this topic here.
Thank you. The section with “It’s typically legal for children of any age to work for their parents’ business.” in particular is new to me. I will replace the examples.
My understanding is that there are often minimum age limits for minor employment, with a blanket ban below a certain age. When I use the expression “child labour”, I don’t mean 17-year-olds. But you’re right that my phrasing isn’t precise there. I also agree that people won’t mind children selling lemonade on their own. But in my conversations there was general agreement that you shouldn’t make your 10-year-old child work full-time, and that as an employer you absolutely shouldn’t employ any kid of that age.
Even less controversially, since there is agreement that early children’s rights legislation was way below the acceptable standard, it serves as an example of “getting someone to do something less bad but still forbidden”.
Deontological Constraints on Animal Products
If taking a salary cut counts as honest fulfilment of the GWWC pledge, I’m willing to take the pledge.
I work at an EA-funded non-profit. It seems inefficient to donate my income rather than take a salary cut.
Thanks. Many websites seem to report this figure without the qualifier “per quarter”, which confused me.
Where does the “$200/user/year” figure come from? They report $68.44 average revenue per user for the US and Canada in their 2023 Q4 report.
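For what it’s worth, the mismatch is easy to see if you annualise the quarterly number. A rough sketch (the ×4 extrapolation is my own assumption and overestimates, since Q4 ARPU is seasonally high):

```python
# Meta reports ARPU per quarter; $68.44 is the Q4 2023 US & Canada figure.
q4_arpu_us_canada = 68.44

# Naive annualisation: multiply one quarter by four.
# (An overestimate, because Q4 ad revenue is seasonally high.)
naive_annual = q4_arpu_us_canada * 4

print(naive_annual)  # 273.76 -- already well above $200/user/year
```

So reading the quarterly figure as an annual one, or vice versa, changes the conclusion substantially.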
This was super informative for me, thank you.
I’m confused by this section in this interview:
“Well, the simplest toy example is just going to be, imagine that you have some assessment that says, I think chickens are in a really bad state in factory farms, and I think that if we move layer hens from battery cages into a cage-free environment, we make them 40% better off. And I think that after doing this whole project — whatever the details, we’re just going to make up the toy numbers — I think that chickens have one-tenth of the welfare range of humans.
So now we’ve got 40% change in the welfare for these chickens and we’ve got 10% of the welfare range, so we can multiply these through and say how much welfare you’d be getting in a human equivalent for that benefit to one individual. Then you multiply the number of individuals and you can figure out how much benefit in human units we would be getting.”
It doesn’t seem to me that this follows. Let’s assume the “typical” welfare range for chickens is −10 to 10, and for humans −100 to 100; this is how I interpret “chickens have 10% of the welfare range of humans”. Let’s also assume that moving from caged to cage-free eliminates 50% of the suffering. We still don’t know whether that’s a move from −10 to −5 or from −6 to −3. We also don’t know how to place QALYs within this welfare range: when we save a human, should we assume their welfare to be 100 throughout their life?
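To make the ambiguity concrete, here is a toy sketch. All numbers (the −10..10 and −100..100 ranges, the two scenarios) and the helper `human_equivalent_fraction` are my own illustrative assumptions, not Rethink Priorities’ actual method or estimates:

```python
# Assumed welfare ranges: chickens span 20 units, humans span 200 units,
# so chickens have 10% of the human welfare range.
CHICKEN_SPAN = 10 - (-10)
HUMAN_SPAN = 100 - (-100)
RANGE_RATIO = CHICKEN_SPAN / HUMAN_SPAN  # 0.1

def human_equivalent_fraction(before, after):
    """Welfare gain per chicken, expressed as a fraction of the human welfare range."""
    fraction_of_chicken_range = (after - before) / CHICKEN_SPAN
    return fraction_of_chicken_range * RANGE_RATIO

# Both moves "eliminate 50% of the suffering", but start from different points:
a = human_equivalent_fraction(-10, -5)  # 5/20 * 0.1 = 0.025
b = human_equivalent_fraction(-6, -3)   # 3/20 * 0.1 = 0.015

print(a, b)
```

The same “50% reduction in suffering” yields different human-equivalent gains depending on where the chickens start within the range, which is the gap in the quoted reasoning.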
This also makes it even more crucial to provide a tight technical definition of “welfare range”, so that scientists can place particular experiences within that range.
You should be familiar with this from activism: people use likes and mass comments on social media to make bystanders more likely to believe an idea or adopt certain attitudes purely through the social proof effect. I get a similar vibe from discussions under heated posts.