I think moral cluelessness is the best argument against effective altruism in general, and this post makes that point better than any other I have seen. I do not mean that as a criticism or even as a bad thing. I just mean that this sort of thinking (possibly correctly, I don't even know) suggests to me that it might be time to give up on doing good.
I think your coverage of Scott Alexander's alleged association with HBD is both unfair and unethical. This is evidenced in part by the fact that you lead your post about him with an allegedly leaked private email. You acknowledge deep into your post that you are largely basing your accusations on various unconfirmed sources, yet you repeatedly summarize your claims about him without such disclaimers. Even if the email were real, it seems to form almost the entire basis of your case against him, and you don't know the context of a private email. Even taking the email at face value, it does not say the things you imply it says.
I don't know Scott personally, but I have been a reader of his blog and various associated forums for many years. Contrary to your characterization, he has in fact actively pushed back against a lot of the discussion around HBD on his blog and in related spaces. I think your post about him undermines your credibility elsewhere.
Did you see this blog post from Wayne Hsiung? https://blog.simpleheart.org/p/the-mass-extermination-of-animals
Thanks for writing this up. I would be really interested in thoughts about whether this makes working on U.S. policy less worthwhile compared to other interventions. Some reasons it might not are that a) there is a lot of infrastructure work to be done on policy that spans multiple administrations, and b) there are elements of a Trump administration that might be good for animals that we could capitalize on (see, for example, Project 2025's recommendations for cutting farm subsidies; also consider some people in Trump's orbit who seem to care about animals and wield influence; also consider that Trump's last secretary of agriculture said more positive things about alt proteins than Biden's, etc.).
Animal welfare has also been somewhat salient for Republicans. As far as I am aware, these instances have all been focused on pet-related issues, but I still think it says something that it's been a focus. There was the Peanut the squirrel saga (arguably not welfare per se, but it still revolved around the life of a non-human animal); there was the dog-shooting story that seemed to sink Kristi Noem; and there were the baseless accusations that immigrants were eating cats and dogs. Maybe there is a way to leverage some of this sentiment into broader animal welfare initiatives?
Unfortunately, I don't see Vivek as being directly influential on animal issues. Politico mentioned him as a possible head of the Department of Homeland Security, which would keep him busy elsewhere and away from animal issues. I really hope I am wrong about this; I was also viewing him as a possible silver lining.
It seems to me that you are doing more to associate HBD with EA by linking this here than Scott Alexander was allegedly doing by sending a private email.
Would you be able to share a source for the $9 billion figure? I'm interested in it for another project I am working on, not as it relates to this debate.
It seems to me that any effectiveness costs stemming from limited public support are already baked into existing cost-effectiveness estimates. It also seems to me that the fact that animal welfare is comparatively unpopular means that it is more neglected and therefore has more low-hanging fruit.
I don't think any of the popularity-based arguments really support the claim that there is going to be a large backlash that has not yet manifested. I agree that a world where we knew everyone would be 100 percent behind the idea of improving welfare, but for some reason hadn't made it happen out of inertia, would make animal welfare interventions even more cost-effective. However, I don't think this means we should favor global health and development over animal welfare, any more than the possibility that people might resent helping poor people in poor countries over poor people in their own country means we should focus more on helping the domestic poor out of fear of backlash.
This post is mostly about how animal welfare is less popular than global health, but I don't really see how this (probably correct) claim translates to it being less effective. Taking the first argument at face value, that some people won't like being in some ways forced to pay more or change their habits does not seem to translate to "it is not cost-effective to successfully force them (and, one hopes, eventually change their hearts and minds) anyway." This was precisely the case for a lot of social movements (abolition, women's suffrage, civil rights, workers' rights, the environmental movement, etc.), yet all these movements were to various degrees successful.
It seems to me that for any of these popularity-based arguments to hold water, you need a follow-on of "and therefore it is not cost-effective to invest in them, and here is the evidence." However, I think we have a lot of evidence for the cost-effectiveness of investing in animal interventions; see cage-free egg campaigns, for example. I similarly don't understand the relevance of other popularity-based concerns, such as being accused of being culturally insensitive. What is the implication for effectiveness if such accusations are made? Why does that matter?
You don’t think a lot of non-EA altruistic actions involve saving lives??
I think of the meat-eater problem as pretty distinct from general moral cluelessness. You can estimate how much additional meat people will eat as their incomes increase or as they continue to live. You might be highly uncertain about weighing animals vs. humans as moral patients, but that is also something you can pretty directly debate, and you can see the implications of different weights. I think of cluelessness as applying only when there are many, many possible consequences that could be highly positive or negative, and it's nearly impossible to discuss or attempt to quantify them because the dimensions of uncertainty are so numerous.
The point I was initially trying to make was only that I don't think the generalized cluelessness critique particularly favors one cause (for example, animal welfare) over another (for example, human health), or vice versa. I think you might make specific arguments about uncertainty regarding particular causes or interventions, but pointing to a general sense of uncertainty does not really move the needle toward any particular cause area.
Separate from that point, I do sort of believe in cluelessness (moral and otherwise) more generally, but honestly I just try to ignore that belief for the most part.
Yes, I agree with this.
I am pretty unmoved by this distinction, and based on the link above, it seems that Greaves is really just making the point that a longtermist mindset incentivizes us to find robustly good interventions, not that it actually succeeds in doing so. I think it's pretty easy to make the cluelessness case about AI alignment as a cause area, for example. It seems quite plausible to me that a lot of so-called alignment work is actually just serving to speed up capabilities. It also seems to me that you could align an AI to human values and find that human values are quite bad. Or you could align AI well enough to avoid extinction and find that the future is astronomically bad and extinction would have been preferable.
This critique seems to me to be applicable to the entire EA project.
It seems like the whole premise of this debate is (rightly) based on the idea that there is in fact a necessary trade-off between human and animal welfare, no? I.e., if we give the $100 million toward the most cost-effective human-focused intervention we can think of, then we are necessarily not giving it toward the most cost-effective animal-focused intervention we can think of. Of course it is theoretically possible that there exists some intervention that is simultaneously the most cost-effective on both a humans-helped-per-dollar and an animals-helped-per-dollar basis, but that seems extremely unlikely.
I am curious where you think it stops. What standard of living are people “obligated” to sink to in order to help strangers? I don’t deny any of this is good or praiseworthy, but it doesn’t seem to have any limiting principle. Should everyone live in squalor, forego a family/deep friendships, and not pursue any passions because time and money can always be spent saving another stranger?
Yes, but I think it's significant that one is morally entitled, not just legally entitled. In other words, imagine replacing pressing the button with actually doing the work to earn the $6k. Do you think you are, for example, obligated to drive 12 hours each way in order to pull a drowning child out of a lake? The amount of money in your bank account is endogenous to how much work and effort you put into filling it, whereas I think the way this thought experiment is framed makes it sound like the money fell from the sky.
If you think you are in fact obligated to drive 24 hours, increase your own risk of death by taking on a risky job, or give up time with your children in order to save a stranger, then I am more sympathetic to the idea that you are obligated to give up money for that stranger. However, I do not share that intuition.
The difference is that property is distributed based on morally significant, non-random, voluntary activities. See Governing Least by Dan Moller for a moral defense of property. This implies that a) you are entitled to your property because you earned it through morally legitimate means, and b) it is good for society more broadly to accept the moral legitimacy of property earned through creation, discovery, etc., so the norm that people are in general entitled to their property in most cases is pro-social.
In contrast to most forms of property, money accepted for murder is not acquired on a morally defensible basis. This means that a) you are not entitled to that money, and b) supporting such a norm would be bad.
There are of course cases in which you might acquire property in a non-morally-legitimate way. I think one's entitlement in those cases is far more tenuous, but that is not the case for the bulk of most people's money.
Christianity is interpreted wildly differently by different people. I agree that there is a coherent version of Christianity that is not only compatible with EA but demands it. There are also many equally coherent versions of Christianity that are strictly incompatible with at least some elements of it. I'm all for religious people making the case for EA to their co-religionists in religious forums, but I don't think it's a good idea for people on this forum, who have no common religion uniting us, to be discussing the Christian theology of EA. The conversation gets extremely muddled extremely quickly because most participants are not Christian at all, and those who are likely do not share a common version of Christianity. It is extremely difficult to advance the conversation under these circumstances, and it is likely to come off as quite alienating to religious people (who could be entirely swayed by secular arguments).