Hi Vasco, I am curious why you stop at nematodes and don’t extend your conclusions to single-celled organisms such as bacteria.
Just saying.
If you could set the dial on how interested you wanted candidates to be in taking a position before applying, do you have any thoughts on where you would set it? For example, would you generally hope that candidates who put their probability of accepting an offer at 25 percent would apply?
If that seems too abstract, could you comment generally on how many resources you put into evaluating the marginal candidate in a hiring round?
Relatedly, how often do your top candidates decline offers and how much does that throw off your process?
Thanks for doing this! I’m a mid-career policy person currently in what I consider to be a relatively high-impact role in a non-AI cause area. I am good at my job but keep thinking about AI and whether I should be pivoting to an AI policy role. I’ve been reading books about AI and I’ve listened to lots of 80k hours, Dwarkesh, and other podcasts on the topic. I have a strong sense of various threat models from AI, but I still don’t have developed views on what good AI policy would look like. What should I be reading/listening to in order to develop these views? Do you think it makes sense to apply to AI policy roles before my views on optimal policy are well developed?
This is correct, and equally true but less visibly so at current margins. This would increase the profitability of animal farming and also legitimize the idea that, as a farmer, my Coasean property rights over my animals include the right to effectively torture them. I am not saying this makes it bad on balance. I don’t know, but it’s extremely important to carefully think through this trade-off.
Christianity is interpreted wildly differently by different people. I agree that there is a coherent version of Christianity that is not only compatible with EA, but demands it. There are also many equally coherent versions of Christianity that are strictly incompatible with at least some elements of it. I’m all for religious people making the case for EA to their co-religionists in religious forums, but I don’t think it’s a good idea for people on this forum, who have no common religion that unites us, to be discussing the Christian theology of EA. The conversation gets extremely muddled extremely quickly because most participants are not Christian at all, and those who are likely do not share a common version of Christianity. It is extremely difficult to make progress on the conversation under these circumstances, and it is likely to come off as quite alienating to religious people (who could be entirely swayed by secular arguments).
I think moral cluelessness is the best argument against effective altruism in general, and this post makes that point better than any other I have seen. I do not mean that as a criticism or even as a bad thing. It’s just that this sort of thinking (possibly correctly, I don’t even know) suggests to me that it might be time to give up on doing good.
I think your coverage of Scott Alexander’s alleged association with HBD is both unfair and unethical. This is evidenced in part by the fact that you lead your post about him with an allegedly leaked private email. You acknowledge deep into your post that you are largely basing your accusations on various unconfirmed sources, yet you repeatedly summarize your claims about him without such disclaimers. Even if the email were real, it seems to form almost the entire basis of your case against him, and you don’t know the context of a private email. Taking the email at face value, it does not say the things you imply it says.
I don’t know Scott personally but I have been a reader of his blog and various associated forums for many years. Contrary to your characterization, he has in fact actively pushed back against a lot of discussion around HBD on his blog and related spaces. I think your posting about him undermines your credibility elsewhere.
Did you see this blog post from Wayne Hsiung? https://blog.simpleheart.org/p/the-mass-extermination-of-animals
Thanks for writing this up. I would be really interested in thoughts about whether this makes working on U.S. policy less worthwhile compared to other interventions. Some reasons it might not: a) there is a lot of infrastructure work to be done on policy that spans multiple administrations, and b) there are elements of a Trump administration that might be good for animals and that we could capitalize on (see, for example, Project 2025’s recommendations for cutting farm subsidies; also consider that some people in Trump’s orbit seem to care about animals and wield influence, and that Trump’s last secretary of agriculture said more positive things about alt proteins than Biden’s, etc.).
Animal welfare has also been somewhat salient for Republicans. As far as I am aware, these cases have all been focused on pet-related issues, but I still think it says something that it has been a focus. There was the Peanut the squirrel saga (arguably not welfare per se, but it still revolved around the life of a non-human animal); there was the dog-shooting story that seemed to sink Kristi Noem; and there were the baseless accusations that immigrants were eating cats and dogs. Maybe there is a way to leverage some of this sentiment into broader animal welfare initiatives?
Unfortunately, I don’t see Vivek as being directly influential on animal issues. Politico mentioned him as a possible head of the Department of Homeland Security, which would keep him busy elsewhere and away from animal issues. I really hope I am wrong about this; I was also viewing him as a possible silver lining.
It seems to me that you are doing more to associate HBD with EA by linking this here than Scott Alexander was allegedly doing by sending a private email.
Would you be able to share a source for the $9 billion figure? I’m interested in it for another project I am working on, not as it relates to this debate.
Seems to me that the effectiveness costs of public support are already baked into existing effectiveness estimates. It also seems to me that the fact that animal welfare is comparatively unpopular means that it is more neglected and therefore has more low-hanging fruit.
I don’t think any of the popularity-based arguments really support the claim that there is going to be a large backlash that has not yet manifested. I agree that a world where we knew everyone was 100 percent behind the idea of improving welfare, but for some reason hadn’t made it happen out of inertia, would make animal welfare interventions even more cost-effective. However, I don’t think this means we should favor global health and development over animal welfare, any more than the possibility that people might resent helping poor people in poor countries over poor people in their own countries means we should focus more on helping the domestic poor out of fear of backlash.
This post is mostly about how animal welfare is less popular than global health, but I don’t really see the tie-in for how this (probably correct) claim translates to it being less effective. Taking the first argument at face value, that some people won’t like being in some ways forced to pay more or change their habits, does not seem to translate to “it is not cost-effective to successfully force them (and, one hopes, eventually change their hearts and minds) anyway.” This was precisely the case for a lot of social movements (abolition, women’s suffrage, civil rights, workers’ rights, the environmental movement, etc.), but all these movements were to various degrees successful.
It seems to me that in order for any of these popularity-based arguments to hold water, you need a follow-on of “and therefore it is not cost-effective to invest in them, and here is the evidence.” However, I think we have a lot of evidence for the cost-effectiveness of investing in animal interventions; see cage-free egg campaigns, for example. I similarly don’t understand the relevance of other popularity-based concerns, such as being accused of being culturally insensitive. What is the implication for effectiveness if such accusations are made? Why does that matter?
You don’t think a lot of non-EA altruistic actions involve saving lives??
I think about the meat eater problem as pretty distinct from general moral cluelessness. You can estimate how much additional meat people will eat as their incomes increase or as they continue to live. You might be highly uncertain about weighing animals vs. humans as moral patients, but that is also something you can pretty directly debate, and you can see the implications of different weights. I think of cluelessness as applying only when there are many, many possible consequences that could be highly positive or negative, and it’s nearly impossible to discuss or attempt to quantify them because the dimensions of uncertainty are so numerous.
The point that I was initially trying to make was only that I don’t think the generalized cluelessness critique particularly favors one cause (for example, animal welfare) over another (for example, human health), or vice versa. I think you might make specific arguments about uncertainty regarding particular causes or interventions, but pointing to a general sense of uncertainty does not really move the needle towards any particular cause area.
Separate from that point, I do sort of believe in cluelessness (moral and otherwise) more generally, but honestly just try to ignore that belief for the most part.
Yes I agree with this
I am pretty unmoved by this distinction, and based on the link above, it seems that Greaves is really just making the point that a longtermist mindset incentivizes us to find robustly good interventions, not that it actually succeeds in doing so. I think it’s pretty easy to make the cluelessness case about AI alignment as a cause area, for example. It seems quite plausible to me that a lot of so-called alignment work is actually just serving to speed up capabilities. It also seems to me that you could align an AI to human values and find that human values are quite bad. Or you could align AI successfully enough to avoid extinction and find that the future is astronomically bad and extinction would have been preferable.
I wonder if the crux here is the effectiveness of your particular call to action: “Please strive to be less stupid, and call it out when you see it in others.”
I am guessing I am a pretty typical EA Forum reader in that I am appalled by the anti-vaccine turn of the U.S. government. I cannot do much to “be less stupid” by your lights in this particular respect because I generally agree with you on the immorality of preventing vaccine access. But I also don’t think calling out the stupidity when I see it is necessarily a good strategy. That could be very alienating, reduce trust, make anti-vaccine advocates feel victimized, inadvertently associate my various controversial views with vaccines, and increase backlash in the form of more anti-vaccine advocacy. I’m sure in some instances it is in fact exactly the right thing to do, but I also don’t think it’s the straightforwardly correct response to people who genuinely think that vaccines cause autism, death, or other harms.