Religion also often encourages (or is used to defend) speciesism, and it also leads many people not to believe in x-risk. As such, religious EAs are mostly relevant to only two of the three major EA cause areas. Given that I think global poverty is by far the least important of these cause areas, convincing religious people to care about EA doesn’t seem to have very high value to me.
Buck
As a result of your faith, are you only interested in working on global poverty, and not x-risk or speciesism?
(It’s great to have you and people like you around; I don’t mean to sound judgemental.)
I would strongly consider joining GWWC if this change were made. I agree that there are a number of thorny issues to work out.
EDIT: In particular, I’m really uncomfortable with the prospect of environmentalists joining GWWC.
What’s your response to Peter Hurford’s arguments in his article Why I’m Skeptical Of Unproven Causes...?
I am not worried as much as you about the effect of AI on nonhuman animals, but I agree that it would maybe be nice if MIRI was slightly more explicitly anti-speciesist in its materials. I think they have a pretty good excuse for not being clearer about this, though.
FWIW, MIRI people seem pretty un-speciesist to me, in the strict sense of not being biased based on species. (Eliezer is AFAIK alone among MIRI employees in his confidence that chickens etc are morally irrelevant.) I have had a few conversations with Nate about nonhuman animals, and I’ve thought his opinions were thoroughly reasonable.
(Nate can probably respond to this too, but I think it’s possible that I’m a more unbiased source on MIRI’s attitude to non-human animals.)
A way of thinking about saving vs improving lives
Yeah, I meant “in the way” rather than “like”. Thanks.
One advantage of life extension is that it might prompt people to think in a more long-term-focused way, which might be nice for solving coordination problems and x-risks.
Some people think that they are able to think more clearly about the future as a result of being signed up for cryonics, because they aren’t as scared of death and don’t need to rationalize that, e.g., the Singularity will happen in their lifetimes.
In cryonicists’ defense, I’ve never heard them say that they buy cryonics from their EA budget; it seems to be a personal spending thing.
Thanks so much for writing this. I agree with your arguments and I find your conclusion fairly persuasive.
Not all entomologists deny that insects can suffer or feel pain.
Health eFilings is type 1.
At this time I still feel confident recommending that small donors support REG.
How small is “small donor” here?
I plan to donate about $38k more this year, but I appreciate your guessing :)
One quick response: The people whose lives are saved by the Against Malaria Foundation are usually too poor to afford much meat (Malawians consume about 1/25th as much meat as Americans), and farmed animals in developing countries plausibly have better lives at the moment than those in developed countries, so I’m not very concerned about an immediate negative impact of AMF.
On the other hand, an increased population in Malawi now could lead to increased meat consumption later if the nation becomes wealthier.
The effects of increased population on wild animal suffering are also important, which leaves me unsure whether increasing population is net good. I can’t immediately find a good link for this, but this is an alright starting point.
I think this is dumb; I don’t see any particular evidence that this happens very often, and I’m much more worried about people being overconfident about things based on tenuous, badly-thought-out, oversimplified models than about them being underconfident because of concerns like these.
I disagree with several of the numbers you use.
For example, Globals$B22 is the utility of a developing-world human. I feel like that number is quite tenuous: I think you’re using happiness research in a way pretty differently from how it was intended. I would not call that a very robust estimate. (Also, “developing world human” is pretty ambiguous. My speculation is that you mean the level of poverty experienced by the people affected by GiveDirectly and AMF; is that correct?)
I think the probability of death from AI that you give is too low. You’ve done the easy, defensible, and epistemically-virtuous-looking thing of looking at a survey of expert opinion. But I don’t think that’s actually a very good way to arrive at the best-informed estimate, any more than measuring the proportion of Americans who are vegetarian is a good way of estimating the probability that chickens have moral value.
What do you mean by “size of FAI community”? If you mean “number of full time AI safety people”, I think your estimate is way too high. There are like, maybe 50 AI safety people? So you’re estimating that we at least quadruple that? I also don’t quite understand the relevance of the link to https://intelligence.org/2014/01/28/how-big-is-ai/.
I also have some more general concerns about how you treat uncertainty: I think it plausibly makes some sense that variance in your estimate of chicken sentience should decrease your estimate of effectiveness of cage-free campaigns. I’ll argue for this properly at some point in the future.
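The intuition above is essentially Jensen’s inequality: if effectiveness is a concave function of chicken sentience, then averaging over an uncertain sentience estimate gives a lower expected effectiveness than plugging in the point estimate. Here is a toy Monte Carlo sketch of that point (the concave mapping and all the numbers are hypothetical, chosen purely for illustration, not taken from any actual cost-effectiveness model):

```python
import random

random.seed(0)

def effectiveness(sentience):
    # Hypothetical concave mapping from a sentience level in [0, 1]
    # to the value of a cage-free campaign.
    return sentience ** 0.5

point_estimate = 0.5

# Uncertain sentience estimate: a normal centered on the point estimate,
# clipped to [0, 1] so it stays a valid sentience level.
samples = [min(1.0, max(0.0, random.gauss(0.5, 0.2))) for _ in range(100_000)]

ev_under_uncertainty = sum(effectiveness(s) for s in samples) / len(samples)
ev_at_point = effectiveness(point_estimate)

# Because effectiveness() is concave, the expectation under uncertainty
# comes out below the plug-in value (Jensen's inequality).
print(ev_under_uncertainty < ev_at_point)
```

If effectiveness were linear in sentience, the variance would wash out and the two numbers would agree; the gap only appears with curvature, which is why the shape of the model matters and not just the point estimates.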
Great work overall; I’m super excited to see this! This kind of work has a lot of influence on my donations.
What downvoting brigades are there?
(which is honestly not about WAS)
What would you call that kind of suffering if not WAS?
BTW, the term “social justice warrior” is generally considered derogatory.