It makes me quite sad that in practice EA has become so much about specific answers (work on AI risk, donate to this charity, become vegan) to the question of how we effectively make the world a better place, to the point that not agreeing with a specific answer can create so much friction. In my mind EA really is just about the question itself, and the world is super complicated, so we should be skeptical of any particular answer.
If we accidentally start selecting for people who intuitively agree with certain answers (which it sounds like we are doing; I know people with a deep desire to have a lot of counterfactual impact who were turned off because they 'disagreed' with some common EA belief, and it sounds like if you had read Superintelligence earlier that would have been the case for you as well), that has a big negative effect on our epistemics and ultimately hurts our goal. We won't be able to check each other's biases and will have a less diverse set of viewpoints.
I think you are too optimistic about how much the average person in the global north cares about people in the global south (and possibly too pessimistic about how much people care about animals, though I'm less confident about that).
“Saving children from malaria, diarrheal disease, lead poisoning, or treating cataracts and obstetric fistula is hard to argue against without sounding like a bad person.”
The argument that you should help locally instead (even if the people making that argument don't do so themselves) is easily made without sounding like a bad person. I live in the Netherlands, where spending on development cooperation or charity work tends to be very unpopular; our current government has also pledged to cut much of that spending. Pushback on 'giving money to Africa' is something I certainly encounter. It might be less than the pushback on being vegan, but I'm not sure, and I'm also not sure by how much. This piece seems to assume a big difference in how socially acceptable, moderate, and politicized global health is compared to animal welfare, and I would like to see better data on whether that assumption is true.
I find the slippery slope argument pretty weak. Historically, expanding the moral circle (to women, slaves, people of color, LGBTQI+ people) has been quite important and still requires a lot of work. It seems much easier to not go far enough than to go too far, and extending the moral circle to animals or to people in the global south are both such expansions. A core strength of EA is that it pays a lot of attention to groups that fall outside the moral circle of many comparatively affluent people and institutions (the global poor, animals, and future generations). The slippery-slope argument is only meaningful if going down the slope would be bad, which is unclear to me.