I would question some of this, basically along two lines.
1. I agree that a distinction can be made between personal benefit, personal connection & impartiality in the context of charity. But the examples given seem a bit problematic. From the perspective of an orthodox Christian or Muslim, funding those religions, especially in a missionary capacity, probably is about a big impact on the greater good. (As an interesting aside, Muhammad claimed to have converted jinn to Islam.) More generally, giving any examples in this manner conflates the donor’s beliefs about the charity with specific types of charity.
2. The conflation of effectiveness & impartiality. In a simple model, perhaps increasing impartiality never decreases effectiveness. But adding complexities (eg bounded rationality) can break this relationship. For example, it’s generally accepted that investing in yourself may be effective for a number of reasons, including increasing future earnings. But doesn’t this same logic apply, to some extent, to a sibling, most obviously in the case of identical twins? A person could extend this argument to cover a much larger group of people similar to themself.
I am inclined to agree with the post’s main points, but I think the categories ought to be refined.