Since I don’t see it mentioned already: one of the major “cause areas” analyzed by, e.g., 80,000 Hours is “meta-EA” (promoting effective altruism). I understand that one of your points is about promoting altruism more generally, and I too have wondered about the potential benefits and tradeoffs of watering down the EA message to reach more people (e.g., encouraging people to at least donate to decently effective charities, perhaps along the lines of “try to find the most effective charity for the problem you want to focus on [even if the overall problem area isn’t all that important]”). While I definitely think there are ways this could be done better, I don’t know exactly what they are, and I have also thought of or seen a few counterpoints (non-exhaustive):
1. There is a chance that it could lead to confusion about, or dilution of, the EA message.
2. Persuading people to be more altruistic than they counterfactually would have been may be fairly difficult.
3. (Closely tied with (1)) It may legitimately be the case that persuading one person to aspire to EA is counterfactually more impactful than persuading 50 people to be more generally altruistic: for example, if the 50 people begin donating to seeing-eye-dog charities while the one EA donates to a schistosomiasis charity.
4. Promoting general altruism might be done better under a non-EA banner, or it may simply not be EA’s niche/comparative advantage.
Yes, these are all sound counterpoints. Together, they suggest the idea is, at the very least, neglected. I think your point 2 was also made by Stefan_Schubert in a comment above; I would be very interested to see research in this area, if there is any. I agree that your points 1 and 3 would be a problem if the number of altruistic people were fixed, but what if everyone behaved altruistically, to the benefit of others, to the point that it would not matter if some people chose to donate to seeing-eye-dog charities?
I can appreciate your argument that promoting general altruism might not fit under the EA banner, specifically because it lacks the “effective” intent, but I would argue that it could be one of the “hits-based”, fat-tailed investments EA has been seeking. What if it were tractable and scalable to make people generally nicer to each other, and to instill an impartial desire to help each other, non-human animals, the environment, and the future?