Upvoted despite disagreeing, since I think this is an important question to explore. But I'm puzzled by the following claim:
> from where I stand, someone who is giving half their salary to the "altruistic cause" of having community events and recruiting more people isn't effective altruism.
Obviously the motivation for community-building is not that the community is an end in itself, but instrumental: more people "joining EA", taking the GWWC pledge, and/or going into directly high-impact work means indirectly causing more good for all the other EA causes that we ultimately care about. Without addressing this head-on, I'm not sure which of the following you mean:
(1) An empirical disagreement: You deny that EA community-building is instrumentally effective for (indirectly) helping other, first-order EA causes.
(2) A moral/conceptual disagreement: You deny that indirectly causing good counts as altruism.
> Obviously the motivation for community-building is not that the community is an end in itself, but instrumental: more people "joining EA", taking the GWWC pledge, and/or going into directly high-impact work means indirectly causing more good for all the other EA causes that we ultimately care about.
I took OP's point here to be that this logic looks suspiciously like the kind of rationalizations EA got its start criticizing in other areas.
"Why do they throw these fancy gala fundraising dinners instead of being more frugal and giving more money to the cause?" seems like a classic EA critique of conventional philanthropy. But once EA becomes not just an idea but an identity, it's understood that building the community is per se good, so suddenly sponsoring a fellowship-slash-vacation in the Bahamas becomes virtuous community building. To anyone outside the bubble, this looks like EA simply recapitulating the problems it criticized elsewhere.
Hmm, I think of the "classic EA" case for GiveWell over Charity Navigator as precisely based on an awareness that bad optics around "overhead", CEO pay, fundraising, etc., aren't necessarily bad uses of funds, and that we should instead look at what the organization ultimately achieves.
I don't mean either (1) or (2), and I'm not sure my disagreement amounts to a single argument.
First, I think it's epistemically and socially healthy for people to separate giving to their community from altruism. To explain a bit more: it's good to view your community as a valid place to invest effort independent of its eventual altruistic value. Without that, people often end up being exploitative (pushing others to do things rather than treating them respectfully) or dismissive (for example, telling people they shouldn't be in EA because they aren't making the right choices). If your community isn't valued solely for the eventual altruistic value its members will create, those failure modes are less likely.
Second, it's easy to lose sight of eventual goals when focused on instrumental ones, and to get stuck in a mode where you are goodharting community size or dollars donated; both metrics seem like unfortunately easy attractors for this failure.
Third, and relatedly, I think people should be careful not to build models of impact that are too indirect, because such models often fail at unexpected places. The simpler your path to impact is, the fewer failure points exist. Community building is many steps removed from the objective, and we should certainly be cautious about doing naïve EV calculations about increasing community size!
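To make the "more steps, more failure points" intuition concrete, here is a small illustrative sketch. The step probabilities are made-up numbers purely for illustration, and the model assumes the steps fail independently: if each link in a causal chain holds with some probability, the chance the whole chain delivers impact is the product of those probabilities, which shrinks quickly as links are added.

```python
def chain_success(step_probs):
    """Probability that every step in a causal chain succeeds,
    assuming the steps are independent (a simplification)."""
    p = 1.0
    for q in step_probs:
        p *= q
    return p

# Hypothetical numbers, for illustration only:
direct = chain_success([0.9])                    # one step: donation -> outcome
indirect = chain_success([0.9, 0.8, 0.8, 0.7])   # recruit -> retain -> donate -> outcome
print(round(direct, 3), round(indirect, 3))      # 0.9 vs 0.403
```

Of course, real steps are rarely independent, and a longer chain can have a much larger payoff at the end; the sketch only illustrates why each added step deserves scrutiny, not that indirect paths are always worse.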
Separately, but still related to community: I think your point about identity, and whether fostering EA as an identity is epistemically healthy, is also relevant to (1).
Your analogy to church spoke very powerfully to me, and to something I have always been a bit uncomfortable with. To me, EA is a philosophy/school of thought, and I struggle to understand how a person can "be" a philosophy, or how a philosophy can "recruit members".
I also suspect that a strong self-perception that one is a "good person" can just as often provide (internal and external) cover for wrongdoing as it can motivate actually doing good, as any number of high-profile non-profit scandals (and, I'm guessing, the anecdotal experience of most young women who have ever been involved in a movement for change) can tell you.
I have nothing at all against organic communities, professional conferences, etc., but I also wonder whether there is evidence that building EA as an identity ("join us!"), as opposed to something that people do, is instrumentally effective for first-order causes. Maybe it is, but I think it warrants some interrogation.
> healthy for people to separate giving to their community from altruism.
Is this realistically achievable, with the community we have now? How?
(I imagine it would take a comms team with a social-psychology genius and a huge budget, would still only work partially, and would require very strong buy-in from current power players plus a revision of how EA is presented and introduced. But perhaps you think another, leaner and more viable approach is possible?)
>The simpler your path to impact is, the fewer failure points exist
That's not always true.
Some extreme counter-examples:
a. Programmes on infant stunting keep failing, partly because an overly simple approach has been adopted (intensive infant feeding, Plumpy'Nut, etc., with insufficient attention to maternal nutrition, aflatoxin removal, treating parasites in pregnancy, adolescent nutrition, conditional cash transfers, etc.)
b. A critical path plan was used for Apollo, and worked much better than the simpler Soviet approach, despite being much more complicated.
c. The Brexit Leave campaign SEEMED simple, but was actually formed through practice on previous campaigns and was very sophisticated "under the hood", which made it hard to oppose.