Upvoted despite disagreeing, since I think this is an important question to explore. But I’m puzzled by the following claim:
> from where I stand, someone who is giving half their salary to the “altruistic cause” of having community events and recruiting more people isn’t effective altruism.
Obviously the motivation for community-building is not that the community is an end in itself, but instrumental: more people “joining EA”, taking the GWWC pledge and/or going into directly high-impact work, means indirectly causing more good for all the other EA causes that we ultimately care about. Without addressing this head-on, I’m not sure which of the following you mean:
(1) An empirical disagreement: You deny that EA community-building is instrumentally effective for (indirectly) helping other, first-order EA causes.
(2) A moral/conceptual disagreement: You deny that indirectly causing good counts as altruism.
> Obviously the motivation for community-building is not that the community is an end in itself, but instrumental: more people “joining EA”, taking the GWWC pledge and/or going into directly high-impact work, means indirectly causing more good for all the other EA causes that we ultimately care about.
I took OP’s point here to be that this logic looks suspiciously like the kind of rationalizations EA got its start criticizing in other areas.
“Why do they throw these fancy gala fundraising dinners instead of being more frugal and giving more money to the cause?” seems like a classic EA critique of conventional philanthropy. But once EA becomes not just an idea but an identity, then it’s understood that building the community is per se good, so suddenly sponsoring a fellowship-slash-vacation in the Bahamas becomes virtuous community building. To anyone outside the bubble, this looks like just recapitulating the problems EA criticized elsewhere.
Hmm, I think of the “classic EA” case for GiveWell over Charity Navigator as precisely based on an awareness that bad optics around “overhead”, CEO pay, fundraising, etc., aren’t necessarily bad uses of funds, and we should instead look at what the organization ultimately achieves.
I don’t mean either (1) or (2), but I’m not sure it’s a single argument.
First, I think it’s epistemically and socially healthy for people to separate giving to their community from altruism. To explain a bit more: it’s good to view your community as a valid place to invest effort independent of its eventual value. Without that, I think people often end up being exploitative, pushing others to do things instead of treating them respectfully, or being dismissive, for example telling people they shouldn’t be in EA because they aren’t making the right choices. If your community isn’t just about the eventual altruistic value its members will create, those failure modes are less likely.
Second, it’s easy to lose sight of eventual goals when focused on instrumental ones, and to get stuck in a mode where you are goodharting community size or dollars donated; both seem like unfortunately easy attractors for this failure.
Third, and relatedly, I think people should be careful not to build models of impact that are too indirect, because such models often fail in unexpected places. The simpler your path to impact is, the fewer failure points exist. Community building is many steps removed from the objective, and we should certainly be cautious about doing naïve EV calculations about increasing community size!
Separate but related to community, I think your point about identity, and whether fostering EA as an identity is epistemically healthy, is also relevant to (1).
Your analogy to church spoke very powerfully to me and to something I have always been a bit uncomfortable with. To me, EA is a philosophy/school of thought, and I struggle to understand how a person can “be” a philosophy, or how a philosophy can “recruit members”.
I also suspect that a strong self-perception that one is a “good person” can just as often provide (internal and external) cover for wrongdoing as it can motivate actually doing good, as any number of high-profile non-profit scandals (and, I’d guess, the anecdotal experience of most young women who have ever been involved in a movement for change) can attest.
I have nothing at all against organic communities or professional conferences, but I also wonder whether there is evidence that building EA as an identity (“join us!”), as opposed to something that people can do, is instrumentally effective for first-order causes. Maybe it is, but I think it warrants some interrogation.
> healthy for people to separate giving to their community from altruism.
Is this realistically achievable, with the community we have now? How?
(I imagine it would take a comms team with a social-psychology genius and a huge budget, would still only work partially, and would require very strong buy-in from current power players, plus a revision of how EA is presented and introduced. But perhaps you think another, leaner and more viable approach is possible?)
>The simpler your path to impact is, the fewer failure points exist
That’s not always true.
Some extreme counter-examples:
a. Programmes on infant stunting keep failing, partly because an overly simple approach has been adopted (intensive infant feeding, Plumpy’Nut, etc., with insufficient attention to maternal nutrition, aflatoxin removal, treating parasites in pregnancy, adolescent nutrition, conditional cash transfers, etc.)
b. A critical path plan was used for Apollo, and worked much better than the simpler Soviet approach, despite being much more complicated.
c. The Brexit Leave campaign SEEMED simple but was actually honed through practice on previous campaigns, and was very sophisticated “under the hood”, which made it hard to oppose.