Effective altruism has had three main broad direct causes (global poverty, animal rights, and the far future) for quite some time.
The whole concept of EA having specific, recognizable, compartmentalized cause areas and associated charities is bankrupt and should be zapped. It invites stagnation: founder effects entrench further every time a newcomer joins and devotes mindshare to signalling ritual adherence to the narrative of a finite set of tribal Houses to join, build alliances between, or cannibalize. This crowds out new classes of intervention and eclipses the prerogative to optimize everything as a whole, without all these distinctions. “Oh, I’m an (animal, poverty, AI) person! X-risk aversion!”
“Effective altruism” in itself should be a scalable, cause-neutral methodology, de-identified from its extensional recommendations. It should stop reinforcing these arbitrary divisions as though they were somehow sacrosanct. The task is harder when people and organizations ostensibly devoted to advancing that methodology settle into the same buildings and object-level positions, or when charity evaluators do not even strive for cause-neutrality in their consumer offerings. I’m not saying those can’t be net goods, but the effects on homogenization, centralization, and bias all restrict the purview of effective altruism.
I have often heard people worry that it’s too hard for a new cause to be accepted by the effective altruism movement.
Everyone here knows there are new causes and wants to accept them, but they don’t know that everyone knows there are new causes, and so on: a common-knowledge problem. They’re waiting for the chosen ones to update the leaderboard.
If the tribally-approved list were opened up, it would quickly spiral beyond working-memory bounds. This is a difficult problem to work with, but not an impossible one. Let’s make the list and put it somewhere prominent so it stays salient.
Anyway, here is an experimental Facebook group explicitly for initial cause proposals and analysis. Join if you’re interested in doing either!
One thing that’s very useful about having separate cause areas is that it helps people decide what to study and research in depth, e.g. get a PhD in. This probably doesn’t need to be illustrated, but I’ll do it anyway:
If you consider two fields of study, A and B, such that A has only one promising intervention and B has two, and all three interventions are roughly equal in expectation (or whatever other measures are important to you), then it would be better to study B: if one of its two interventions doesn’t pan out, you can more easily switch to the other, whereas with A you might have to move on to a new field entirely. Studying B therefore has higher expected value than studying A, despite all three interventions being equal in expectation, because the second intervention gives you a cheap in-field fallback.
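To make that concrete, here is a minimal sketch of the comparison. All the numbers (success probability, intervention value, salvage fraction after a field switch) are illustrative assumptions, not anything from this post:

```python
# Toy expected-value comparison of studying field A vs. field B.
# Assumptions (illustrative only): each intervention pans out
# independently with probability p and is worth v if it does; if every
# intervention in your field fails, you must switch fields and salvage
# only a fraction of the value.

p = 0.5          # probability a given intervention pans out
v = 100.0        # value of a successful intervention
salvage = 0.25   # fraction of v recovered after an expensive field switch

# Field A: one intervention. If it fails, you switch fields.
ev_a = p * v + (1 - p) * salvage * v

# Field B: two interventions. If the first fails, you pivot cheaply to
# the second within the same field; only if both fail do you switch.
ev_b = p * v + (1 - p) * (p * v + (1 - p) * salvage * v)

print(f"E[study A] = {ev_a:.1f}")  # 62.5
print(f"E[study B] = {ev_b:.1f}")  # 81.2
```

Even though every individual intervention has the same expected value, B comes out ahead: the second intervention is option value you only get by picking the field with more fallbacks.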