Thanks for the comments, all.
Telofy and kgallas: I’m not planning to write up an exhaustive list of the messages associated with EA that we’re not comfortable with. We don’t have full internal agreement on which messages are good vs. problematic, and writing up a list would be a bit of a project in itself. But I will give a couple of examples, speaking only for myself:
I’m generally uncomfortable with (and disagree with) the “obligation” frame of EA. I’m particularly uncomfortable with messages along the lines of “The arts are a waste when there are people suffering,” “You should feel bad about (or feel the need to defend) every dollar you spend on yourself beyond necessities,” etc. I think messages along these lines make EA sound overly demanding/costly to affiliate with as well as intellectually misguided.
I think a number of messages associated with EA communicate unwarranted confidence on several dimensions, implying that we know more than we do about what the best causes are and about the extent to which EAs are “outperforming” the rest of the world in accomplishing good. “Effective altruism could be the last social movement we ever need” and “Global poverty is a rounding error compared to other causes” are both examples; both messages have been expressed prominently enough to appear in this article, and both are problematic in my view.
Telofy: my general answer on a given grant idea is going to be to ask whether it fits into any of our focus areas, and if not, to have a very high bar for it as a “one-off” grant. In this case, supporting ACE fits into the Farm Animal Welfare focus area, where we’ve recently made a new hire; it’s too early to say where this sort of thing will rank in our priorities after Lewis has put some work into considering all the options.
I’m looking forward to news from Lewis then!
Agreed on point 2.
About point 1 (“I think messages along these lines make EA sound overly demanding/costly to affiliate with”): I have no informed opinion on this strategic question. Intuitively I would also expect people to respond that way, but the practice of hazing, e.g., the initiation rites of fraternities, suggests that such costliness might actually strengthen commitment and reduce attrition, and that’s an important factor. Moral frameworks with this obligation aspect also seem relatively simpler and more consistent to me, which might make them easier to defend convincingly in outreach.
“As well as intellectually misguided”: From a moral antirealist’s perspective, this depends on the person’s moral framework. Taking Brian’s critique of the demandingness critique into account, it does apply to mine, so whether to demand the same from others again comes down to the strategic question above. Do you have an ethical or epistemic reason why it would be misguided even from a broadly utilitarian viewpoint?