This feels misplaced to me. Making an argument for some cause to be prioritised highly is in some sense one of the core activities of effective altruism. Of course, many people who’d like to centre their pet cause make poor arguments for its prioritisation, but in that case I think the quality of argument is the entire problem, not anything about the fact they’re trying to promote a cause. “I want effective altruists to highly prioritise something that they currently don’t” is in some sense how all our existing priorities got to where they are. I don’t think we should treat this kind of thing as suspicious by nature (perhaps even the opposite).
Hi Ben,
It seems to me that one should draw a distinction between, “I see this cause as offering good value for money, and here is my reasoning why”, and “I have this cause that I like and I hope I can get EA to fund it”. Sometimes the latter is masquerading as the former, using questionable reasoning.
Some examples that seem like they might be in the latter category to me:
https://forum.effectivealtruism.org/posts/Dytsn9dDuwadFZXwq/fundraising-for-a-school-in-liberia
https://forum.effectivealtruism.org/posts/R5r2FPYTZGDzWdJEY/how-to-get-wealthier-folks-involved-in-mutual-aid
https://forum.effectivealtruism.org/posts/zsLcixRzqr64CacfK/zzappmalaria-twice-as-cost-effective-as-bed-nets-in-urban
In any case, though, I’m not sure it makes a difference in terms of the right way to respond. If the reasoning is suspect, or the claimed evidence is missing, we can assume good faith and respond with questions like, “why did you choose this program”, “why did you conduct the analysis in this way”, or “have you thought about these potentially offsetting considerations”. In the examples above, the original posters generally haven’t engaged with these kinds of questions.
If we end up with people coming to EA looking for resources for ineffective causes, and then sealioning over the reasoning, I guess that could be a problem, but I haven’t seen that here much, and I doubt that sort of behavior would ultimately be rewarded in any way.
Ian
The third one seems at least generally fine to me—clearly the poster believes in their theory of change and isn’t unbiased, but that’s generally true of posts by organizations seeking funding. I don’t know whether the poster has made a (metaphorically) better bednet or not, but I thought the Forum was enhanced by having the post here.
The other two are posts from new users who appear to have no clear demonstrated connection to EA at all. The occasional donation pitch or advice request from a charity that doesn’t line up well with EA is a small price to pay for an open Forum. The karma system did its job of keeping the Forum from being diverted from its purposes, and a few kind people offered some advice. I don’t see any reason for concern there.
I agree, and to be clear I’m not trying to say that any forum policy change is needed at this time.
Those posts all go out of their way to say they’re new to EA. I feel pretty differently about someone with an existing cause discovering EA and trying to fundraise versus someone who internalized EA principles[1] and found a new cause they think is important.
I don’t love the phrase “EA principles”; EA gets some stuff critically wrong, and other subcultures get some stuff right. But it will do for these purposes.
I think that is right to a certain extent, but this context was less along the lines of “here is a cause that is going to be highly impactful” and more along the lines of “here is a cause that I care about.” Less “mental health coaching via an app can be cost-effective” and more “let’s protect elephants.”
But I do think that in a broad sense you are correct: proposing new interventions, new cause areas, etc., is how the overall community progresses.