I’m concerned whenever I see things like this:

“I want to place [my pet cause], a neglected and underinvested cause, at the center of the Effective Altruism movement.”[1]
In my mind, this seems anti-scouty. Rather than finding what works and what is impactful, it is saying “I want my team to win.” Or perhaps the more charitable interpretation is that this person is talking about a rough hypothesis and I am interpreting it as a confident claim. Of course, there are many problems with drawing conclusions from small snippets of text on the internet, and if I met this person and had a conversation I might feel very differently. But at this point it seems like a small red flag, suggesting that there is a bit less cause-neutrality here (and a bit more attachment to a particular issue) than I would like. Then again, it is hard to argue with personal fit; maybe this person simply doesn’t feel motivated by lab-grown meat or bednets or biorisk reduction, and this cause is where their maximum impact lies.
I changed the exact words so that I won’t publicly embarrass or draw attention to the person who wrote this. But to be clear, this is not a thought experiment of mine; someone actually wrote this. EDIT: And the cause this individual promoted is more along the lines of helping homeless people in America, protecting elephants, or rescuing political dissidents: it would probably have a positive effect, but I doubt it would be competitive with saving a life (in expectation) for 4,000-6,000 USD.
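To make that back-of-the-envelope comparison concrete, here is a minimal sketch of the expected-value arithmetic being invoked. Only the 4,000-6,000 USD benchmark comes from the comment above; the program cost and lives-saved figures are purely hypothetical placeholders, not estimates for any real cause.

```python
# Minimal sketch of the cost-per-life comparison invoked above.
# Only the benchmark range comes from the comment; every other number
# is a hypothetical placeholder, not real cost-effectiveness data.

def cost_per_life_equivalent(total_cost_usd: float, expected_lives_saved: float) -> float:
    """Expected cost (USD) to avert one death-equivalent, in expectation."""
    return total_cost_usd / expected_lives_saved

BENCHMARK_USD_PER_LIFE = (4_000, 6_000)  # rough benchmark cited in the comment

# Hypothetical pet cause: $500,000 spent, ~10 death-equivalents averted.
pet_cause_cost = cost_per_life_equivalent(500_000, 10)  # -> 50,000 USD per life

print(f"Pet cause: ~{pet_cause_cost:,.0f} USD per life-equivalent")
print(f"Benchmark: {BENCHMARK_USD_PER_LIFE[0]:,}-{BENCHMARK_USD_PER_LIFE[1]:,} USD per life")
print("Competitive with the benchmark?", pet_cause_cost <= BENCHMARK_USD_PER_LIFE[1])
```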
In my experience, many of those arguments are bad and not cause-neutral, though your take seems too negative to me. Cause prioritization is ultimately a social enterprise, and the community can easily vet and detect bad cases; having proposals for new causes to vet seems quite important (i.e. the Popperian insight: individuals do not need to be unbiased, since unbiasedness/intersubjectivity comes from open debate).
You make a good point. I probably allow myself to be too affected by claims (such as “saving the great apes should be at the center of effective altruism”), when in reality I should simply allow the community sieve to handle them.
This feels misplaced to me. Making an argument for some cause to be prioritised highly is in some sense one of the core activities of effective altruism. Of course, many people who’d like to centre their pet cause make poor arguments for its prioritisation, but in that case I think the quality of argument is the entire problem, not anything about the fact they’re trying to promote a cause. “I want effective altruists to highly prioritise something that they currently don’t” is in some sense how all our existing priorities got to where they are. I don’t think we should treat this kind of thing as suspicious by nature (perhaps even the opposite).
Hi Ben,
It seems to me that one should draw a distinction between “I see this cause as offering good value for money, and here is my reasoning why” and “I have this cause that I like and I hope I can get EA to fund it.” Sometimes the latter is masquerading as the former, using questionable reasoning.
Some examples that seem like they might be in the latter category to me:
https://forum.effectivealtruism.org/posts/Dytsn9dDuwadFZXwq/fundraising-for-a-school-in-liberia
https://forum.effectivealtruism.org/posts/R5r2FPYTZGDzWdJEY/how-to-get-wealthier-folks-involved-in-mutual-aid
https://forum.effectivealtruism.org/posts/zsLcixRzqr64CacfK/zzappmalaria-twice-as-cost-effective-as-bed-nets-in-urban
In any case, though, I’m not sure it makes a difference in terms of the right way to respond. If the reasoning is suspect, or the claims of evidence are missing, we can assume good faith and respond with questions like “Why did you choose this program?”, “Why did you conduct the analysis in this way?”, or “Have you thought about these potentially offsetting considerations?”. In the examples above, the original posters generally haven’t engaged with these kinds of questions.
If we end up with people coming to EA looking for resources for ineffective causes, and then sealioning over the reasoning, I guess that could be a problem, but I haven’t seen that here much, and I doubt that sort of behavior would ultimately be rewarded in any way.
Ian
The third one seems at least generally fine to me: clearly the poster believes in their theory of change and isn’t unbiased, but that’s generally true of posts by organizations seeking funding. I don’t know if the poster has made a (metaphorically) better bednet or not, but I thought the Forum was enhanced by having the post here.
The other two are posts from new users who appear to have no clear demonstrated connection to EA at all. The occasional donation pitch or advice request from a charity that doesn’t line up with EA very well is a small price to pay for an open Forum. The karma system did its job of keeping the Forum from being diverted from its purposes. A few kind people offered some advice. I don’t see any reason for concern there.
I agree, and to be clear I’m not trying to say that any forum policy change is needed at this time.
Those posts all go out of their way to say they’re new to EA. I feel pretty differently about someone with an existing cause discovering EA and trying to fundraise versus someone who has integrated EA principles[1] and found a new cause they think is important.
I don’t love the phrase “EA principles”; EA gets some stuff critically wrong, and other subcultures get some stuff right. But it will do for these purposes.
I think that to a certain extent that is right, but this context was less along the lines of “here is a cause that is going to be highly impactful” and more along the lines of “here is a cause that I care about.” Less “mental health coaching via an app can be cost-effective” and more “let’s protect elephants.”
But I do think that in a broad sense you are correct: proposing new interventions, new cause areas, etc., is how the overall community progresses.
I think a lot of the EA community shares your attitude toward exuberant people looking to advance different cause areas or interventions, which actually concerns me. I am somewhat encouraged by the disagreement with the comment of yours that makes this disposition more explicit. Currently, I think that EA, when it comes to extending resources, has much more solicitude for ideas within or adjacent to recognized cause areas. Furthermore, an ability to fluently convey one’s ideas in EA terms or with an EA attitude is important.
Expanding on jackva’s point about the Popperian insight: having individuals passionately explore new areas to exploit is critical to the EA project, and I am a bit concerned that EA is often uninterested in exploring in directions where a proponent lacks some of EA’s usual trappings and/or lacks status signals. I would be inclined to be supportive of passion and exuberance in the presentation of ideas where this is natural to the proponent.
I suspect you are right that many of us (myself included) focus more than we ought to on how similar an idea sounds to the ideas we are already supporting. I suppose a cruxy aspect of this is how much effort/time/energy we should spend considering claims that seem unreasonable at first glance?
If someone honestly told me that protecting elephants (as an example) should be EA’s main cause area, the two things that go through my head first are either that this person doesn’t understand some pretty basic EA concepts[1], or that there is something really important to their argument that I am completely ignorant of.
But depending on how extreme a view it is, I also wonder about their motives, which is more or less what led me to viewing the claim as anti-scouty. If John Doe has been working on elephant protection (sorry to pick on elephants) for many years and now claims that elephant protection should be a core EA cause area, I’m automatically asking whether John is A) trying to get funding for elephant protection or B) trying to figure out what does the most good and then do that. While neither of those is a villainous motive, the second strikes me as a bit more intellectually honest. But this is a fuzzy thing, and I don’t have good data to point to.
I also suspect that I myself may have an over-sensitive “bullshit detector” (for lack of a more polite term), so that I end up getting false positives sometimes.
Expected value, impartiality, ITN framework, scout mindset, and the like
I agree that advocacy inspired by other-than-EA frameworks is a concern; I just think that the EA community is already quite inclined to express skepticism toward new ideas and possible interventions. So the worry that someone with a high degree of partiality for a particular cause manages to hijack EA resources is much weaker than the concern that potentially promising causes may be ignored because they have an unfortunate messenger.
I think you’ve phrased that very well. As much as I may want to find the people who are “hijacking” EA resources, the benefit of that is probably outweighed by how much it disincentivizes people from trying new things. Thanks for commenting back and forth with me on this. I’ll try to jump the gun a bit less from now on when it comes to gut-feeling evaluations of new causes.
I can only aspire to be as good a scout as you, Joseph. Cheers
I think it’s important to consider that the other person may be coming from a very different ethical framework than you are. I likely wouldn’t support any of the examples in your footnote, but one can imagine an ethical framework in which the balance looks closer than it does to me. To be clear, I highly value saving the lives of kids under five, as the standard EA lifesaving projects do. But I can’t objectively show that a framework that assigns little to no value to averting death (e.g., because the dead do not suffer) is a bad one. And such a significant difference in values could be behind some statements of the sort you describe.