I don’t think it makes sense from an EA worldview to seek the best charity within a specific cause unless you have reason to believe that cause is the most effective. It’s fine to have whatever personal priorities you have, but I don’t think it’s an appropriate discussion topic for the EA Forum.
The final sentence (“but I don’t think it’s an appropriate discussion topic for the EA Forum”) is what moved this comment from neutral-to-disagree to strong-disagree for me. I think it’s reasonable for folks to disagree about whether “most effective intervention within pre-specified constraints” is an EA-apt question. For various reasons, I strongly feel that we shouldn’t censure folks who try to do that thing (within reason).
If you are going to do the “well actually even the best interventions in this class aren’t effective enough to be counterfactually worthwhile” thing, I think it’s critical to do that in a tactful and constructive way, which this isn’t.
I disagree with partner!Will’s implication that Michael’s comment is unconstructive. I think it is very blunt, but that seems fine to me. My opinions about the content of Michael’s comment are unsettled.
Yeah, I don’t think it’s super mean or anything, which is why I didn’t actually downvote Michael’s comment, just disagree-voted.
Fair point. Is there a consensus within EA that EA should focus only on the most effective causes in terms of increasing total utility, or is there space to optimize impact within non-optimal causes?
My personal interests aside, there seems to be a case for addressing this: many people outside the current EA movement are not particularly interested in maximizing utils in the abstract, but are instead guided by personal priorities, so improving the efficacy of their donations within those priorities would have value. And to my knowledge, there is a vacuum in analyzing the impact of charities outside of top EA cause areas. I would imagine that, on net, it’s a loss to allocate non-trivial resources to this away from higher-impact cause areas… but arguably asking people to share the information they currently have in low-effort ways would be positive on net, though I can see why one would want to promote conversational norms that discourage this.
Maybe I’ll take this to LessWrong, where I’ll hit many folks with the same knowledge base, but without violating the norms you put forth?
You’re fine, in my opinion. Your post title is eight words. If people don’t want to engage with the question you asked, that decision consumed two seconds of their time.
While I understand and respect why people don’t want to devote resources to charity selection within causes they view as relatively low impact, I think it’s possible to apply importance, tractability, and/or neglectedness to some extent to donation opportunities within most cause areas. And I think it’s good to get people thinking more about those criteria, even if they are not thinking about them in the context of a cause area EA views as high-impact.
I think you’re fine. I don’t think “only the most effective causes should be discussed or pursued” is an EA consensus, and I really hope we aren’t adopting a norm of dissuading people who want to be as effective as possible within their own framework of priorities.
Hope you find an org that does great work!
Yes, this drives me a little bit crazy about EA. By definition, “effective altruism” should include any kind of altruism that someone is trying to do effectively. But what the capital-letters Effective Altruism movement actually practices is “altruistic rationality”.
As Julia Galef mentions in this 2017 EAG panel, people have three buckets through which they spend their money: personal, personal causes (e.g. the university you went to, or homelessness in the city you live in), and EA causes (global make-the-world-better type things). Trying to guilt people into moving money between buckets, e.g. “your $5 coffee in the morning is killing children in Africa”, is ineffective outreach. EA has the third bucket covered, and I don’t see why the second bucket shouldn’t be included too. Getting people to think about philanthropy and volunteering more rationally in terms of effectiveness is generally good no matter what people’s motivations are, and increasing people’s rationality in the context of charity will, IMO, ultimately lead to more people naturally wanting to donate to the global EA cause bucket.