I suspect you are right that many of us (myself included) focus more than we ought to on how similar an idea sounds to ideas we already support. I suppose a cruxy aspect of this is how much effort/time/energy we should spend considering claims that seem unreasonable at first glance?
If someone honestly told me that protecting elephants (as an example) should be EA’s main cause area, the two things that go through my head first are either that this person doesn’t understand some pretty basic EA concepts[1], or that there is something really important to their argument that I am completely ignorant of.
But depending on how extreme a view it is, I also wonder about their motives, which is more or less what led me to view the claim as anti-scouty. If John Doe has been working on elephant protection (sorry to pick on elephants) for many years and now claims that elephant protection should be a core EA cause area, I’m automatically asking whether John is A) trying to get funding for elephant protection or B) trying to figure out what does the most good and then doing that. While neither of those is a villainous motive, the second strikes me as a bit more intellectually honest. But this is a fuzzy thing, and I don’t have good data to point to.
I also suspect that I myself may have an over-sensitive “bullshit detector” (for lack of a more polite term), so that I end up getting false positives sometimes.
I agree that advocacy inspired by non-EA frameworks is a concern; I just think that the EA community is already quite inclined to express skepticism toward new ideas and possible interventions. So, the worry that someone with high degrees of partiality for a particular cause manages to hijack EA resources is much weaker than the concern that potentially promising cases may be ignored because they have an unfortunate messenger.
the worry that someone with high degrees of partiality for a particular cause manages to hijack EA resources is much weaker than the concern that potentially promising cases may be ignored because they have an unfortunate messenger
I think you’ve phrased that very well. As much as I may want to find the people who are “hijacking” EA resources, the benefit of that is probably outweighed by how much it disincentivizes people from trying new things. Thanks for commenting back and forth with me on this. I’ll try to jump the gun a bit less from now on when it comes to gut-feeling evaluations of new causes.
[1] Expected value, impartiality, the ITN framework, scout mindset, and the like
I can only aspire to be as good a scout as you, Joseph. Cheers