Thanks for this comment. I think I essentially agree with all your specific points, though I get the impression that you’re more optimistic about trying to get “better answers to mainstream questions” often being the best use of an EA’s time. That said:
- this is based mainly on something like a “vibe” from your comment (not specific statements)
- my own views are fairly tentative anyway
- mostly I think people also need to consider the specifics of their situation, rather than strongly assuming either that it’s pretty much always a good idea to try to get “better answers” on mainstream questions or that it’s pretty much never a good idea to try that
One minor thing I’d push back on is “especially for EAs, who are constitutionally hyper-aware of the pitfalls of bad research, have high standards of rigor, and are often quantitatively sophisticated.” I think these things are true on average, but “constitutionally” is a bit too strong, and there is also a fair amount of bad research by EAs, low standards of rigour among EAs, and other problems. And I think it’s important that we remember that (though not in an over-the-top or self-flagellating way, and not with a sort of false modesty that would guide our behaviour poorly).
To clarify, I’m not sure this is likely to be the best use of any individual EA’s time, but I think it can still be true that it’s potentially a good use of community resources, if intelligently directed.
I agree that perhaps “constitutionally” is too strong—what I mean is that EAs tend (generally) to have an interest in / awareness of these broadly meta-scientific topics.
In general, the argument I would make is for greater attention to the possibility that mainstream causes deserve work, and for more meta-level arguments for that case (like your post).