Does anyone have good examples of respected* scholars who have reviewed EA research and either praised it highly or found it lackluster?
Presumably you'd also be interested in examples where such scholars reviewed EA research and came to a conclusion in between high praise and finding it lackluster? I expect most academics find a lot of work in general to be somewhere around just "pretty good".
That's also good to see, and I'd appreciate examples! But I think it's a bit less interesting/useful to me because it's what I would expect in general.
I see a lot of people claiming that EA has better research than the norm, others claiming worse than the norm, so I'm curious which opinion actually seems more popular among scholars (vs. the neutral "yeah, this is fine, that's why the journal accepted it" reaction I'd expect to be more common than either of the other reactions).
Ah, that makes sense. I was thinking more about the detailed points reviewers might make about specifics from particular EA research, rather than getting data on the general quality of EA research to inform how seriously to take other such research (which also seems very/more valuable).
Data on "general quality" was my goal here, yes, albeit split up by source (since "EA research" includes everything from published journal articles to informal blog posts).
Specifics are valuable too, but in my work, I often have to decide which recent research to share, and how widely; I don't expect experts to weigh in very quickly, but a general sense of quality from different sources may help me make better judgments about what to share.
Good question!