I sometimes see people claim that EA research tends to be low-quality or "not taken seriously" by scholars in relevant fields.
There are cases where this clearly isn't true (e.g. AI alignment questions seem to have at least split the scholarly community, with a lot of people on both sides).
But I worry that, as a non-scientist, I'm living in a bubble where I don't see strong critique of GiveWell's methodology, FHI's policy papers, etc.
Does anyone have good examples of respected* scholars who have reviewed EA research and either praised it highly or found it lackluster?
*I'm using this word to mean a combination of "regarded highly within their field" and "regarded reasonably well by EAs who care about their field"; if you're not sure whether someone counts, please share the example anyway!
Specifically, I'm looking for reviews of EA research that doesn't go through peer-reviewed research channels, or that gets published in very obscure journals that separate it from being "mainstream" within its field. Examples include:
Eric Drexler's Comprehensive AI Services model
Wild animal suffering (especially attempts to estimate its magnitude or compare it to human suffering on a moral basis)
GiveWell's cost-effectiveness models
X-risk policy work from FHI, CSER, or other longtermist research orgs
Recent EA discussion of COVID-19
An example of feedback that fits what I'm looking for:
Judea Pearl, a renowned computer scientist, reviewing Stuart Russell's Human Compatible:
"Human Compatible made me a convert to Russell's concerns with our ability to control our upcoming creation – super-intelligent machines. Unlike outside alarmists and futurists, Russell is a leading authority on AI. His new book will educate the public about AI more than any book I can think of, and is a delightful and uplifting read."
Here's a thread in which a World Bank economist critiques GiveWell on research/publication methods. (GiveWell responds here.)
In addition to Will MacAskill's critique of functional decision theory (MIRI-originated and intended to be relevant for AI alignment), there's this write-up by someone who refereed FDT's submission to a philosophy journal:
Since then, the related paper Cheating Death in Damascus has apparently been accepted by The Journal of Philosophy, though it doesn't seem to be published yet.
The Wolfgang Schwarz writeup is exactly the sort of thing I'm looking for; thank you!
Will's critique is also a reasonable fit; I was hoping to avoid "EA people reviewing other EA people," but he seems to approach the topic in his capacity as a philosopher and shows no sign of soft-pedaling his critique.
Probably more informal than you want, but here's a Facebook thread debating AI safety involving some of the biggest names in AI.
As someone who has sometimes made a similar claim, I find that a lot of assessment of others' work, not just EAs', tends to be informal, off-the-record, and discussion-based. In fact, I think EAs fairly often miss out on a wealth of knowledge due to a widespread and often insistent requirement that knowledge be citable in order to be meaningful. There are very strong reasons to prefer and put greater weight on citable knowledge, but there is A LOT of intelligence that people do not share in recorded formats, for reasons ranging from the effort involved to reputational risk.
So I believe some of the lack of answers here may be due to critiques of EA work being shared verbally, rather than more formally. Personally, I've discussed EA work with at least four quite prominent economists, at least two of whom I believe have thoroughly reviewed some significant aspect of EA research and perspective, but I have not really shared these accounts. Making them shareable would likely require more of these economists' time and attention than I can easily get, in order to ensure I gave a full and proper explanation and could sufficiently guarantee their anonymity.
Do you feel comfortable giving some general impression of what the economists' views were (e.g. "one favorable, two mixed, one unfavorable")? If not, that's understandable!
I would expect EA to have a weaker insistence on citable knowledge than people in other academic fields; do you think the insistence is actually stronger? (Or are most people in academic fields wrong, and EA isnāt an exception?)
The "Worm Wars" could arguably be an example (though the contentious research was not just from the EA community).
How much of that research was from the EA community?
I've been involved (in some capacity) with most of the publications the Centre for the Governance of AI at FHI has put out over the past 1.5 years. I'd say that for most of our research, someone outside the EA community is involved. Reasonably often, one or more of a piece's authors wouldn't identify as part of the EA community. As for input on the work: if it is academically published, we get input from reviewers. We also seek additional input on all our work from people we think will be able to provide it. This often includes academics we know in relevant fields. (This of course introduces a bit of a selection effect.)
Likewise for publications at CSER. I'd add that for policy work, written policy submissions often provide summaries, key takeaways, and action-relevant points based on "primary" work done by the centre and its collaborators, where the primary work is peer-reviewed.
We've received informal/private feedback at various points from people in policy/government roles that our submissions and presentations have been particularly useful or influential. We'll also have some confidential written testimony supporting this for a few examples, for the university's REF (Research Excellence Framework) assessment; unfortunately, I don't have permission to share these publicly at this time. However, this comment I wrote last year provides some information that could serve as indirect evidence of the work being seen as high-quality (being among a select number invited to present orally, follow-up engagement, etc.).
https://forum.effectivealtruism.org/posts/whDMv4NjsMcPrLq2b/cser-and-fhi-advice-to-un-high-level-panel-on-digital?commentId=y7DjYFE3gjZZ9caij