I agree that non-replication is a danger. But I don’t think that positive results are specifically high-value; instead, I care about studies that tell me whether or not an intervention is worth funding.
I’d expect most studies of neglected interventions to turn up “negative” results, in the sense that those interventions will seem less valuable than the best ones we already know about. But it still seems good to conduct these studies, for a couple of reasons:
- If an intervention does look high-value, there’s a chance we’ve stumbled onto a way to substantially improve our impact (pending replication studies).
- Studying an intervention that is different from those currently recommended may help us gain new kinds of knowledge we didn’t have from studying other interventions.
  - For example, if the community is already well-versed in public health research, we might learn less from research on deworming than research on Chinese tobacco taxation policy (which could teach us about generally useful topics like Chinese tax policy, the Chinese legislative process, and how Chinese corporate lobbying works).
Of course, doing high-quality China research might be more difficult and expensive, which cuts against it (and other research into “neglected” areas), but I do like the idea of EA having access to strong data on many different important areas.
That said, you do make good points, and I continue to think that neglectedness is less important as a consideration than scale or tractability.
Related: I really like GiveWell’s habit of telling us not only which charities they like, but which ones they looked at and deprioritized. This is helpful for understanding the extent to which the “deliberate neglect by rational funders” model pans out for a given intervention/charity, and I wish it were done by more grantmaking organizations.