[Question] Are EA-aligned evaluators aware of 3ie’s ‘Development Evidence Portal’ and reviews of evidence?
I had a recent chat with someone linked to 3ie. They highlighted 3ie’s ‘Development Evidence Portal’, which often links with the Campbell Collaboration. They also mentioned other initiatives doing similar work in this space (such as Cochrane, the World Bank, IDEaL, KfW, AIR, IFPRI (food policy), and IDinsight).
In general, they are doing or preparing projects in which:
- They conduct systematic reviews of evidence on particular interventions, e.g., this one.
- Within these systematic reviews, they assess each ‘impact evaluation’ paper in one-day, two-person, one-page reports, with some structured and some descriptive evaluation, following particular frameworks, including, e.g., tools for ‘risk-of-bias assessment’.
- They also assess external meta-analyses; e.g., see here.
My questions:
- How much are people and organizations in the EA global health and development (GH&D) space aware of their work?
- If you are aware of this work, what are your thoughts on the strengths and weaknesses of these individual assessments and systematic reviews, and of their database, as inputs into the cost-benefit analysis (CBA) modeling of GiveWell, Open Philanthropy, Rethink Priorities, etc.?
I’m asking this (naturally) both to learn whether there’s a communication gap to close, and to get a sense of how The Unjournal should consider this work (e.g., avoiding overlap, evaluating these evaluations, etc.).