(a) Agreed that there is a lot of research being done, and I think my main concern (and CE's too, I understand, though I won't speak for Joey and his team on this) is the issue of systematicity: causes can appear more or less important depending on the specific research methodology employed, so 1,000 causes evaluated by 1,000 different people just doesn't deliver the same actionable information as 1,000 causes evaluated by a single organization employing a single methodology.
My main outstanding uncertainty at this point is simply whether such an attempt at broad, systematic research is really feasible, given how much time research is taking even at the shallow stage.
I understand that GWWC is looking to evaluate the evaluators (e.g. GiveWell, FP, CE, etc.), and in many ways that may be far more feasible as a route to providing the EA community with systematic, comparative results: if you get a sense of how much more optimistic or pessimistic the various evaluators are, you can penalize their individual cause/intervention prioritizations accordingly, and get a better sense of how disparate causes stack up against one another even when different methodologies and assumptions are used.
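As a rough illustration of what penalizing by evaluator optimism could look like, here is a minimal Python sketch; the evaluator names, optimism factors, and estimates are all invented for the example, not taken from any real evaluator.

```python
# Hypothetical sketch: normalizing cost-effectiveness estimates across
# evaluators by an evaluator-level "optimism" factor, so estimates
# produced under different methodologies become roughly comparable.
# All numbers and names below are illustrative assumptions.

# Raw cost-effectiveness estimates (value per dollar, arbitrary units),
# keyed by (evaluator, intervention).
estimates = {
    ("EvaluatorA", "intervention_1"): 12.0,
    ("EvaluatorA", "intervention_2"): 4.0,
    ("EvaluatorB", "intervention_3"): 30.0,
    ("EvaluatorB", "intervention_4"): 10.0,
}

# Optimism factor per evaluator: how much that evaluator's estimates
# tend to exceed a shared benchmark (e.g. from re-evaluating a common
# set of interventions). A value > 1 means systematically optimistic.
optimism = {"EvaluatorA": 1.0, "EvaluatorB": 2.0}

# Penalize each estimate by its evaluator's optimism factor.
adjusted = {
    (ev, iv): value / optimism[ev]
    for (ev, iv), value in estimates.items()
}

# After adjustment, interventions from different evaluators can be
# ranked on a single (rough) common scale.
ranking = sorted(adjusted, key=adjusted.get, reverse=True)
```

The point of the sketch is only the structure: a cross-evaluator calibration step first, then a comparison on the adjusted numbers rather than the raw ones.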
(b) The timeline for (hopefully) finding a Cause X is fairly arbitrary! I definitely don't have a strong sense of how long it'll take, so it's probably best to see the timeline as a kind of stretch goal meant to push the organization. I guess the other issue is how much more impactful we expect Cause X to be: the DCP global health interventions vary by a factor of roughly 10,000 in cost-effectiveness, and if you think that interventions vary at least as much across broad cause areas (e.g. global health vs. violent conflict vs. political reform vs. economic policy), then one might expect some Cause X out there to be three to four orders of magnitude more impactful than top GiveWell interventions, but it's very hard to say.
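The order-of-magnitude reasoning can be sketched numerically. Assuming (purely my illustrative assumption, not a claim from the DCP data) that cause-area cost-effectiveness is roughly lognormal, with ~95% of causes falling inside a 10,000x range, the best of many causes lands a few orders of magnitude above the typical one:

```python
import math
import random

random.seed(0)

# Assumption: ~95% of causes fall within a 10,000x cost-effectiveness
# range, i.e. 4 orders of magnitude spans roughly +/- 2 sigma on the
# log10 scale, giving sigma = 1 on that scale.
sigma = 1.0
n_causes = 1_000

# Sample cost-effectiveness multipliers relative to a median cause.
samples = [10 ** random.gauss(0, sigma) for _ in range(n_causes)]

best = max(samples)
median = sorted(samples)[n_causes // 2]

# Under these assumptions, the best of 1,000 causes sits roughly three
# orders of magnitude above the median cause.
gap_in_magnitudes = math.log10(best / median)
```

This is only a consistency check on the intuition, not evidence: the three-to-four magnitude figure falls out of the lognormal spread assumed going in.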
(c) Wrote about the issue of cause classification in somewhat more detail in the response to Aidan below!
Thanks a lot for the feedback!