I’m really pleased to see GiveWell is doing this, and particularly that you singled out HLI’s critique of GiveWell’s deworming CEA as an example of what you’d like to see.
I am, however, disappointed that the scope of the competition is so narrow, and a bit confused by its name. The contest page says you do want people to re-analyse your existing interventions, but that you don’t want them to suggest different interventions or make ‘purely subjective arguments’; I’m not sure what the latter bit means, but I guess it rules out any fundamental discussions about ethical worldviews or questions of how best to measure ‘good’. On this basis, it seems like you’re asking people not to try to change your mind, but rather to check your working.
This strikes me as a lost opportunity. After all, rethinking what matters and what the top interventions are could be where we find the biggest gains.
At the risk of being a noisy, broken record: I and the team at HLI have long advocated measuring impact using self-reports, and argued that this could really shake up the priorities (spot the differences between these 2016, 2018 and 2022 posts). Our meta-analyses recently found that treating depression via therapy is about 9x more cost-effective than cash transfers (2021 analysis; 2022 update). We’d previously explored how to compare life-improving to life-saving interventions using the same method, and pointed out how various philosophical considerations might really change the picture (2020).
I’m still not really sure what GiveWell thinks of any of this. There’s been no public response, except that, 9 months ago, GiveWell said they were working on their own reports on group therapy and subjective wellbeing and expected to publish those in 3-6 months. It looks like all this work would fall outside this competition, but if GiveWell were open to changing their mind, this would be one good place to look.
Hi Michael—this is Isabel Arjmand, Special Projects Officer at GiveWell. Thank you for the feedback and for HLI’s critique of deworming, which played a role in inspiring this contest!
We designed this contest to incentivize critiques that are relatively straightforward for us to evaluate and particularly likely to change our mind about upcoming allocation decisions. This is our first time running a contest like this, so we wanted to keep the scope manageable. We may run future contests with different or broader prompts.
A bit more color on why we’re keeping the contest’s scope to our existing cost-effectiveness analyses:
We believe critiques of our existing cost-effectiveness analyses will be relatively straightforward for us to review, as opposed to broader critiques, such as those that suggest we take an entirely different approach to recommending giving opportunities. We anticipate that having this well-defined scope will make it easier for us to compare and give due consideration to all entries with our current research capacity (which we are hoping to expand!).
We’re getting ready to make some large decisions about how to allocate funding across the programs we currently support at the end of the year. This contest is designed to solicit the feedback that we think has the greatest potential to improve those upcoming decisions; excellent entries could meaningfully change how we allocate funds, leading to more lives saved or improved. Broader critiques or proposals for wholly new approaches would be unlikely to influence this year’s decisions, given how much vetting we put into our allocations and how little time remains before they are finalized.
Outside of this contest, we welcome feedback on all aspects of our work, and we’re glad to receive it at any time via email, as blog comments on our open threads, or here on the EA Forum.
We appreciate your continued engagement on subjective well-being, particularly the useful feedback you provided on our draft reports explaining why we’re not as optimistic about subjective well-being measures and Interpersonal Psychotherapy Groups as you are. We’re still planning to publish those reports, but we’re behind the timeline we originally laid out. Thanks for your patience with this!