I haven’t read this through and don’t know to what extent the criticisms are valid, but it certainly looks impressive.
My concern is that, if some or many of these criticisms are valid, a key player in the Effective Altruism movement has been blind to them, or at least has not addressed them, for so long. GiveWell and its cost-effectiveness model have been directing large amounts of money over the past 15 years. I would have hoped that somewhere along the road they would have opened the model up to scrutiny from world-leading experts in cost-effectiveness. I see in a comment that a GiveWell representative is very keen to chat to Froolow, which raises the question: why has it taken so long for that chat to happen?
Perhaps other EA orgs are also neglecting to open up their analyses to world-leading experts?
One of the key takeaways in the body of the text, which perhaps I should have brought out more in the summary, is that the GiveWell model is basically as reliable as highly professionalised bodies like pharma companies have figured out how to make a cost-effectiveness model. A small number of minor errors is unexceptional for a model of this complexity, even in models submitted to pharma regulators that have had several million dollars of development behind them.
I would say that while the errors are uninteresting and unexceptional, the unusual model design decisions are worth commenting on. The GiveWell team are admirably transparent with their model, and anybody who wants to review it can access almost everything at the click of a button (some assumptions are gated to GiveWell staff, but these aren’t central). Given this, it is remarkable that the EA community didn’t manage to surface anyone who knew enough about models to flag to GiveWell that there were design optimisations to be made; the essay above is not arcane modelling lore, but rather something anyone with a few years’ experience in pharma HEOR could have told you. Is this because there are too few quant actors in the EA space? Is it because they don’t think their contributions would be valued, so don’t speak up? Is it because criticism of GiveWell makes you unemployable in EA spaces, so is heavily disincentivised? And so on. That is to say, I think asking why GiveWell missed the improvements is missing the important point, which is that everyone missed these improvements, so there are probably changes that can be made to expert knowledge synthesis in EA right across the board.
Just to add that I think outreach efforts like the Red Team contest are a really good way of doing this; I wouldn’t have heard of the EA Forum had it not been for the plug Scott Alexander gave the contest on Astral Codex Ten (which I read mostly for the material on prediction markets).