Very helpful comment, thank you for taking the time to write out this reply and sharing useful reflections and resources!
First, I think precise ranking of “cause areas” is nearly impossible, as it’s hard to meaningfully calculate the “cost-effectiveness” of a cause; you can only accurately calculate the cost-effectiveness of an intervention which specifically targets that cause. So if you did want a meaningful rank, you at least need to have an intervention which has probably already been tried and researched to some degree at least.
There’s a lot going on here. I suspect I’m more optimistic than you that sharing uncertain but specific rankings is helpful for clarifying views and making progress? I agree in principle that what we want to do is evaluate specific actions (“interventions”), but I still think you can rank expected cost-effectiveness at a slightly more zoomed-out level, as long as you are comparing across roughly similar levels of abstraction. (Implicitly, you’re evaluating the average intervention in that category, rather than a single intervention.) Given these things, I don’t think I endorse the view that “you at least need to have an intervention which has probably already been tried and researched to some degree at least.”
Secondly, I think having public, specific rankings has the potential to be both meaningless and reputationally dangerous.
I agree about the reputational risks and the potential for people to misunderstand your claim or think it’s more confident than it is, etc. I somewhat suspect that this will be mitigated by there simply being more such rankings, though, as well as by clear disclaimers. E.g. at the moment, people might look at the 80k and Open Phil rankings and conclude that there must be strong evidence behind the ratings. But if they see that there are 5 different ranked lists with only partial overlap, it’s implicitly pretty clear that there’s a lot of subjectivity and difficult decision-making going into this. (I don’t agree that it’s “meaningless” or “dishonest”; I think that relates to the points above.)
Also, I personally think that GiveWell might do the work that comes closest to the substance of what you are looking for within global health and wellbeing. And, like you mentioned, the Copenhagen Consensus does a pretty good job of outlining what they think might be the 12 best interventions (Best Things First), with much reasoning and calculation behind each one.
Thanks a lot for these pointers! I will look into them more carefully. This is exactly the sort of thing I was hoping to receive in response to this quick take, so thank you for your help. Best Things First sounds great and I’ve added it to my Audible wishlist. Is this what you have in mind for GiveWell? (Context: I’m not very familiar with global health.)
I’d be interested to hear what you think might be the upsides of “ranking” specifically vs. clustering our best estimates at effective cause areas/interventions.
Oh, this might have just been me using unintentionally specific language. I would have included “tiered” lists as part of “ranked”. Indeed, the Open Phil list is tiered rather than numerically ranked. Thank you for highlighting this, though; I’ve edited the original post to add the word “tiered”. (Is that what you meant by “clustering our best estimates at effective cause areas/interventions”? Lmk if you meant something else.)
Thanks again!