Ultimately, what we want to know is this: ‘If I add an additional unit of resources to solving this problem, how much good will be accomplished?’
If (i) you have more time to make your comparison and (ii) you’re comparing two problems where well-defined interventions already exist with a track record of data, then we recommend making a quantified estimate of the benefits per unit of costs.
i.e. there’s a spectrum—when you’re funding established interventions you can take a more marginal approach and do cost-effectiveness analysis; when you’re funding new areas or startups then focus more on evaluating the cause area (and team quality).
Moreover, the effectiveness of the best interventions is closely related to the tractability score, so the information is in the INT framework too.
I also don’t think it’s obvious that ultimately you fund interventions. If anything, ultimately you donate to people/organisations.
An organisation may focus on a specific intervention, but they might also modify the intervention; discover a new intervention; or move into an adjacent intervention. Funding organisations that focus on good cause areas is more robust because it means the nearby opportunities are better, which increases long-run impact.
Moreover, as a funder you need to learn about an area to make good decisions about what to fund. If you focus on a specific cause, then everything you learn fits together more, building a valuable body of knowledge in the medium term, increasing your effectiveness. If you just fund lots of random interventions, then you don’t build a coherent body of knowledge and connections. This is a major reason why OpenPhil decided to focus on causes as their key way of dividing up the opportunity space.
The same applies even more so to career choice.
I agree with the approach to evaluating interventions. Basically, use cost-effectiveness estimates or cost-benefit analysis (CBA), where you update based on the strength of the evidence. Plus you can factor in value of information (VOI).
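To make the “update based on the strength of the evidence” step concrete, here is a minimal sketch (not from the original discussion; all numbers and the function name are hypothetical) of the standard Bayesian adjustment: shrink a charity’s headline cost-effectiveness estimate toward a prior, weighting by the precision of each:

```python
# Illustrative sketch with hypothetical numbers: adjust a cost-effectiveness
# estimate toward a prior in proportion to how noisy the evidence is.
# Units are arbitrary "good done per dollar", normalized so the prior is 1.0.

def posterior_estimate(prior_mean, prior_var, estimate, estimate_var):
    """Precision-weighted average of the prior and the measured estimate."""
    prior_precision = 1.0 / prior_var
    est_precision = 1.0 / estimate_var
    return (prior_mean * prior_precision + estimate * est_precision) / (
        prior_precision + est_precision
    )

# A study claims 10x typical cost-effectiveness, but the evidence is weak
# (high variance), so most of the headline claim gets discounted:
adjusted = posterior_estimate(prior_mean=1.0, prior_var=1.0,
                              estimate=10.0, estimate_var=9.0)
print(adjusted)  # 1.9
```

The point of the sketch is just that weaker evidence (a larger variance on the estimate) pulls the answer further back toward the prior, which is why a naive cost-effectiveness estimate and an evidence-adjusted one can differ by several-fold.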
I disagree about the cause area and organization being more important than the intervention. To me, it’s all about the intervention in the end. Supporting people that you “believe in” in a cause that you think is important is basically a bet that a high-impact intervention will spring forth. That is one valid way of maximizing impact; however, working the other way – starting with the intervention and then supporting those best suited to implement it – is also valid.
The same is true for your point about a funder specializing in one knowledge area so they are in the best position to judge high-impact activity within that area. That is a sensible approach to discovering high-impact interventions; however, as with the strategy of supporting people, the reverse method can also be valid. It makes no sense to reject a high-potential intervention you happen to come across (assuming its value is fairly obvious) simply because it isn’t in the area that you have been targeting. You’re right, this is what Open Phil does. Nevertheless, rejecting a high-potential intervention simply because it wasn’t where you were looking for it is a bias and counter to effective altruism. And I object to your dismissal of interventions from surprising places as “random.” Again, this is completely counter to effective altruism, which is all about maximizing value wherever you find it.
We say much of this on our problem framework page: https://80000hours.org/articles/problem-framework/
Looks like you linked to this post instead of to 80K’s problem framework page.