The point about ‘dismissing too soon’ comes from the realisation that one doesn’t really evaluate the cost-effectiveness of resources to causes (entire problems), but you only evaluate solutions. Someone who thought they were able to evaluate causes as a whole, and so hadn’t really looked at what you might do, would be liable to discount problems too soon.
This is all fairly abstract, but I suppose I take something like a ‘no shortcuts’ view to cause prioritisation: you actually have to look hard at what you might do, rather than appealing to heuristics to do the work for you.
The point about ‘dismissing too soon’ comes from the realisation that one doesn’t really evaluate the cost-effectiveness of resources to causes (entire problems), but you only evaluate solutions.
The connection between those two ideas is a good point that I don’t think I’d fully taken away from this post, so thanks for making it more explicit.
This is all fairly abstract, but I suppose I take something like a ‘no shortcuts’ view to cause prioritisation: you actually have to look hard at what you might do, rather than appealing to heuristics to do the work for you.
I’m not sure precisely what you mean, so I’m not sure whether I agree. Here’s an attempt to flesh out one thing it might mean, which is something I’d agree with:
“Cause prioritisation depends on at least implicit consideration of how cost-effective the most cost-effective set of interventions within a given cause area is. Therefore, to be very confident about the relative priority of different cause areas, you would have to be confident that you’ve determined which interventions within each of those cause areas would be most cost-effective and that you’ve determined the relative cost-effectiveness of those interventions. The further you are from that ideal, the less confident you should be.
However, that ideal would be very hard to reach. It would also require choosing areas to build up expertise and connections in, in order to better determine which interventions could be done, which would be most cost-effective, and how cost-effective they’d be. So in practice, the EA community will do more good if we often use cheaper heuristics to make tentative decisions about which areas to prioritise learning about or working on.
But we should remember that we can’t be very confident about these things, and that it’s quite possible that cause areas we haven’t considered or that we considered and then de-prioritised are actually more important. (And one of the many reasons for this is that we almost certainly ignored some possible interventions in those cause areas, and failed to do good cost-effectiveness analyses of the interventions we did consider.)”
Does this roughly match what you meant / what you believe?
In brief, I’m sceptical there are good heuristics for assessing an entire problem. Ask yourself: what are they, and what is the justification for them? What we do, rather, is have intuitive views about how effective particular solutions to given problems are. So we should think more carefully about those.
If it helps, for context, I started writing my thesis in 2015. At that time, EAs (following, I think, Will’s book and 80k’s then-analysis) seemed to think you could make enormous progress on what the priorities are by appealing to very vague and abstract heuristics like “the bigger the problem, the higher the EV”. This all seemed and seems very suspicious to me. People don’t do this so much anymore.
Hmm, I feel we might be talking past each other slightly or something.
My impression is that Happier Lives Institute is already taking or planning to take both: (a) actions largely optimised for helping us identify the most cost-effective actions to improve near-term human wellbeing[1], and evaluate their cost-effectiveness, and (b) actions largely optimised for relatively directly improving near-term human wellbeing.
The fact that you’re doing or planning to do (b) implies that you have at least implicitly prioritised near-term human wellbeing over other issues, right? And since you’ve done it before we’ve thoroughly considered a wide range of interventions in a wide range of cause areas and decently evaluated their cost-effectiveness, it seems you are in some sense appealing to heuristics for the purpose of cause prioritisation?
So it seems like maybe what you’re saying is mainly that we should remember that our cause priorities should currently be considered quite preliminary and uncertain, rather than that we can’t have cause priorities yet?
(Also, FWIW, for tentative cause prioritisation it does seem to me that there are a range of heuristics which can be useful, even if they’re not totally decisive. I have in mind things including but not limited to ITN. But there’s already been a lot of debate on the value of many of those specific heuristics, and I imagine you discuss some in your thesis but I haven’t read it.)
[1] I’m not sure if this is precisely how you’d define HLI’s focus. By “near-term” I have in mind something like “within the next 100 years”.
Prescriptively, I would add that this contributes to the importance of being open to other people’s ideas about how to do good (even if they are not familiar with EA).
This matches at least my take on this.