I share this view. I also feel like it might tie in somewhat with discussion related to Cause X, the possibility of an ongoing moral catastrophe, etc.
...that said, I guess it's easy to agree that we shouldn't dismiss something "too soon", and I'm not actually sure whether EA is currently erring towards dismissing things too soon or not quickly enough.
And I guess there's also a way your statement is in tension with the standard discussion of Cause X, the possibility of an ongoing moral catastrophe, etc.; if we spend more resources (including time) re-assessing causes that have been preliminarily dismissed, this leaves us with fewer resources available for identifying and (further) assessing cause candidates that haven't been dismissed yet.
The point about "dismissing too soon" comes from the realisation that one doesn't really evaluate the cost-effectiveness of allocating resources to causes (entire problems); one only evaluates solutions. Someone who thought they were able to evaluate causes as a whole, and so hadn't really looked at what you might do, would be liable to discount problems too soon.
This is all fairly abstract, but I suppose I take something like a "no shortcuts" view of cause prioritisation: you actually have to look hard at what you might do, rather than appealing to heuristics to do the work for you.
The connection between those two ideas is a good point that I don't think I'd fully taken away from this post, so thanks for making it more explicit.
I'm not sure precisely what you mean, so I'm not sure whether I agree. Here's an attempt to flesh out one thing you might mean, which is something I'd agree with:
"Cause prioritisation depends on at least implicit consideration of how cost-effective the most cost-effective set of interventions within a given cause area is. Therefore, to be very confident about the relative priority of different cause areas, you would have to be confident that you've determined which interventions within each of those cause areas would be most cost-effective, and that you've determined the relative cost-effectiveness of those interventions. The further you are from that ideal, the less confident you should be.
However, that ideal would be very hard to reach. It would also require choosing areas to build up expertise and connections in, in order to better determine which interventions could be done, which would be most cost-effective, and how cost-effective they'd be. So in practice, the EA community will do more good if we often use cheaper heuristics to make tentative decisions about which areas to prioritise learning about or working on.
But we should remember that we can't be very confident about these things, and that it's quite possible that cause areas we haven't considered or that we considered and then de-prioritised are actually more important. (And one of the many reasons for this is that we almost certainly ignored some possible interventions in those cause areas, and failed to do good cost-effectiveness analyses of the interventions we did consider.)"
Does this roughly match what you meant / what you believe?
In brief, I'm sceptical there are good heuristics for assessing an entire problem. Ask yourself: what are they, and what is the justification for them? What we do, rather, is have intuitive views about how effective particular solutions to given problems are. So we should think more carefully about those.
If it helps, for context, I started writing my thesis in 2015. At that time, EAs (following, I think, Will's book and 80k's analysis at the time) seemed to think you could make enormous progress on what the priorities are by appealing to very vague and abstract heuristics like "the bigger the problem, the higher the EV". This all seemed and seems very suspicious to me. People don't do this so much anymore.
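To make the worry concrete, here's a toy illustration with entirely made-up numbers (my own, purely for the sake of the example): problem scale alone tells you very little until you price the best solutions you actually know of.

```python
# Toy comparison of two hypothetical causes, using made-up numbers.
# The point: the size of a problem doesn't by itself fix the expected
# value of marginal resources; that depends on the cost-effectiveness
# of the best solutions you've actually identified.

causes = {
    "Cause A (big problem)": {
        "people_affected": 10_000_000,
        "cost_per_person_helped": 10_000,  # best known intervention, in dollars
    },
    "Cause B (smaller problem)": {
        "people_affected": 100_000,
        "cost_per_person_helped": 50,  # best known intervention, in dollars
    },
}

budget = 1_000_000  # dollars

for name, c in causes.items():
    # People helped is capped by the scale of the problem.
    helped = min(budget / c["cost_per_person_helped"], c["people_affected"])
    print(f"{name}: scale = {c['people_affected']:,}, "
          f"helped with ${budget:,} = {helped:,.0f}")

# Output:
# Cause A (big problem): scale = 10,000,000, helped with $1,000,000 = 100
# Cause B (smaller problem): scale = 100,000, helped with $1,000,000 = 20,000
```

Nothing stops a big problem from also having cheap solutions, of course; the sketch is only meant to show why "bigger problem" doesn't settle the EV question on its own.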
Hmm, I feel we might be talking past each other slightly or something.
My impression is that Happier Lives Institute is already taking or planning to take both: (a) actions largely optimised for helping us identify the most cost-effective actions to improve near-term human wellbeing[1], and evaluate their cost-effectiveness, and (b) actions largely optimised for relatively directly improving near-term human wellbeing.
The fact that you're doing or planning to do (b) implies that you have at least implicitly prioritised near-term human wellbeing over other issues, right? And since you've done it before we've thoroughly considered a wide range of interventions in a wide range of cause areas and decently evaluated their cost-effectiveness, it seems you are in some sense appealing to heuristics for the purpose of cause prioritisation?
So it seems like maybe what you're saying is mainly that we should remember that our cause priorities should currently be considered quite preliminary and uncertain, rather than that we can't have cause priorities yet?
(Also, FWIW, for tentative cause prioritisation it does seem to me that there are a range of heuristics which can be useful, even if they're not totally decisive. I have in mind things including but not limited to ITN. But there's already been a lot of debate on the value of many of those specific heuristics, and I imagine you discuss some in your thesis but I haven't read it.)
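For what it's worth, the version of ITN I have in mind is the 80,000 Hours-style factorisation, which I'd write roughly as follows (my own paraphrase from memory, so the exact framing shouldn't be attributed to anyone in this thread):

```latex
% ITN as a telescoping product (rough 80,000 Hours-style framing):
\frac{\text{good done}}{\text{extra dollar}}
  = \underbrace{\frac{\text{good done}}{\text{\% of problem solved}}}_{\text{importance}}
  \times \underbrace{\frac{\text{\% of problem solved}}{\text{\% increase in resources}}}_{\text{tractability}}
  \times \underbrace{\frac{\text{\% increase in resources}}{\text{extra dollar}}}_{\text{neglectedness}}
```

Even on this framing, the tractability term is doing exactly the work you point to: you can't estimate it without views about particular solutions.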
[1] I'm not sure if this is precisely how you'd define HLI's focus. By "near-term" I have in mind something like "within the next 100 years".
This matches at least my take on this.
Prescriptively, I would add that this contributes to the importance of being open to other people's ideas about how to do good (even if they are not familiar with EA).