Thanks for this, it’s pretty interesting to get your perspective as someone who’s been (I presume) heavily engaged in the community for some time. I thought your other post on the All-Party Parliamentary Group for Future Generations was awesome, by the way.
You asked for comments including “small” thoughts, so here are some from me, for what they’re worth. These are my current views, which I could easily see changing if I thought about this more.
I think I basically agree that there doesn’t seem to have been much progress in cause prioritisation in say the last five years, compared to what you might have hoped for.
(mainly written to clarify my own thoughts:) It seems like you can do cause prioritisation work by comparing different causes, by investigating a particular cause (especially a cause that’s relatively unknown or poorly investigated), or by doing more “foundational” things like asking “what is moral value anyway?”, “how should we compare options under uncertainty?”, etc.
My impression is that the Effective Altruism community has invested a significant amount of resources into cause prioritisation research, and that the relative lack of progress is because it’s hard.
The Global Priorities Institute is basically doing cause prioritisation (as far as I know, and by the vague definition of cause prioritisation I have in my head). Maybe it’s more on the foundational / academic field-building side (i.e. fleshing out and formally writing up existing arguments), but my impression is that it’s mostly work that seems worth doing in order to figure out how to do the most good.
I think you could give the cause prioritisation label to some of the work from the Future of Humanity Institute’s macrostrategy team(?)
Open Philanthropy Project spends a lot of their resources doing some version of this, as you noted
Rethink Priorities is basically doing this (though I might agree with you that it would be better if they were able to compare across causes rather than investigating a particular cause)
I’d consider work on forecasting / understanding AI progress, as done by e.g. AI Impacts, to be cause prioritisation
The above (which is probably far from comprehensive) seems like a decent fraction of the resources of the “longtermist” part of the community (the part I’m familiar with). I suppose I lean towards wanting a larger fraction of resources allocated to cause prioritisation, but I don’t think it’s that obvious either way. Anyway, regardless of whether the right fraction of resources has been spent on this, I think it’s just very hard and that this explains a lot of what you’re describing.
Maybe one reason there’s not much work comparing causes in particular is that there’s so much uncertainty, which makes it very difficult to do the work well enough that the output is valuable. In particular:
people don’t agree on empirical issues that can radically alter the relative importance of different causes (e.g. AI timelines)
people don’t agree on “the correct moral theory” / whatever the ultimate objective is / what you roughly call “different views”
Edit: reading the above you could probably get the impression that I think you’re wrong to “raise the alarm” about the need for more / different cause prioritisation, but I don’t think that at all. I think I’m pretty sympathetic to most of what you wrote.
I agree that the cause prioritisation work we need to do now is far harder than the work we were doing ten years ago. I think AI Impacts provides an interesting illustration of that: it was initially set up essentially as a cause prioritisation org. But in doing that work it became clear that, whereas comparisons between different global development interventions could build on a large published literature, there was far less to go on when trying to compare work on AI to other areas, or to compare interventions within AI safety. That led to the conclusion that the work they should do first was to get a better grasp on questions like ‘how fast will AI likely develop, and how discontinuously?’.
I think another thing going on is that the stakes have become higher. When Giving What We Can first started publishing recommendations, e.g. comparing donating to education versus deworming, we only had ~30 members. That’s a lot of money over people’s lifetimes, but it’s nowhere near the resources the EA movement now commands. The huge increase in resources to allocate makes it more worth doing the foundational work that groups like AI Impacts do, and also the theoretical work GPI does. I think that makes it look like there’s less work being done, because there are far fewer actionable results per hour spent.
Hi Ben. Thank you for this. This is exactly what I like: people replying with their impressions of the post, even if rough, so that I get some idea of how people feel and whether this resonates. So thank you.
- -
That said, I disagree with your claim.
You say “I think it’s just very hard and that this explains a lot of what you’re describing”.
I think it may well be difficult, but I think it is mostly not happening due to underinvestment and lack of coordination in this space. Hence raising a flag.
I make this case above by comparing what I would see as good coverage of the space with what is actually happening, so I don’t have much to add here, except that it is interesting that others see it differently.
I note a few counterexamples to the idea that this work is not done because it is hard (even in the “longtermist” area): 80K’s stated reason for doing less in this space is that they have reached a conclusion (priority paths) that they are happy with; GPI was only created recently (its research agenda is from 2019); Rethink Priorities is following funding; AI strategy is also difficult but is progressing much more quickly; etc.
- -
Overall, I don’t have a strong view on this, and maybe you are correct. But this is something that could be looked into more. In particular, I have mostly dug into research via organisations’ websites, but if I (or anyone) had more time it would be great to talk to people who have worked on this and see whether it is difficult or underinvested in (or both). I also think that, with a bit of time, you could somewhat address this question by writing a research agenda and looking for potential low-hanging research fruit in this domain.
Hey Sam, just a very quick comment that the post you link to wasn’t meant to imply we intend to do less prioritisation research than before.
The 50/30/20 split we mention there was for how we intend to split delivery efforts across different target audiences, rather than on research vs. delivery. And also note that this means ~50% of effort is going into non-priority paths, which will include new potential priorities & career paths (such as the lists we posted recently).
As Rob notes in another comment, we still intend to spend ~10% of team time on research, similar to the past, and more total time because the team is larger. This would include looking into whether we should add new priority paths or problem areas.
Hi Ben,
Thank you for flagging this – it is great to hear, and I am very excited by it.
I looked at a lot of organisations and tried to extrapolate what they will be doing in this space from public information rather than reaching out, so it is great to see comments saying that research along these lines will be happening, and sorry for anything I mischaracterised.
This comment below is also relevant: https://forum.effectivealtruism.org/posts/MSYhEatxkEfg46j3D/the-case-of-the-missing-cause-prioritisation-research?commentId=RGX9f6PXvWkBvCEoK