I really appreciated this post, and think there is a ton of room for more impact with more frequent and rigorous cross-cause prioritization work. Your post prompted me to finally write up a related quick take I’ve been meaning to share for a while (which I’ll reproduce below), so thank you!
***
I’ve been feeling increasingly strongly over the last couple of years that EA organizations and individuals (myself very much included) could be allocating resources and doing prioritization much more effectively. That said, I think we’re doing extremely well in relative terms, and greatly appreciate the community’s willingness to engage in such difficult prioritization.
Reasons why I think we’re not realizing our potential:
Not realizing our lack of clarity about how to most impactfully allocate resources (time, money, attention, etc.). Relatedly, an implicit assumption that we’re doing a pretty good job and couldn’t be doing significantly better. I speculate that FTX trauma has significantly reduced the community’s ambition to an unwarranted degree.
Path dependence + insufficient intervention prioritization research (in terms of quality, quantity, and frequency). I thought this post brought up good points about the relative lack of cross-cause prioritization research + dissemination in the EA community despite its importance.
Being insufficiently open-minded about which areas and interventions might warrant resources/attention, and unwarranted deference to EA canon, 80K, Open Phil, EA/rationality thought leaders, etc.
Poorly applying heuristics as a substitute for prioritization work. See this comment and discussion about the neglectedness heuristic causing us to miss out on impact. FWIW I believe this very strongly and think the community has missed out on a ton of impact because of this specific mistake, but I’m unable to write about specifics in detail publicly. Feel free to reach out if you’d like to discuss this in private.
Aversion to feeling uncertainty and confusion (likely exacerbated by stress about AGI timelines).
Attachment to feeling certainty, comfort and assurance about the ethical and epistemic justification of our past actions, thinking, and deference.
Being slow to re-orient to important, quickly evolving technological and geopolitical developments, and being unresponsive to certain kinds of evidence (e.g. within the AI world, not taking important political developments into account; outside of it, not taking AGI into account).
Strong non-impact-tracking incentives (e.g. strong social incentives to have certain beliefs, work at certain orgs, and focus on certain topics), and weak incentives to figure out and act on what is actually most impactful. We don’t hear future beings (or, for the most part, current beings) letting us know that we could be helping them much more effectively by taking one action over another, but we do feel judgment from our friends/in-group very saliently.
Lacking the self-confidence/agency/courage/hero-licensing/interest/time/etc. to figure things out ourselves and share what we believe (and what we’re confused about) with others, especially when it diverges from the aforementioned common sources of deference.
This is a shame given how brilliant, dedicated, and experienced members of the community are, and how much insight people have to offer – both within the community, and to the broader world.
I’m collaborating on a research project exploring how to most effectively address concentration of power risks (which I think the community has been neglecting) to improve the LTF/mitigate x-risk, considering implications of AGI and potentially short timelines, and the current political landscape (mostly focused on the US, and to a lesser extent China). We’re planning to collate, ideate, and prioritize among concrete interventions to work on and donate to, and compare their effectiveness against other longtermist/x-risk mitigation interventions. I’d be excited to collaborate with others interested in getting more clarity on how to best spend time, money, and other resources on longtermist grounds. Reach out (e.g. by EA Forum DM) if you’re interested. :)
I would also love to see more individuals and orgs conduct, fund, and share more cross-cause prioritization analyses (especially in areas under-explored by the community) with discretion about when to share publicly vs. privately.
Thank you for your comment, Kuhanj. I share your belief that the EA movement would benefit from the kinds of suggestions you outlined in your quick take. I particularly valued the discussion of heuristics, as they are often as limited as they are useful!
Regarding your ‘Being slow to re-orient’ suggestion, an important nuance comes to mind: movements can equally falter by pivoting too rapidly. When a community glimpses promise in a new direction, there’s a risk of redirecting significant resources, infrastructure, and attention toward it prematurely. The wisdom accumulated through longer reflection and careful evidence collection often contains (at least some) genuine insight, and we should be cautious about abandoning established priorities to chase every emerging “crucial consideration” that surfaces.
As ever, the challenge lies in finding that delicate balance between responsiveness and steadfastness — being neither calcified in thinking nor swept away by every new intellectual current.