I strongly agree! Improving the cost-effectiveness (and cost-efficiency) of non-EA resources seems underexplored in EA discussions. I’d argue this applies to talent, not just funding.
In mainstream fields like global development and climate change, there are many talented, impact-driven professionals who aren't familiar with EA or wouldn't join the EA community (perhaps because they disagree with cause-neutrality or its utilitarian foundations). Yet many of them would gladly devote substantial effort and energy to high-impact projects if exposed to important agendas and projects they're well-positioned to tackle. There could be significant value in shaping agendas and channeling these professionals toward more impactful (not necessarily "most impactful" by EA standards) work within their existing domains.
I should note this point is less salient for AI Safety field-building, where there already seem to be more pathways for non-EA people and broader engagement beyond the EA-aligned community.
On a related note, Rethink Priorities' report A Model Estimating the Value of Research Influencing Funders makes a relevant point:
"Moving some funders from an overall lower cost effectiveness to a still relatively low or middling level of cost effectiveness can be highly competitive with, and, in some cases, more effective than working with highly cost-effective funders."
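To illustrate with made-up numbers (mine, not the report's): moving a funder with $100M in annual grants from 1x to 2x cost-effectiveness adds $100M of 1x-equivalent value per year, whereas persuading a 10x-effective funder to deploy an extra $5M adds only $50M-equivalent. Because the pool of less effective funding is so much larger, modest improvements there can dominate.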
Discussed on the forum here.