This is a big part of why I find the ‘EA is talent constrained not funding constrained’ meme to be a bit silly. The obvious counter is to spend money learning how to convert money into talent. I haven’t heard of anyone focusing on this problem as a core area, but if it’s an ongoing bottleneck then it ‘should’ be scoring high on effective actions.
There is a lot of outside view research on this that could be collected and analyzed.
This is what many of the core organisations are focused on :) You could see it as 80k’s whole purpose. It’s also why CEA is doing things like EA Grants, and Open Phil is doing the AI Fellowship.
It’s also a central internal challenge for any org that has funding and is trying to scale. But it’s not easy to solve: https://blog.givewell.org/2013/08/29/we-cant-simply-buy-capacity/
Is 80k trying to figure out how to interview the very best recruiters and do some judgmental bootstrapping?
I pretty much agree with this—though I would add that you could also spend the money on just attracting existing talent. I doubt the Venn diagram of ‘people who would plausibly be the best employee for any given EA job’ and ‘people who would seriously be interested in it given a relatively low EA wage’ always forms a perfect circle.
Even if we had funds, the problem of whom to fund is a hard one, and perhaps the money would be better spent simply hiring more staff for EA orgs? The best way to know that you can trust someone is to know them personally, but distributing funds on that basis creates concerns about favouritism, etc.
I strongly disagree on the grounds that these sorts of generators, while not fully general counterarguments, are sufficiently general that I think they are partially responsible for EAs having a bias towards inaction. I'd also favour trying more cheap experiments, given that money is supposedly not the bottleneck.