I would really like to see EA funding orgs more explicitly discuss the costs that their one-grant-at-a-time funding models, plus short notice of (non-)renewal, impose on so many people in the EA community. I realise EA funding took a big hit last year, but for years before FTX Foundation was announced, 80k were claiming EA was talent-constrained rather than funding-constrained, and that most EAs should not be earning to give. The net result is that there are a bunch of people with EA-relevant talents that aren’t particularly applicable outside the EAsphere, who are struggling to make ends meet, or whose livelihood could disappear with little warning.
After hearing multiple experiences like this, it’s really hard for me to encourage anyone to go into EA meta work until the landscape gets a lot smoother.
I do dislike this feature of EA, but I don’t think the solution is to transition away from a one-grant-at-a-time model. A better approach would probably be exit coaches who help EAs find a new career outside EA, for those who built up a bunch of skills because funders (or other generally EA-endorsed sources) told them they would be given money if they used such skills for the benefit of the universe.
What talents do you think aren’t applicable outside the EAsphere?
(Edit: I do also note that I believe 80k should be taken a lot less seriously than they present themselves, and than most EAs take them. Their incorrect claims of EA being talent-constrained are one of many reasons I distrust them.)
I’m not sure what the solution is. More experimentation seems generally like a good idea, but EA grantmakers seem quite conservative in the way they operate, at least once they’ve locked in a modus operandi.
For what it’s worth, my instinct is to try a model with more ‘grantmakers’ who take a more active, product-managery/ownery role: they would make fewer grants, but the grants would be more like contracts of employment, with the grantmakers taking some responsibility for the ultimate output (and able to terminate a contract, like a normal employer, if the ‘grant recipient’ underperforms). This would need a lot more work-hours, but I can imagine it more than paying for itself through the greater security of the grant recipients and the increased accountability for both recipients and grantmakers.
What talents do you think aren’t applicable outside the EAsphere?
Community building doesn’t seem to have that much carryover. That’s not to say it’s useless, just that it’s not going to look anywhere near as good to most employers as a vaguely equivalent for-profit role, like being a consultant at some moderately prestigious firm. Research seems comparable: it’s unlikely to be taken seriously for academic jobs, and likely to be far too abstract for for-profits. In general, grantees and even employees at small EA orgs get little if any peer support or training budget, which will stymie their professional development even when they’re working in roles that have direct for-profit equivalents (I’ve written a little about this phenomenon for the specific case of EA tech work here).
I agree that a one-grant-at-a-time funding model has downsides, but what I mostly see is many EA-meta projects funded with few if any feedback loops and little oversight.
In for-profit jobs, people usually have managers, and if their work doesn’t deliver the expected results, they get negative feedback and improvement plans before being fired or moved to a different role.
In meta-EA, I often see people get funding with no strings attached and no measurement of effectiveness; the only feedback they get is whether their grant is renewed a year later. I think a better solution than multi-year no-strings-attached funding would be much more regular feedback from funders, so grantees can get advice, or at least not be surprised if funders decide not to renew their grant.
I also think this has very bad selection effects, with people optimizing their grant applications instead of their positive impact, since the application is often the ~only information that funders have. I’m worried that some funded EA meta projects that spent a lot of time on their grant applications are actually having negative counterfactual impact.
I also think that as long as you have clear (ideally measurable) counterfactual results and a strong theory of change, it’s relatively easy to get funding for EA meta work (compared to e.g. animal welfare or global health).
For work in global health, you should stop getting funded if your work is less cost-effective than buying more bednets. Similarly, in EA meta you should stop getting funded if your work is less cost-effective than buying more ads for 80k (a random example of a highly effective, near-infinitely scalable intervention; I don’t know what the ideal benchmark should be). If EA Poland could show that their programs are more cost-effective than e.g. more ads for 80k, I think the people currently funding 80k ads would fund them instead.
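To make the benchmark rule concrete, here’s a minimal sketch in Python. The function name, the dollar figures, and the “impact unit” measure are all made up for illustration; real cost-effectiveness comparisons are of course much messier than a single number.

```python
# A minimal sketch of the benchmark rule described above, not anyone's actual
# grantmaking procedure. The numbers and the "impact unit" measure are
# hypothetical.

def should_keep_funding(project_cost_per_unit: float,
                        benchmark_cost_per_unit: float) -> bool:
    """Keep funding a project only if it buys a unit of impact more cheaply
    than the benchmark intervention (bednets for global health, or more
    80k ads for EA meta, say)."""
    return project_cost_per_unit < benchmark_cost_per_unit

# Hypothetical example: the project produces one impact unit per $120,
# while the benchmark produces one per $100, so under this rule the
# marginal dollar should go to the benchmark instead.
print(should_keep_funding(project_cost_per_unit=120.0,
                          benchmark_cost_per_unit=100.0))  # False
```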
The net result is that there are a bunch of people with EA-relevant talents that aren’t particularly applicable outside the EAsphere
I think this is extremely bad regardless of the funding model and funding situation, and people should try very hard to avoid it. It leads to terrible incentives and dynamics, and probably makes you less effective in your EA role (including community building). See My mistakes on the path to impact; I recommend reading the whole post, but here’s one quote:
I could have noticed the conflict between the talent-constrained message as echoed by the community with the actual 80,000 Hours advice to keep your options open and having Plan A, B and Z.