I agree that a one-grant-at-a-time funding model has downsides, but in practice I see many EA-meta projects funded with few or no feedback loops and little oversight.
In for-profit jobs, people usually have managers, and if their work doesn’t get the expected results they get negative feedback and improvement plans before being fired or moved to different roles.
In meta-EA, I often see people get funding with no strings attached and no measurement of effectiveness; the only feedback they get is whether their grant is renewed a year later. I think a better solution than multi-year no-strings-attached funding would be much more regular feedback from funders, so that grantees get advice along the way, or at least aren't surprised if the funder decides not to renew their grant.
I also think this creates very bad selection effects, with people optimizing their grant applications instead of their positive impact, since the application is often the ~only information that funders have. I'm worried that some funded EA meta projects that spent a lot of time on their grant applications are actually having negative counterfactual impact.
I also think that as long as you have clear (ideally measurable) counterfactual results and a strong theory of change, it's relatively easy to get funding for EA meta work (compared to e.g. animal welfare or global health).
For work in global health, you should stop getting funded if your work is less cost-effective than buying more bednets. Similarly, in EA meta you should stop getting funded if your work is less cost-effective than buying more ads for 80k (a random example of a highly effective, near-infinitely scalable intervention; I don't know what the ideal benchmark should be). If EA Poland could show that their programs are more cost-effective than e.g. more ads for 80k, I think the people currently funding 80k ads would fund them instead.
The net result is that there are a bunch of people with EA-relevant talents that aren’t particularly applicable outside the EAsphere
I think this is extremely bad regardless of the funding model and funding situation, and people should try very hard to avoid it. It leads to terrible incentives and dynamics, and probably makes you less effective in your EA role (including community building). See My mistakes on the path to impact; I recommend reading the whole post, but here's one quote:
I could have noticed the conflict between the talent-constrained message as echoed by the community with the actual 80,000 Hours advice to keep your options open and having Plan A, B and Z.