To play devil’s advocate (these don’t actually represent my beliefs):
I can’t remember any EA orgs failing to reach a fundraising target.
This doesn’t necessarily mean much, because fundraising targets have a lot to do with how much money EA orgs believe they can raise.
Open Phil has recently posted about an org they wish existed but doesn’t, and about funder-initiated startups.
It’s pretty hard to get funding for a new organization; e.g., Spencer and I put a lot of effort into it without much success. The general problem I see is a lack of “angel investing” or its equivalent: the idea of putting money into small, experimental organizations and funding them further as they grow. (As a counter-counterpoint, EA Ventures seems well poised to function as an angel investor in the nonprofit world.)
Also, to address the general point that EA is talent-constrained: the problem might be that there are very few people with the skills needed, and more funding can be used to train people, like MIRI is doing with the summer fellows program. In that case, earning to give is still a good solution to the talent constraint.
It’s pretty hard to get funding for a new organization; e.g., Spencer and I put a lot of effort into it without much success. The general problem I see is a lack of “angel investing” or its equivalent: the idea of putting money into small, experimental organizations and funding them further as they grow.
I agree with this. Moreover, I think there’s a serious lack of funding in the ‘fringe’ areas of EA like biosecurity, systemic change in global poverty, rationality training, animal rights, or personal development. These areas arguably have the greatest impact, but it’s difficult for them to attract the major funders.
For example, I think the Swiss EA groups are quite funding-constrained, but they aren’t well-known to the major funders and movement-building lacks robust evidence.
Have the Swiss EA groups tried to raise funding from the broader community? I had no idea they were funding-constrained until you mentioned it.
It’s correct that the Swiss EA organizations are currently funding-constrained. We haven’t pitched any projects to the international community yet, but we’re considering it if an opportunity arises where it makes sense.
I also think that funding will be less of an issue once more people in the movement transition from being students to earning to give.
Also, do you have other evidence on this than Satvik’s? Have you also tried to get angel funding or something?
This doesn’t necessarily mean much, because fundraising targets have a lot to do with how much money EA orgs believe they can raise.
I agree that this could confound the result, but it’s still some evidence!
The general problem I see is a lack of “angel investing” or its equivalent: the idea of putting money into small, experimental organizations and funding them further as they grow. (As a counter-counterpoint, EA Ventures seems well poised to function as an angel investor in the nonprofit world.)
It’s hard to say for sure without knowing the fraction of solicited EA startups that get funding, but GiveWell has made some angel-esque investments in the past (e.g. New Incentives), and I think some large individual donors have as well.
the problem might be that there are very few people with the skills needed, and more funding can be used to train people, like MIRI is doing with the summer fellows program.
This is pretty plausible for AI risk, but not so obvious for generic organization-starting, IMO. Are there specific skills you can think of that might be a factor here?
It’s hard to say for sure without knowing the fraction of solicited EA startups that get funding, but GiveWell has made some angel-esque investments in the past (e.g. New Incentives), and I think some large individual donors have as well.
I get the impression that these are going mostly to programs that already have a lot of evidence and aren’t really exploring the space of possible interventions. I tend to believe that the effectiveness of projects follows a power law, so the most effective interventions are probably ones people haven’t tried yet, and funding variants on existing programs doesn’t help us find them.
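To make the power-law intuition concrete, here’s a minimal simulation sketch (the Pareto distribution, the exponent, and the sample size are assumptions for illustration, not data about real projects):

```python
import random

# Toy model of the claim above: if project effectiveness is heavy-tailed
# (Pareto here; exponent and sample size are made up for illustration),
# a few draws dominate total impact, so sampling *new* interventions
# matters more than funding variants of known ones.
random.seed(0)
alpha = 1.5  # tail exponent; smaller alpha = heavier tail
effectiveness = [random.paretovariate(alpha) for _ in range(1000)]

total = sum(effectiveness)
ranked = sorted(effectiveness, reverse=True)
print(f"best project:   {ranked[0] / total:.1%} of total impact")
print(f"top 10 of 1000: {sum(ranked[:10]) / total:.1%} of total impact")
```

With a heavy enough tail, a handful of draws account for most of the total, which is the sense in which the untried interventions plausibly dominate.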
This is pretty plausible for AI risk, but not so obvious for generic organization-starting, IMO. Are there specific skills you can think of that might be a factor here?
GiveWell-style research seems very trainable, and it is plausible that GiveWell could hire less experienced people and provide more training if they had significantly more money. (I have no information on this, though.)
The right way to learn organization-starting skills might be to start an organization; Paul Graham suggests that this is the right way to learn startup-building skills. In that case we’d want to fund more people running experimental EA projects.
I wouldn’t say that New Incentives has “a lot of evidence and aren’t really exploring the space of possible interventions.” But again, this is just dueling anecdata for now.
GiveWell-style research seems very trainable, and it is plausible that GiveWell could hire less experienced people and provide more training if they had significantly more money
GiveWell already hires and trains a number of people with 0 experience (perhaps most of their hires).
The right way to learn organization-starting skills might be to start an organization; Paul Graham suggests that this is the right way to learn startup-building skills. In that case we’d want to fund more people running experimental EA projects.
Ah, good point. This seems like a pretty plausible mechanism.
Oh, cool! I definitely didn’t realize this.
So if starting new projects and enterprises is the constraint, then surely ETG is still less effective at the margin than doing, or facilitating support for, these endeavours where they have high expected value?