This doesn’t necessarily mean much, because fundraising targets have a lot to do with how much money EA orgs believe they can raise.
I agree that this could confound the result, but it’s still some evidence!
The general problem I see is a lack of “angel investing” or its equivalent: the idea of putting money into small, experimental organizations and funding them further as they grow. (As a counter-counterpoint, EA Ventures seems well poised to function as an angel investor in the nonprofit world.)
It’s hard to say for sure without knowing the fraction of solicited EA startups that get funding, but GiveWell has made some angel-esque investments in the past (e.g. New Incentives), and I think some large individual donors have as well.
the problem might be that there are very few people with the skills needed, and more funding can be used to train people, like MIRI is doing with the summer fellows program.
This is pretty plausible for AI risk, but not so obvious for generic organization-starting, IMO. Are there specific skills you can think of that might be a factor here?
It’s hard to say for sure without knowing the fraction of solicited EA startups that get funding, but GiveWell has made some angel-esque investments in the past (e.g. New Incentives), and I think some large individual donors have as well.
I get the impression that these are going mostly to programs that already have a lot of evidence and aren’t really exploring the space of possible interventions. I tend to believe that the effectiveness of projects probably follows a power law, and that therefore the most effective interventions are probably ones people haven’t tried yet, so funding variants on existing programs doesn’t help us find those interventions.
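To make the power-law intuition concrete, here is a small illustrative sketch (my own, not from the discussion above; the tail index `alpha = 1.5` is an arbitrary assumption) comparing a typical project with a rare outlier under a Pareto distribution:

```python
# Illustrative sketch: under an assumed Pareto power law for project
# effectiveness, rare projects dwarf typical ones, which is why
# exploring untried interventions can matter so much.

def pareto_quantile(p, alpha=1.5):
    """Effectiveness at the p-th quantile of a Pareto(alpha) distribution
    with minimum value 1. Inverse CDF: x = (1 - p) ** (-1 / alpha)."""
    return (1 - p) ** (-1 / alpha)

median_project = pareto_quantile(0.5)          # a typical project
outlier_project = pareto_quantile(0.999)       # a 1-in-1000 project

print(f"typical project effectiveness:   {median_project:.2f}")
print(f"1-in-1000 project effectiveness: {outlier_project:.2f}")
print(f"ratio: {outlier_project / median_project:.0f}x")
```

With these assumed numbers, the 1-in-1000 project is roughly 63 times as effective as the median one, so a funder who only backs variants of known programs plausibly forgoes most of the available value.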
This is pretty plausible for AI risk, but not so obvious for generic organization-starting, IMO. Are there specific skills you can think of that might be a factor here?
GiveWell style research seems very trainable, and it is plausible that GiveWell could hire less experienced people & provide more training if they had significantly more money (I have no information on this though.)
The right way to learn organization-starting skills might be to start an organization; Paul Graham suggests that this is the right way to learn startup-building skills. In that case we’d want to fund more people running experimental EA projects.
I wouldn’t say that New Incentives has “a lot of evidence and aren’t really exploring the space of possible interventions.” But again, this is just dueling anecdata for now.
GiveWell style research seems very trainable, and it is plausible that GiveWell could hire less experienced people & provide more training if they had significantly more money
GiveWell already hires and trains a number of people with 0 experience (perhaps most of their hires).
The right way to learn organization-starting skills might be to start an organization; Paul Graham suggests that this is the right way to learn startup-building skills. In that case we’d want to fund more people running experimental EA projects.
Ah, good point. This seems like a pretty plausible mechanism.
So if starting new projects and enterprises is the constraint, then surely earning to give (ETG) is still marginally less effective than directly working on these endeavours, or facilitating support for them, where they have high expected value?
GiveWell already hires and trains a number of people with 0 experience (perhaps most of their hires).
Oh, cool! I definitely didn’t realize this.