It’s hard to say for sure without knowing the fraction of solicited EA startups that get funding, but GiveWell has made some angel-esque investments in the past (e.g. New Incentives), and I think some large individual donors have as well.
I get the impression that these are going mostly to programs that already have a lot of evidence and aren’t really exploring the space of possible interventions. I tend to believe that the effectiveness of projects probably follows a power law, and that therefore the most effective interventions are probably ones people haven’t tried yet, so funding variants on existing programs doesn’t help us find those interventions.
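The power-law intuition above can be illustrated with a small simulation. This is just a hypothetical sketch: the Pareto shape parameter `alpha = 1.1` and the sample size are assumed for illustration, not drawn from any real data about interventions. The point is that under a heavy-tailed distribution, a handful of draws accounts for most of the total value, which is why sampling new draws (untried interventions) can matter more than refining known ones.

```python
import random

random.seed(0)

# Assumed, illustrative parameters: effectiveness drawn from a Pareto
# distribution with shape alpha = 1.1 (heavy-tailed), 1000 "projects".
alpha = 1.1
draws = sorted((random.paretovariate(alpha) for _ in range(1000)), reverse=True)

# Under a heavy tail, the top few projects account for a large share
# of the total effectiveness across all 1000.
top_10_share = sum(draws[:10]) / sum(draws)
print(f"share of total effectiveness from top 10 of 1000: {top_10_share:.0%}")
```

With a thin-tailed distribution (say, normal) the top 10 of 1000 would hold only slightly more than 1% of the total; with a heavy tail they hold a far larger share, which is the crux of the argument for exploration.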
This is pretty plausible for AI risk, but not so obvious for generic organization-starting, IMO. Are there specific skills you can think of that might be a factor here?
GiveWell-style research seems very trainable, and it is plausible that GiveWell could hire less experienced people and provide more training if they had significantly more money (I have no information on this though.)
The right way to learn organization-starting skills might be to start an organization; Paul Graham suggests that this is the right way to learn startup-building skills. In that case we’d want to fund more people running experimental EA projects.
I wouldn’t say that New Incentives has “a lot of evidence and aren’t really exploring the space of possible interventions.” But again, this is just dueling anecdata for now.
GiveWell-style research seems very trainable, and it is plausible that GiveWell could hire less experienced people and provide more training if they had significantly more money
GiveWell already hires and trains a number of people with 0 experience (perhaps most of their hires).
The right way to learn organization-starting skills might be to start an organization; Paul Graham suggests that this is the right way to learn startup-building skills. In that case we’d want to fund more people running experimental EA projects.
Ah, good point. This seems like a pretty plausible mechanism.
So if starting new projects and enterprises is the constraint, then surely earning to give (ETG) is less effective at the margin than directly working on, or facilitating support for, those endeavours where they have high expected value?
GiveWell already hires and trains a number of people with 0 experience (perhaps most of their hires).
Oh, cool! I definitely didn’t realize this.