Hey Vasco,
This is a complex question. I wouldn’t say it has literally resulted in founders who meet our criteria not getting into the program, as we prioritize at a slightly higher level. However, the practical shift has been a change in AIM staff focus, moving from outreach/research toward mid-stage funding and philanthropic ecosystem-building. On a smaller scale, I think it has made us more likely to discourage experiments or founders who have some potential but are less promising than other projects (e.g., solo founders).
I also don’t think the bottleneck is really AIM’s funding itself. It’s more related to the size of the ecosystem we are bringing charities into. For example, it might cost AIM around $150k to get a charity started, plus $150k for seed funding. That $300k is an enabling factor, but the charity may require that level of support each year for the next three years. So it’s more about the $900k gap in the mid-stage ecosystem than direct funding for AIM. One potential solution I see is funding circles (meta write-up here, circles we started here).
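To make the arithmetic concrete, here is a minimal sketch of the per-charity figures (the variable names and structure are mine; the dollar amounts are the rough ones from the paragraph above):

```python
# Rough per-charity figures from the comment above (USD).
aim_startup_cost = 150_000  # what it might cost AIM to get a charity started
seed_funding = 150_000      # seed funding on top of that

# The ~$300k enabling factor in year one...
annual_support = aim_startup_cost + seed_funding

# ...may be needed at a similar level each year for the next three years,
# so the mid-stage gap per charity is roughly:
mid_stage_gap = annual_support * 3
print(f"${mid_stage_gap:,}")  # $900,000
```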
Taking all of this into account, I think a reasonable proxy would be around $1M per year donated to mid-stage/AIM charities, which would be worthwhile versus one additional founder. However, I think the variance across cause areas is substantial (it could be half this for animals/mental health and double for global health, or even four times higher for EA meta). I also think personal variance changes things a lot. For example, I would say a top-third founder would be about twice as expensive as an average one.
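Read as rough multipliers on the $1M/year proxy, the variance figures above can be sketched as follows (the multipliers are my reading of the ranges given, and combining them across dimensions is my own extrapolation, not something the comment states):

```python
base_proxy = 1_000_000  # $/year to mid-stage/AIM charities ≈ one average founder

# Rough cause-area multipliers as described above.
cause_multiplier = {
    "animals": 0.5,
    "mental_health": 0.5,
    "global_health": 2.0,
    "ea_meta": 4.0,
}

# Personal variance: a top-third founder ≈ twice an average one.
top_third = 2.0

print(base_proxy * cause_multiplier["global_health"])        # 2000000.0
print(base_proxy * cause_multiplier["ea_meta"] * top_third)  # 8000000.0
```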
Thanks, Joey!
Does your 1 M$/year refer to the value of a random founder relative to nothing, or relative to the best rejected founder? If the best rejected founder had a value of 0.5 M$/year, then it would make sense for a random accepted founder to earn to give if that increased their donations by more than 0.5 M$/year (= (1 − 0.5)*10^6).
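Spelled out, the threshold in the question is just the difference between the two values (`etg_threshold` is a hypothetical helper name, not anything from the thread):

```python
def etg_threshold(value_accepted, value_rejected=0):
    """Annual donation increase needed for earning to give to beat founding.

    value_rejected is the value of the best rejected founder who would take
    the accepted founder's place; 0 means "relative to nothing".
    """
    return value_accepted - value_rejected

# Relative to nothing: the bar is the founder's full value.
print(etg_threshold(1_000_000))           # 1000000
# If the best rejected founder were worth $0.5M/year:
print(etg_threshold(1_000_000, 500_000))  # 500000
```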
We would not accept the next founder down the list, so the relative-to-nothing bar is the correct one to use.