I suspect some of this comes down to funding constraints, which are quite significant in both global health & animal welfare. “[R]un[ning] a mentoring program and suss[ing] them out” is expensive. “[E]nlarg[ing] your budgets to accommodate” an extra person at launch is expensive. The harsh reality is that much of the money invested in AIM or early-stage charities would have counterfactually gone to GiveWell or ACE-recommended charities instead. That’s a high bar.
If you think the AIM ecosystem’s cost-effectiveness is about the same as the counterfactual use of donor resources under the current operating procedures, then reducing the founder selection bar to meet a quota, or accepting project proposals that are, in expectation, not as strong as AIM’s own proposals, could have a net negative effect on the world once the counterfactuals are considered.
(Not all of your ideas would raise this concern—for example, if you are right that AIM puts too much weight on academic pedigree, then adjusting that weight downward should improve cost-effectiveness).
I appreciate what you’re saying, if you want to be EA orthodox. I’m talking about evolving EA and changing some things.
It is most assuredly NOT expensive to run a mentoring program. It’s a hugely significant pipeline of candidates, one you get much deeper insight into than a short application process allows, and its value greatly exceeds its cost. All you have to do is spread it to all the members of AIM rather than pay for a new department, i.e., each member of AIM takes on mentoring a few people. That’s great for the whole org and infuses it with exactly the culture a charity-enabling org should have.
As for the quota, isn’t that self-set? The budget is there to choose 25 people. If instead you choose only 20 because your criteria rejected the rest, now you have a budget excess. Don’t do that. It’s far better to take a risk on the five you weren’t sure of, because the budget is already there, and if even one of them turns out to be a hidden gem, you got them instead of losing them.
In hits-based thinking, which is an EA staple, that’s what you do: you sign 25 bands and only a few make a hit record, but that’s enough to support the whole process. The point is that the researchers in AIM’s back room choosing the cause priorities are most definitely not arbiters able to pick the best hits. They can’t do it alone, they never will, and it was a fool’s errand to think they could. Everyone knows there’s no magic formula for picking a hit record, you just have to sign a bunch of bands and let them go crazy and see what happens. EA is effectively saying, “No. We think we can use science and spreadsheets to pick the hit songs.” Nope. Doesn’t work (world too big). But most definitely keep some people in the back room working on that (Go researchers, we love you!), just don’t let that be the only thing you do... you also have to go out to the clubs and see what the kids are dancing to (i.e., bring more veteran field workers into the process, have a mentoring pipeline, and change your criteria substantially to include more and reject less, rather than relying only on researchers in the back room).
Both in science, in music, in movies, nobody knows where the next hit will come from, so get broader and accept more.
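The quota argument above can be sketched as a toy expected-value calculation. All the numbers here (hit rates, hit value, marginal cost) are made up purely for illustration; the only structural assumption, taken from the argument itself, is that the cohort budget is largely sunk at selection time, so borderline admits are cheap at the margin.

```python
# Toy model of the "sign 25 bands" argument. Every number is illustrative.
# Assumption: the budget for a cohort of 25 is already committed (sunk),
# so admitting the borderline 5 carries only a small marginal cost.

P_HIT = 0.10              # assumed hit rate for a "safe" pick
P_HIT_BORDERLINE = 0.05   # assumed (lower) hit rate for a borderline pick
HIT_VALUE = 100           # value of one hit, in arbitrary units
MARGINAL_COST = 2         # extra cost per borderline admit

def expected_hits(n, p):
    """Expected number of hits from n independent candidates."""
    return n * p

# Cohort of 20 safe picks vs. the same 20 plus 5 borderline picks.
ev_20 = expected_hits(20, P_HIT) * HIT_VALUE
ev_25 = ev_20 + expected_hits(5, P_HIT_BORDERLINE) * HIT_VALUE - 5 * MARGINAL_COST

print(f"EV of 20 picks: {ev_20:.1f}")
print(f"EV of 25 picks: {ev_25:.1f}")
```

Under these made-up numbers the marginal five are worth taking; whether that holds in reality depends entirely on how much of the cost is actually sunk and how much worse the borderline hit rate really is, which is the crux of the disagreement below.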
Thanks—this is helpful in understanding different assumptions at play here.
At the outset, I’m inclined to defer to AIM—not because I am inclined to defer to EA orgs in general, but because experience suggests that nonprofits (and other entities without market discipline) are much more likely to err by expanding to fill available budget than by constricting their activities. So while it’s certainly possible that AIM has misjudged the tradeoffs, I start at a place of some deference.
All you have to do is spread it to all the members of AIM rather than pay for a new department, i.e., each member of AIM takes on mentoring a few people.
If the AIM staff have available bandwidth, I’m guessing there are a number of different projects they could take on. We don’t know what the counterfactual use of staff time would be.
As for the quota, isn’t that self-set? The budget is there to choose 25 people. If instead you choose only 20 because your criteria rejected the rest, now you have a budget excess.
An organization does not have to spend its budget; it can spend less and then has to fundraise less for next year. If I recall correctly, AIM is very conscious of what the likely counterfactual use of funds donated to it would be.
I think this is a stronger point insofar as the relevant costs are fixed / sunk at the point of selection. I know some are, and some aren’t, but don’t know the relative proportions. (Note that I would include the broader ecosystem’s costs, such as seed funding from non-AIM sources.)
Everyone knows there’s no magic formula for picking a hit record, you just have to sign a bunch of bands and let them go crazy and see what happens.
This metaphor doesn’t work very well for me. In GHD/AW work, we can get a great return from the existing catalog of proven “artists” (nonprofits); in music, by contrast, assume it is hard to invest money at good returns in any artist who has already proven themselves. Also, the hypothetical music investor should be willing to invest whenever the expected value is positive; there is no predetermined hard cap on available funding, and the investor can raise as much money as they can find good investment targets for. The charitable “investor,” in contrast, obtains impact (which isn’t convertible back into money) and so is limited by the size of their bankroll.
There’s a limited amount of seed and mid-stage funding available, so I would have concerns about exceeding the ecosystem’s carrying capacity. The practical effect of significantly increased cohort sizes may be moving more of the culling decisions from AIM to the early funders. That strikes me as having some upsides and downsides.
To the extent that one thinks there should be more seed/mid-stage funding (and is willing to accept the counterfactual reduction in funding for established charities), that isn’t really in AIM’s power to control.
In the end, my napkin model (low confidence) goes something like this:
Investing in AIM and its early-stage incubated charities, under current operating conditions, is slightly more cost-effective than the GiveWell or ACE alternatives.
AIM has moderate ability to identify founders that are more likely to be successful.
AIM has moderate ability to identify projects that are more likely to be successful.
That model doesn’t rely on a belief that AIM is great at identifying good founders or projects. But it does suggest that being less selective could easily flip the decision to donate within the AIM ecosystem vs. the GiveWell or ACE ones.
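The flip that model warns about can be sketched numerically. Every number here is illustrative, not a claim about AIM’s actual cost-effectiveness; the point is only to show how a small assumed edge over the GiveWell/ACE baseline can disappear once marginal admits dilute the average.

```python
# Illustrative sketch of the napkin model. All numbers are assumptions,
# not estimates of AIM's real cost-effectiveness.

BASELINE = 1.0           # GiveWell/ACE counterfactual, normalized to 1.0
AIM_CURRENT = 1.1        # assumed slight edge under current selectivity
MARGINAL_QUALITY = 0.5   # assumed impact/dollar of less-selective marginal admits

def ecosystem_cost_effectiveness(share_marginal):
    """Blended impact per dollar if a share of the cohort is marginal admits."""
    return (1 - share_marginal) * AIM_CURRENT + share_marginal * MARGINAL_QUALITY

for share in (0.0, 0.1, 0.2, 0.3):
    blended = ecosystem_cost_effectiveness(share)
    verdict = "beats" if blended > BASELINE else "loses to"
    print(f"{share:.0%} marginal admits: {blended:.2f} ({verdict} baseline)")
```

With these numbers the decision flips somewhere between 10% and 20% marginal admits; a believer in the earlier hits-based argument would instead set MARGINAL_QUALITY high enough that no realistic share flips it, which is exactly where the two views diverge.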