Why doesn’t AIM maintain a ‘near-miss’ founder pool between CEIP rounds?
Acknowledgements: me, myself, and AI.
I’d like to raise a question about AIM’s charity entrepreneurship incubation program (CEIP) process that I don’t have an answer to, and which seems worth asking from a talent allocation and cost-effectiveness perspective.
My understanding (which may be wrong in places) is:
- AIM runs a competitive recruitment round roughly twice per year.
- A large number of applicants apply. I was told by a member of the team that there were over 4000 a couple of years ago.
- A much smaller number pass written stages and work tests.
- An even smaller number reach final interviews (the last 1%, I think).
- A subset of those are selected to found charities.
- The process then resets with a new pool.
The question is: why does the hiring effort reset each round, rather than treating near-miss candidates as a pre-vetted talent pool for future founding rounds?
Maybe this is happening informally. But from the outside, it looks like the system repeatedly pays high selection costs which may not be fully necessary.
People who reach the final stages but aren’t selected are valuable data points. They’ve already cleared filters that are expensive to run: work tests, interviews, reference checks. In most fields, when you identify strong candidates but don’t place them in a specific role, you keep them warm for future openings because the costly part (working out whether they’re any good) is already behind you.
Steelmanning
There are sensible explanations for why a near-miss pool might not be heavily used:
- People who were strong candidates one year may no longer be available or willing to found in the next.
- If AIM is refining its selection process over time, earlier impressions become less reliable.
- There seems to be a strong norm in EA around open processes*. Drawing heavily from a semi-closed pool could undermine that. If this is the explanation, then the question becomes: 'could undermining that norm produce a better outcome from a cost-effectiveness perspective and, if so, is it worth departing from the standard process for that reason?'
*These are often considered to be ‘fairer’ than closed rounds. However, they may also discard some types of information that are hard to capture in standardised rounds (e.g. unusually high initiative, ability to build relationships in the field, demonstrable commitment to EA principles).
These are all real considerations, and I’m not sure which (if any) reflect the true reasoning.
Running large selection rounds is expensive in staff time. If many strong candidates reach late stages and then drift to other sectors, that’s a talent allocation problem for EA more broadly.
My tentative view is that there's probably a group of people who are good enough to found something valuable, who aren't quite at the top of the list in a given AIM round, but who are considerably stronger than the average applicant in the next one. If so, re-running the full funnel each time rather than drawing from a pre-vetted group is wasteful.
Future rounds could:
- check whether anyone on the 'bench' wants to found one of this round's ideas;
- if there aren't enough matches, run a new public round to replenish both the bench and the current pipeline; and
- track outcomes to see whether 'bench' candidates who later found perform comparably to first-time-successful candidates.
Uncertainties
I have no data on how many near-miss candidates go on to found things independently, or how much of this already happens informally. AIM may have already worked through all of this and concluded the current system is optimal for reasons not visible from outside.
Maybe…
Near-miss candidates might be a high-leverage talent pool in EA that isn’t being systematically tracked (as far as I’m aware).
I’d be interested in how AIM currently thinks about this, and whether others have had similar thoughts.
One other reason worth considering is that AIM may be hiring to a set bar, rather than filling a pre-determined number of seats. People who miss this bar, in AIM’s eyes, won’t make exceptional founders. Of course, many people grow and improve between rounds, but equally, many don’t.
AIM do maintain a list of second-placers internally, who are offered as candidates for high-level roles within their incubated charities.
(Disclosure: I know people on the team, but don’t actually know if this is the case)
They may be, although absolute candidate quality doesn't seem to be the only consideration. For context, the feedback I received as a finalist suggested a different mechanism. I was told that a main consideration was that I was relatively locked into a single idea, and that this idea was among the most popular in the cohort. 'This does mean we have to make the difficult decision of turning down talented potential founders like yourself who are better suited to some popular ideas.'
The implication was that they expected to be able to find founders for that idea regardless, so they prioritised more flexible candidates or those interested in less popular ideas in order to maximise the total number of charities launched.
So I don't think there's a pure 'fixed bar' at work: some candidates who clear the bar might still be turned down due to idea-level constraints.
I’m not sure to what extent both of these are operating simultaneously (e.g. a minimum bar + then matching), but if the latter is a meaningful factor, it seems to strengthen the case for tracking near-miss candidates across rounds rather than resetting the pool each time.
(Technically not working at AIM anymore, but I was the CEO until recently.)
So I think the broad point is that AIM, and others, should aim to create value from people who get close, not just those who get in. "You get into AIM = huge win vs you just miss and get nothing" is not ideal. This is a solid idea, and I think AIM (and anyone with strong application pools) should probably spend more time on this.
Re: Matching people / helping them found outside of AIM
We did try this a couple of times, but it did not result in especially strong charities. For a charity to become top tier (which is where almost all our modeled EV comes from), many things have to go right simultaneously. Even if two near-misses connect, it is harder for them without cohort benefits, seed funders, and time to test their match deeply before committing. In general, a 50% version of AIM does not yield 50% of the EV; it yields more like 5%.
Re: Closed vs open rounds
If we did closed rounds including people from prior cohorts, we would likely lose ~50% of our talent pool but save ~90% of team time on vetting and comms. However, this is not a good trade-off. Smaller cohorts mean worse matching and fewer charities. I am strongly in favor of open application rounds and think this is still underused in the EA movement.
Re: What could work for near-miss folks
What we have seen work are training programs for people going early into adjacent career paths (AIM has run some of these; HIP and Impactful Policy are strong current examples). Another approach is placing people in high-impact charities (often older AIM charities), as Huw mentions. We could be more systematic about both. More broadly, organizations with strong applicant pools should think carefully about how to create value from strong but non-selected candidates.
P.S. I originally posted this as a quick take and have been thinking about this question in more detail since.