Why doesn’t AIM maintain a ‘near-miss’ founder pool between CEIP rounds?

Acknowledgements: me, myself, and AI.

I’d like to raise a question about AIM’s charity entrepreneurship incubation program (CEIP) process that I don’t have an answer to, and which seems worth asking from a talent allocation and cost-effectiveness perspective.

My understanding (which may be wrong in places) is:

- AIM runs a competitive recruitment round roughly twice per year.

- A large number of applicants apply. I was told by a member of the team that there were over 4000 a couple of years ago.

- A much smaller number pass written stages and work tests.

- An even smaller number reach final interviews (roughly the top 1% of applicants, I think).

- A subset of those are selected to found charities.

- The process then resets with a new pool.

The question is: why does the hiring effort reset each round, rather than treating near-miss candidates as a pre-vetted talent pool for future founding rounds?

Maybe this is happening informally. But from the outside, it looks like the system repeatedly pays high selection costs that may not be fully necessary.

People who reach the final stages but aren’t selected are valuable data points. They’ve already cleared filters that are expensive to run: work tests, interviews, reference checks. In most fields, when you identify strong candidates but don’t place them in a specific role, you keep them warm for future openings because the costly part (working out whether they’re any good) is already behind you.

Steelmanning

There are sensible explanations for why a near-miss pool might not be heavily used:

  1. People who were strong candidates one year may no longer be available or willing to found in the next.

  2. If AIM is refining its selection process over time, earlier impressions become less reliable.

  3. There seems to be a strong norm in EA around open processes*, and drawing heavily from a semi-closed pool could undermine that. If this is the explanation, the question becomes: could departing from that norm produce a better outcome from a cost-effectiveness perspective, and if so, is it worth deviating from the standard process for that reason?

    *These are often considered to be ‘fairer’ than closed rounds. However, they may also discard some types of information that are hard to capture in standardised rounds (e.g. unusually high initiative, ability to build relationships in the field, demonstrable commitment to EA principles).


These are all real considerations, and I’m not sure which (if any) reflect the true reasoning.

Running large selection rounds is expensive in staff time. If many strong candidates reach late stages and then drift to other sectors, that’s a talent allocation problem for EA more broadly.

My tentative view is that there’s probably a group of people who are good enough to found something valuable: not quite at the top of the list in a given AIM round, but considerably stronger than the average applicant in the next one. If so, re-running the full funnel each time rather than drawing from a pre-vetted group is wasteful.

Future rounds could:

- check whether anyone on the ‘bench’ wants to found one of this round’s ideas;

- if there aren’t enough matches, run a new public round to replenish both the bench and the current pipeline; and

- track outcomes to see whether ‘bench’ candidates who later found perform comparably to first-time-successful candidates.

Uncertainties

I have no data on how many near-miss candidates go on to found things independently, or on how much of this already happens informally. AIM may have already worked through all of this and concluded that the current system is optimal for reasons not visible from the outside.

Maybe…

Near-miss candidates might be a high-leverage talent pool in EA that isn’t being systematically tracked (as far as I’m aware).

I’d be interested in how AIM currently thinks about this, and whether others have had similar thoughts.