Improving Impact Infrastructure in the Talent Space


TL;DR

Funding for talent-focused organisations (TFOs) in EA remains highly concentrated, with more than 80% coming from Open Philanthropy and EA Funds. To understand whether clearer impact reporting could unlock new funding for the space, this pre-study interviewed seven funders.

While historical impact data isn’t the main decision criterion for most funders, improved impact reporting infrastructure could still open up more funding to the talent space. In addition, a shared reporting framework could support funders in making more informed decisions and give TFOs a clearer picture of funder expectations.

We suggest three next steps: (1) develop standardised indicators with input from both funders and TFOs, (2) gather TFO perspectives to complement this funder-focused study, and (3) explore the value of an independent evaluation agency for the talent space.

We define a TFO as any organisation whose explicit goal is to help people increase the impact they have through their careers. Some examples are meta EA groups like EA Sweden or EA Netherlands, and coaching or training organisations like 80k, SuccessIf, and BlueDot Impact.

Background

The 2024 EA Meta Funding Landscape Report found that 80% of funding in the talent field comes from Open Philanthropy and EA Funds, and the share is even higher when only talent-focused organisations (TFOs) are included. In a memo for the Meta Coordination Forum 2024, Patrick Gruban argued that the limited legibility of TFOs’ impact reporting may be a hurdle for new funders entering the field, and that some impactful TFOs are underfunded as a result.

In the fall of 2025, Emil Wasteson Wallén led a pre-study to test this assumption and to examine whether a shared impact reporting framework for TFOs could increase the total funding flowing to the talent space.

Below, we summarise the findings and outline three potential interventions for taking this project forward.

Methodology

Seven “funders”—individual donors, grantmakers, and philanthropic advisors—were interviewed to capture the funder perspective. Two were established funders within the talent space. Three others had made, or seriously considered making, grants in the talent space at least once. The remaining two had no direct experience in the talent space but were active funders in other parts of the EA meta ecosystem (primarily effective giving), making them particularly interesting as potential future funders of the talent space.

The interviews aimed to explore:

  1. What evaluation criteria the funders use

  2. Whether a lack of legible impact reporting is a bottleneck

  3. What value a shared impact reporting framework could provide

  4. What factors would be important to include in such a framework

Findings

1. Historical impact data isn’t the main decision criterion

Three funders reported using historical impact data as a key criterion in their decision-making, but even for them it was not the main one. A pattern was that larger and more established funders placed greater weight on historical data.

The criteria most funders highlighted as decisive in their decisions were:

  • Strategy: Whether the project’s approach seems sound and well-targeted

  • Impact potential: How big the impact could be if the project did really well

  • Team: The competence, motivation, and track record of the people involved

Naturally, the younger a project or organisation is, the less relevant historical impact data becomes relative to these other factors.

Finally, four funders mentioned that Open Philanthropy’s funding recommendations and decisions significantly influenced their own.

2. Impact reporting could potentially make the talent space more accessible for new funders

Of the five funders who were not already established in the talent space, one noted that uncertainty about an organisation’s impact had delayed their decision by nearly ten months, and that clearer impact reporting could have accelerated the process.

The remaining four did not view the lack of robust historical impact data as a major gap in their decision-making. However, three of them said they would likely take such data into account if it were easily accessible and commonly used by other funders in the field.

This suggests that a shared impact reporting framework could potentially increase the total funding for the talent space.

3. Additional benefits of a shared impact reporting framework

In addition to potentially unlocking new funding for the talent space, the funders highlighted three other ways in which a shared impact reporting framework could add value:

  • Improved decision quality: Existing funders could make more informed and consistent grant decisions, resulting in better resource allocation and a higher overall impact within the talent space.

  • Greater clarity for organisations: TFOs would gain a clearer understanding of funder expectations and decision criteria.

  • Efficiency gains: Standardised reporting and other shared resources could reduce the need for each organisation to reinvent the wheel, saving time and administrative resources.

4. Factors important in a framework

Assuming a shared impact reporting framework is developed, two aspects were highlighted as especially important for it to be useful. The first concerns the framework’s content, the second its adoption.

  • Standardised indicators: All but one funder emphasised that establishing standardised indicators would be essential, mainly to enable comparison between organisations in the talent space.

  • Adoption and distribution: Nearly all funders stressed that a major risk—regardless of the framework’s quality—is low adoption. If only a small number of funders or TFOs use it, most of its potential value would be lost.

Recommendations

Based on the findings, we remain uncertain whether developing a full shared impact reporting framework would be valuable. However, we believe there is evidence to support continued work. Below, we outline three promising future interventions, presented in increasing order of complexity.

1. Standardise indicators

As a first low-effort intervention, we recommend establishing a few small sets of standardised indicators for evaluating TFOs. This would help existing funders in the space compare TFOs more accurately, give new funders a better understanding of the impact TFOs create, and give TFOs a clearer understanding of how they are evaluated.

We believe it’s essential that both funders and TFOs are actively involved in developing these indicators, to ensure diverse perspectives are represented and to build buy-in on both sides. More research and conversations are needed to determine the most useful indicators, but some examples mentioned by the funders include the following (a rough illustration of how they might be combined follows the list):

  • Number of placements: Individuals transitioning into high-impact roles

  • Attribution: The organisation’s contribution to those transitions

  • Time to impact: The lag between intervention and observed career change

  • Impact per placement: A quantified estimate of the total impact created through each transition
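To make the comparison point concrete, here is a minimal, purely hypothetical sketch of how a funder might combine these indicators into a single back-of-the-envelope figure. The data structure, the numbers, and the formula (placements × attribution × impact per placement, divided by budget) are illustrative assumptions on our part, not something proposed by the interviewed funders.

```python
# Hypothetical sketch (not from the pre-study): combining the indicators above
# into a rough cost-effectiveness figure so TFOs can be compared on one axis.
# All names, numbers, and the formula itself are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class TFOReport:
    name: str
    placements: int               # individuals transitioning into high-impact roles
    attribution: float            # estimated share of each transition attributable to the TFO (0-1)
    impact_per_placement: float   # quantified impact estimate per transition (arbitrary units)
    annual_budget_usd: float      # yearly spend


def impact_per_dollar(report: TFOReport) -> float:
    """Attributable impact per dollar: placements x attribution x impact per placement, over budget."""
    attributable_impact = report.placements * report.attribution * report.impact_per_placement
    return attributable_impact / report.annual_budget_usd


# Comparing two made-up organisations.
reports = [
    TFOReport("Org A", placements=40, attribution=0.3, impact_per_placement=10.0, annual_budget_usd=500_000),
    TFOReport("Org B", placements=15, attribution=0.7, impact_per_placement=12.0, annual_budget_usd=400_000),
]

for r in sorted(reports, key=impact_per_dollar, reverse=True):
    print(f"{r.name}: {impact_per_dollar(r):.6f} impact units per USD")
```

Time to impact is omitted here; in practice it could enter as a discount on impact realised further in the future. The point is only that agreed-upon indicator definitions would make this kind of side-by-side comparison possible in the first place.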

2. Better understand the needs of TFOs

This pre-study focused on the perspectives of funders. We think there is substantial value in complementing this with insights from TFOs themselves. They may, for example, identify bottlenecks, considerations, or practical challenges that funders have overlooked.

Beyond informing a funder-facing framework, these findings could also help improve coordination and collaboration among TFOs. This might take the form of a shared Monitoring, Evaluation & Learning (MEL) playbook or a resource bank with best practices and templates.

3. Explore the value of an independent evaluation agency in the talent space

Just as GiveWell is a trusted evaluator for global health interventions and Animal Charity Evaluators for animal welfare, an independent evaluator for the talent space (and potentially for the EA meta space at large) might be desirable. The recommendations from such an agency could not only guide how the more than USD 100 million spent annually in the space is allocated, but also strengthen the credibility of the talent ecosystem.

That said, it’s not obvious that such an agency would be desirable. One counterargument is that historical data isn’t the most important decision criterion; another is that TFOs’ theories of change may vary too much in their approaches to allow fair comparisons; a third is that an evaluator wouldn’t affect funding allocation enough to justify its own costs.

Still, we believe this idea warrants deeper investigation.


We hope these findings spark further discussion among funders, TFOs, and other actors in the ecosystem. If you are interested in contributing to the next phase of this work, please reach out to Emil or Patrick. While neither of us will have the capacity to continue leading the project, we’d be happy to share insights, material, and connections to help others take it forward.

A big thanks to David Moss for the help with the interview outline, and to Devon Fritz and Marieke de Visscher for the support with introductions to funders.