Bottlenecks and Solutions for the X-Risk Ecosystem

This is a summary of a report we wrote to clarify the bottlenecks of the x-risk ecosystem in a pragmatic, solution-oriented manner, outlining their top-level contours and possible solutions. The report focuses on what we consider to be the two central bottlenecks at present: a lack of senior talent and a lack of projects with a high positive expected value.

Background

This is the culmination of a strategic needs assessment of the x-risk ecosystem, which included a detailed literature review and interviews with 20+ individuals from across the ecosystem (including staff at FHI, CEA, CHAI and CSER, aspiring x-risk talent, and leaders of independent x-risk mitigation projects).

Insights gleaned from interviews have been kept anonymous to protect the privacy of the interviewees. Our literature review found relatively scarce quantitative data on the x-risk ecosystem, so our recommendations stem from subjective analysis of largely qualitative data, much of which cannot be cited. Footnotes and details of our methodology are included in the appendices of the full report.

While the main bottlenecks are already well-known within the community, we think this report can be valuable by providing additional details and concrete action plans.

This was a voluntary self-study project conducted by principal investigators:

  • Florent Berthet, co-founder of EA France

  • Oliver Bramford, freelance digital marketing consultant

With strategic support from Konrad Seifert and Max Stauffer, co-founders of EA Geneva.

Summary

X-risk mitigation is a fledgling field; there is a lot of work to be done, many people motivated to help, and extensive funding available. So why is more not being done now? What is holding back the progress of the field, and how can we overcome these obstacles?

We identified four key solution areas for optimising the x-risk ecosystem, listed in order of priority (highest first):

  1. Optimise the senior talent pipeline

  2. Optimise the pipeline of projects with a high positive expected value (EV)

  3. Bridge the policy implementation gap

  4. Optimise the junior talent pipeline

Our analysis found that areas 1 and 2 currently limit areas 3 and 4; to make good progress on policy work and to onboard junior talent effectively, we need more senior talent and more projects with a high EV.

On top of this double bottleneck, the field of x-risk mitigation and some of its sub-fields, including AI safety, are still at risk of failure due to insufficient credibility. This is an important consideration when reasoning about how the field should make progress.

We recommend ongoing work on all four solution areas; balanced progress on all of these fronts is necessary for optimal long-term growth of the field. For the sake of brevity and prioritisation, this report focuses on solutions to the most immediate problems: the current lack of senior talent and of projects with a high positive EV.

The x-risk ecosystem is young, rapidly evolving, and has lots of room for optimisation. The most critical needs right now are to expedite the senior talent pipeline and the project pipeline. In both cases, the solution requires professionals with specialist skills to make careful implementation plans, in consultation with x-risk leaders and the broader x-risk community.

Conclusion for Solution Area 1 (optimise the senior talent pipeline)

The lack of senior talent is currently the central bottleneck in the x-risk ecosystem. If suitable resources are dedicated to this challenge right away, we suspect that senior talent will remain the central bottleneck for no more than 3 years, and possibly a lot less. Many of the most promising ways to expedite this pipeline are highly neglected and fairly tractable; in the longer term, the x-risk community should be able to attract all the senior talent it needs.

In the short term, senior hires are most likely to come from finding and onboarding people who already have the required skills, experience, credentials and intrinsic motivation to reduce x-risks. This should include investment in an experienced professional-services team to develop, and help implement, high-quality branding, marketing, recruitment, talent management and partnership strategies for x-risk organisations and the community as a whole.

Conclusion for Solution Area 2 (optimise the project pipeline)

Expediting the pipeline for projects with a high positive EV is a very important, neglected, and tractable way to help mitigate x-risks.

We can raise the x-risk community’s standards of project planning and evaluation by creating user-friendly guidelines for developing projects with a high positive EV. We can matchmake top project ideas with top talent, using a unified, searchable priority project database. Community members could then develop promising ideas from that database, assisted by resources, events, and a dedicated project incubation team. We can partially automate and scale the project pipeline with an online project incubation platform, complete with step-by-step project planning guidelines and a project self-scoring system.

The project self-scoring system and project database would provide a robust, data-rich foundation for ongoing optimisation of the project pipeline. Feedback from potential users may result in a very different implementation path from the one presented in this report.
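To make the idea of a self-scoring system more concrete, below is a minimal sketch of how a project entry might be scored against weighted criteria. The field names, scoring dimensions, weights and 0–5 scale are our own illustrative assumptions, not a design from the report.

```python
# Illustrative sketch only: the dimensions, weights, and 0-5 scale are
# assumptions made for this example, not the report's actual methodology.
from dataclasses import dataclass, field


@dataclass
class ProjectEntry:
    title: str
    summary: str
    # Hypothetical self-scored dimensions, each on a 0-5 scale
    scores: dict = field(default_factory=dict)


# Example weights a project incubation team might agree on
WEIGHTS = {
    "expected_impact": 0.40,
    "tractability": 0.25,
    "neglectedness": 0.20,
    "team_fit": 0.15,
}


def weighted_score(entry: ProjectEntry) -> float:
    """Combine the self-scored dimensions into a single priority score."""
    return sum(WEIGHTS[k] * entry.scores.get(k, 0) for k in WEIGHTS)


example = ProjectEntry(
    title="Policy sensitisation workshop series",
    summary="Workshops to familiarise policymakers with x-risk concepts.",
    scores={"expected_impact": 4, "tractability": 3,
            "neglectedness": 4, "team_fit": 2},
)
print(f"{example.title}: priority score {weighted_score(example):.2f}")  # 3.45
```

Even a simple structure like this would let the incubation team and funders sort and filter submissions consistently; the real system would of course need richer criteria, calibration, and review by x-risk experts.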

Who should read this report?

This report is for both x-risk experts and aspiring x-risk talent. Each solution area is particularly relevant to the audiences listed below:

Solution Area 1 (optimise the senior talent pipeline):

  • Leaders of x-risk and EA orgs

  • Senior operations talent (both working within and outside of x-risk orgs)

  • Growth strategy professionals (including talent aiming to work in x-risk)

Solution Area 2 (optimise the project pipeline):

  • Funders (anyone who makes or influences x-risk funding decisions)

  • Grant applicants (and prospective applicants) for x-risk work

  • Cause prioritisation practitioners (including relevant researchers, professionals, volunteers, students, and earn-to-givers)

Prioritisation Uncertainties

Confidence in our Findings

We are confident that the general analysis and conclusions in this report are largely correct. We are also very confident that the report misses many subtleties and nuances that will be important to clarify in order to get the details right when implementing solutions.

Choice of Solution Areas

We are aware that our four solution areas are not a comprehensive representation of the x-risk ecosystem, and we are unclear to what extent we may be missing other important factors.

We suspect that these four areas are sufficiently broad and overarching that most other important dimensions of optimising the x-risk ecosystem are implicitly entailed in their scope. Nonetheless, we may be neglecting some important dimensions due to our framing of the solution areas, for example:

  • Optimising location: developing x-risk hubs, geographic diversity

  • Risks of value-misaligned individuals becoming too powerful within the x-risk field

Causal Relationships Between Solution Areas

Our prioritisation tries to take into account the complex causal relationships between the solution areas. However, our analysis has probably missed some important feedback dynamics between them, especially as they play out over the longer term.

Overview of Bottlenecks

Credibility and Prestige

  • Lack of credibility is a major risk, perhaps the greatest risk, to reducing x-risk as much as possible; credibility is required to be taken seriously and to effect the policy changes necessary to mitigate x-risks.

  • The fields of x-risk and AI safety face some existing credibility challenges:

    • They are young fields, with (mainly) young researchers

    • The research does not fit neatly into pre-existing fields, so it is hard to validate

    • They seem counterintuitive to many; there is no precedent for x-risks

  • Lots of senior x-risk professionals are particularly concerned about improving the prestige of AI safety & x-risk research; these fields are currently not seen as very prestigious.

  • Lack of prestige is a barrier to potential senior research talent entering the field, as it is seen as a risky career move (especially for technical AI safety researchers, who could instead take highly paid capability research roles in the for-profit sector).

  • Producing excellent research is the key driver of credibility and prestige. This is a chicken-and-egg situation: increasing prestige requires more senior research talent, and attracting more senior research talent requires more prestige.

Talent Gap

  • The senior talent gap is the most central bottleneck in the x-risk ecosystem, and will probably continue to be for the next 2-3 years.

  • The senior talent gap is also a major bottleneck in the junior talent pipeline; senior staff are not yet in place to mentor and coordinate more junior staff to do work with a high positive EV.

  • To be employable in an x-risk mitigation role requires exceptionally good judgement and decision-making skills; many talented people who aspire to work in x-risk may not meet the very high bar to be employable in the field.

  • Senior research staff’s time is in very high demand on many fronts: doing research, supervising junior researchers, recruiting, working on grant proposals, engaging with policy-makers, keeping up to date with each other’s work, etc.

Funding Gap

  • The handful of larger x-risk organisations are more talent constrained than funding constrained, whereas small and startup projects (and individuals) are more funding constrained than talent constrained.

  • It’s very resource-intensive to evaluate small projects and individuals.

  • When small (and large) projects don’t get funding, it is typically not because funding is unavailable, but because funders are not confident that the projects have a high positive expected value.

  • To get funding, a project team must clearly signal to funders that their project has a high expected value. Many small projects with a high expected value are probably unfunded or underfunded, but it’s hard to tell which ones.

  • Funding and hiring decisions rely heavily on a network of trust. Unintended biases are built into this network, which tend to favour longer-standing community members (who have more established social networks and track records in the community) and to disadvantage relative newcomers. Talented people who are relatively new to the community are, to some degree, systematically overlooked.

Coordination Gap

  • Historically, there has been a lack of coordination between the core x-risk orgs. The handful of key x-risk orgs, and the staff within them, are now making an effort to communicate and coordinate more closely.

  • There are numerous public lists of project ideas produced by x-risk experts, and probably many more expert project ideas that have not been made public. Many of these projects may have a high expected value, but there is currently no organised way of matching projects with appropriate talent and mentorship.

  • Many established, well-funded organisations are doing good work in areas closely related to x-risks, but typically do not self-identify as trying to mitigate x-risks. There is some engagement between many of these ‘periphery’ organisations and the ‘core’ x-risk organisations, but probably not enough.

  • It’s unclear to what extent these periphery organisations can contribute effectively to mitigate x-risks, and the pipeline for enabling this is also unclear.

Policy Implementation Gap

  • To date, x-risk mitigation work has focused primarily on research. To actually reduce x-risks in practice, some of this research needs to be translated into policy (corporate and national, with global coordination).

  • Policy work may be extremely urgent; as more authoritative voices enter the conversation around AI, the AI safety community’s voice could get drowned out.

  • Even without clear policy recommendations, there is lots of room now for sensitisation efforts to warm policymakers up to the right way of thinking about x-risk and AI safety policy (e.g. helping policymakers better appreciate different risk levels and think on longer timelines).

  • Some sensitisation and policy implementation work is better left until later, but more should probably be done now, especially where relationships need to be built, which naturally takes time.

Recommended next steps

1. To onboard more senior talent:

Set up a 12-month project to enable leading x-risk organisations to establish robust internal systems to effectively attract and retain senior talent. This would include:

  1. HR strategy

  2. Brand strategy (for AI safety, x-risk and individual organisations’ brands)

  3. Partnerships strategy

2. To launch more projects with a high positive expected value (EV):

Set up a 12-month project to enable leading x-risk organisations to establish robust project incubation infrastructure:

  1. Write clear, detailed guidelines for what gives a project a high positive EV

  2. Set up a centralised database of the most promising project ideas

  3. Create and train a project incubation team

  4. Consult with x-risk funders and partners to matchmake top talent with top projects

This work would increase the rate at which new projects with high positive EV get started.
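As an illustration of steps 2 and 4 above, here is a toy sketch of how a centralised project database might be matched against available talent by comparing required and offered skills. The data model, skill tags, and ranking rule are hypothetical and purely for illustration, not a proposal from the report.

```python
# Toy sketch: matching project ideas with talent by skill overlap.
# The data model, skill tags, and ranking rule are illustrative assumptions.

PROJECTS = [
    {"title": "AI policy brief series", "skills_needed": {"policy", "writing"}},
    {"title": "X-risk grantmaking dashboard", "skills_needed": {"software", "data"}},
]

CANDIDATES = [
    {"name": "Candidate A", "skills": {"policy", "writing", "research"}},
    {"name": "Candidate B", "skills": {"software", "design"}},
]


def match(projects, candidates):
    """Return (project, candidate, shared skills) tuples ranked by overlap."""
    pairs = []
    for p in projects:
        for c in candidates:
            overlap = p["skills_needed"] & c["skills"]
            if overlap:
                pairs.append((p["title"], c["name"], sorted(overlap)))
    return sorted(pairs, key=lambda pair: len(pair[2]), reverse=True)


for title, name, shared in match(PROJECTS, CANDIDATES):
    print(f"{title} <- {name} (shared skills: {', '.join(shared)})")
```

In practice, matchmaking would depend on much more than skill tags (seniority, availability, funding, mentorship needs), but a shared database with even this much structure would make the consultation in step 4 far easier to run.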

What we are doing to make progress

As part of a team of five, we have established a group of EA-aligned professional consultants with the skills to implement the above recommendations. Together we created an EA consultancy called Good Growth (website).

We have already started helping some EA orgs. Over the long term, contingent on funding and expert feedback, we intend to conduct ongoing strategic research and consultancy work across all four solution areas to:

  • Prioritise the best ways to make progress in each area

  • Clarify which organisations and people are best-placed to take responsibility for what

  • Support implementation, through consultancy and outsourced solutions

  • Specify and track a series of lead-indicator metrics and lead ongoing optimisation

What can you do to help?

  • Funders: donate to commission the recommended next steps outlined above

  • X-risk experts: become an advisor to our team

  • X-risk organisation leaders: tell us about your pain points, needs and objectives

  • EA-aligned professionals: apply to join our team of consultants

  • X-risk workers and volunteers: read this report, discuss it with your colleagues, share your feedback in the comments, and propose concrete projects to partner on.

To explore any of these options further, please email us directly:

For a more detailed picture of our recommendations, see the full report here.