The Long-Term Future Fund is looking for a full-time fund chair

EDIT 2023/11/21

We have finished evaluating the first batch of applications.

As we have not yet finished the hiring process, people are still welcome to apply. We are still very likely to skim future applications, though we make no firm commitment to do so.

Candidates are welcome to message me here or through other channels to flag a late application.

This is a linkpost for our job ad on Notion.

The Long-Term Future Fund is looking for a full-time fund chair. You can apply here.

Summary

  • The Long-Term Future Fund is seeking a new full-time fund chair to lead strategy, fundraising, management, and grant evaluation.

  • The chair will articulate a vision for the fund, coordinate stakeholders, improve processes, represent the fund, and make final decisions. [more]

  • The role offers a competitive salary with location flexibility, starting with a 3-month trial period. [more]

  • The fund aims to distribute $5M-$15M (last year, we distributed over $10M) annually to reduce existential risk, especially from AI. We expect the LTFF chair to contribute substantially to our strategy and fundraising efforts.

  • We have a strong preference for a full-time candidate, though we will seriously consider part-time candidates. [more]

  • Applications are open now through October 23rd.

    • The form might take up to an hour, including screening questions.

Why is LTFF looking for a fund chair?

The Long-Term Future Fund (LTFF) has had a significant impact on the long-termist and AI safety funding ecosystems:

  • We are one of the few significant sources of longtermist and AI safety funding, allowing (some) worldview diversification away from Open Phil.

  • In 2022, LTFF was operationally able to distribute $12 million to over 250 small projects. This entailed clearing nontrivial logistical hurdles: following nonprofit law across multiple countries, maintaining consistent operational capacity, and keeping a careful eye towards downside risk mitigation.

  • We have the largest “always open” application form, where anybody can apply, connecting funding with a wide range of excellent grantees who lack pre-existing networks.

  • We are one of the primary donation options for new donors in longtermism, with a relatively well-known brand and accessibility.

  • We believe we are able to fund a wide range of excellent small longtermist projects, and increase grant capacity in the ecosystem overall.

  • We have contributed to improvements in the epistemics and transparency around the longtermist funding ecosystem, e.g. by writing very detailed and frequently viewed posts about our past grants, concerns, and how we make decisions.

However, despite the successes, there have been some challenges to increased scale and other significant limitations:

  • We have significant strategic confusion as a fund, with uncertainty and disagreements on questions like:

    • How much should we really envision ourselves as “longtermist”, as opposed to just focused on near- and medium-term catastrophic risks?

    • Should we focus more on AI, vs be willing to fund a wide range of interventions to reduce other catastrophic risks like engineered pandemics?

    • Should we be willing to be more directly antagonistic towards the interests of the big AI labs?

    • How good is independent research and upskilling, vs funding established organizations and programs?

    • How important is good forecasting, relative to direct x-risk reduction interventions?

  • We do not get back to candidates as quickly as we would like. Our current median response time is 28 days, with a long tail of much slower replies. While we are likely still faster than many other funders, we believe that our response times inhibit both our and our grantees’ abilities to move nimbly, in addition to causing unnecessary stress for potential grantees.

  • We do not (yet) communicate much with our donors, nor do we proactively reach out to new ones. This likely limits our fundraising, particularly from donors who are less familiar with longtermism, existential security, and catastrophic AI risk.

  • Our part-time fund manager setup reduces reliability and consistency. While it has many advantages, relying entirely on part-time fund managers can make it difficult to plan and/or move quickly and deliberately.

  • We have limited capacity to help our grantees achieve their goals and push them to excel. Because of our capacity constraints and the apparent deluge of new applications, we never seem to have enough time to regularly provide detailed feedback or set up processes to reliably help our grantees excel (other than by providing money).

We think a good Long-Term Future Fund chair can help us remove or at least ameliorate many of the above challenges, while maintaining or enhancing the aspects of LTFF that are currently excellent.

Responsibilities and fit

As a fund chair, you will be responsible for:

  • Strategy

    • Articulating and shepherding a vision and strategy for LTFF going forwards.

    • Keeping the focus of LTFF on trying to cost-effectively solve important but hard problems like AI alignment; consistently being willing to pivot and adjust the strategy, processes, personnel, or other aspects of the fund to preserve a focus on longer-term impact.

    • Holding firm and staying laser-focused on what matters; pushing back against individual and institutional incentives that may nudge you towards myopic bureaucratic practices and short-term goals.

  • Fundraising

    • Creating a product that impact-oriented donors are excited to donate to.

    • Leading or helping with fundraising such that LTFF reliably has enough resources to fund high-impact projects.

    • Providing donors with a high level of transparency by leading the fund in an open, high-integrity manner.

  • Management

    • Being the face and core decision-maker at LTFF, speaking on behalf of LTFF and setting the relevant institutional policies.

    • Maintaining and improving relationships with key stakeholders: donors, grantees, advisors, and decision-makers at other foundations.

    • Coordinating and resolving disputes among LTFF’s internal members and close affiliates: different grantmakers, operations staff, EV, and future donor and/or grantee liaisons.

    • Creating good hiring and other institutional processes for integrity; guarding against fund managers or grantees abusing power they derive from LTFF, such as by wrongfully using collective resources for personal gain.

  • Grant evaluation

    • Ensuring that the grantee experience is as smooth and hassle-free as possible.

    • Managing other fund managers, doing grant evaluations yourself, and creating new processes to handle any increased evaluation load, so that grant evaluations are done quickly, consistently, and with good judgment, preserving high upside potential while mitigating downside risk.

I (Linch) think a well-run LTFF with a good chair could reliably distribute ~$5M-$15M yearly to high-impact projects going forwards. A good chair can potentially be responsible for ~$500K-$5M of value per year, including via both improved decision quality and better counterfactual fundraising from new donors.

You might be a good fit if you:

  • Have experience leading or managing a significant project

  • Have a fairly deep understanding of technical AI alignment, and/or other emerging scientific fields of interest

    • E.g. 3+ years of direct research experience as part of academia, in a research-focused corporate lab, or independent research

  • Have experience in fundraising, or are comfortable with the concept and execution of fundraising initiatives

  • Are considered, by people you know and respect, to have good judgment on difficult and subjective decisions

  • Have a strong internal sense of honesty and integrity

  • Are a reasonably good “peacemaker”: are frequently capable of finding mutually beneficial trades across a variety of stakeholders

  • Are excited about making difficult calls in a fast-moving domain

  • Are generally considered industrious and reliable

    • E.g. tend to hit deadlines and make meetings, have strong executive function, and rarely drop tasks

  • Are good at Fermi estimates and back-of-the-envelope calculations; regularly factor uncertainty into your numerical analyses

Evidence you might be a poor fit:

  • Have never been responsible for projects other than ones initiated by a boss or advisor.

  • Are an “ideas person” to the extent that you’ve never delved deeply into a technical or otherwise complex subject outside of formal schooling

  • Strongly prefer to be solely focused on a single project at a time.

  • Dread the idea of meetings

  • Find asking for money and/or being responsible for weighty decisions extremely stressful and/or repulsive

  • Really don’t like disappointing people

  • Find integrity hard and have repeatedly been criticized in the past for being bad at navigating boundaries, conflicts of interest, etc.

  • Frequently adopt a “my way or the highway” approach to disagreements

  • Have below 30th percentile conscientiousness on OCEAN (Big Five)

Nice-to-haves:

  • People management experience and ability

    • You will be navigating a number of internal stakeholders like fund managers, but the fund managers are relatively independent (and usually have other day jobs) so not being a great people manager is okay.

  • Communication ability

    • Being good at oral and written communication is a strong plus but others on the fund (e.g. myself) can largely cover for this specific deficiency

  • Substantial prior experience grantmaking

    • Having someone with significant prior grantmaking experience would de-risk the LTFF chair role a lot, but we’re tentatively willing to take on that risk for otherwise great candidates

  • Fundraising experience

  • Willingness to live in the Bay Area

  • A strong network in the longtermist and/or AI safety and/or biosecurity fields

Practical Details

Compensation: We aim for our salaries to be competitive with nonprofit counterfactuals like researchers at AI safety nonprofits or mid-level scientific program officers at large private foundations. We expect to pay between $120,000 and $240,000, depending on years of experience, location, and how remunerative your skill-sets are elsewhere. The higher end would be for people living in the Bay Area (as opposed to remote), people with many years of relevant experience, and people with deep technological knowledge and expertise. After the first year, you will be one of the main people setting the policies that will play a large role in determining your future salary.

Salaries will likely be prorated for temporary and part-time candidates. We are likely also willing to raise the salary for excellent candidates.

Flexibility: We have a strong preference for a full-time candidate. We will seriously consider part-time (ideally 20h/week or more; basically we want LTFF to be your main professional responsibility) candidates as well if we cannot find a good full-time fit.

Location: We have a strong preference for a fund chair who lives in the SF Bay Area. However, we will seriously consider remote candidates. Please note that we will probably sponsor visas for hires willing to relocate to the SF Bay Area.

Timeline and Process: We will evaluate applications on a rolling basis, with preference for applications submitted before October 23. We will offer interviews and (paid) trial tasks to approximately 30 candidates. Based on the selection process, we may then make an offer to you for a 3-month trial period as LTFF chair, with a negotiable starting date. If you do as well as expected or better in the trial period, we will make a permanent offer.

EDIT: We will close applications soon. If you are still interested, please apply by 2023/11/17 11:59 PM Pacific Time so that we can look at your application!

Benefits

(The benefits below are inherited from EV, our current fiscal sponsor. As LTFF fund chair, you will likely be the second or third employee at EA Funds, and can play a large role in setting the benefits package that works best for you and the organization.)

Our benefits reflect our belief in investing in our people to build the strongest possible team. We want everyone to be able to perform at their best as we provide world-class support and maximize our positive impact.

  • Prioritized health & wellbeing
    We provide private medical, vision and dental insurance, up to 3 weeks’ paid sick leave, and a mental health allowance of $6,000 each year.

  • Flexible working
    You’re generally free to set your own schedule (with some overlapping hours with colleagues).

  • Generous vacation
    We provide all team members with 25 days’ holiday each year, plus public holidays.

  • Professional development opportunities
    We offer a $6,000 allowance each year and build in opportunities for career growth through on-the-job learning, increasing responsibility, and role progression pathways.

  • Parental leave and support
    New parents have up to 14 weeks of fully paid leave and up to 52 weeks of leave in total. We also provide financial support to help parents meet child care needs.

  • Pension and income protection plans
    We offer a 10% employer / 0% employee 401(k) contribution, and income protection insurance.

  • Equipment to help your productivity
    We will pay for high-quality and ergonomic equipment (laptop, monitors, chair, etc.), in the office or at home if you work remotely.

  • Work environment with catered meals, gym, ergonomic equipment, and ample opportunities to cowork with members of other organizations working on the world’s most pressing problems

To apply:
Please apply here (we estimate the application takes about an hour). Applications will be evaluated on a rolling basis, with preference for applications submitted before October 23.

EA Funds serves a global community, and our team works with people and organizations all over the world. We’re committed to fostering a culture of inclusion, and we encourage individuals with diverse backgrounds and experiences to apply. We especially encourage applications from women, gender minorities, and people of color who are excited about contributing to our mission. We’re an equal-opportunity employer.

Appendix A: A Typical Day as LTFF Fund Chair

(Note that this is very hypothetical: we’ve never had a full-time fund chair before, and the schedule below assumes a more competent LTFF than currently exists).

9am-11am: You review your notes on a proposed more quantitative alternative to the way LTFF currently does grant evaluations. One of your contractors made a scrappy mathematical model that tentatively suggests the proposed evaluation method is slightly higher EV, but you’re concerned that it’s not worth the switching costs in practice. You also notice a hole in the model. After two hours, you’re still not sure, and you reluctantly decide that you’ll need to spend more time evaluating this proposal later.

11am-11:40am: An applicant with a fairly complicated AI safety research proposal emailed LTFF three days ago saying they need a more urgent response than they had indicated in their application. Their primary grant investigator is on vacation. You dedicated this time today to doing the grant evaluation yourself. You look through the PI’s notes and some of the applicant’s public outputs, and come to a tentative conclusion. You write down your notes, give the application a tentative score, and put it into Voting.

11:40am-12pm: You jot down some notes on how to change processes so that urgent applications are less likely to slip through the cracks going forwards.

1pm-2pm: You facilitate the weekly LTFF grantmaker meeting. You spend the first 20 minutes soliciting feedback on the newly proposed grant-evaluation alternative, and open the next 40 minutes to discussion of potentially controversial grants.

2pm-3pm: You answer and send off some emails and Slack threads (you usually do this after lunch but today the grantmaker meeting happened first). Among other things, you pass along a geoengineering grant application to a different grantmaking fund in your network that specializes in extreme climate risks.

3pm-4pm: You vote on grant applications. You read through the applications quickly, look through the notes from the respective primary investigators of each grant, and try your best to form an independent evaluation of the impact of each grant.

4pm-4:30pm: You take a short break. Sometimes you nap, but today you decide to argue with people on LessWrong instead.

4:30pm-5pm: You take a call with a whistleblower for one of your grantees. You ascertain the problem (alleged issues with managerial incompetence and academic integrity). You take notes, ask some questions, and confirm the relevant confidentiality policy (okay to share within LTFF and Comm Health, but please ask before sharing elsewhere).

5pm-5:30pm: You try to decide on next steps for the whistleblowing case. You also share your notes with Comm Health.

5:30pm-6pm: You do a daily review and decide what to work on the next day.

10:30pm-11pm: You take a call with a new earning-to-give donor from Eastern Europe. The donor is interested in making another donation, but wants this one earmarked specifically for AI governance. You explain your policy on this: it isn’t feasible for LTFF to accept donations narrowly targeted to specific focus areas without funging, and setting up an alternative system would not be worth the overhead. The donor is understanding and says she’ll circle back on whether it makes more sense to make that donation to LTFF anyway vs give directly to GovAI or a similar organization. You wish her luck.