Program Associate at Open Philanthropy and chair of the Long-Term Future Fund. I spend half my time on AI and half my time on EA community-building. Any views I express on the forum are my own, not the views of my employer.
Hey Ryan:
- Thanks for flagging that the EA Funds form still says that the funds will definitely get back to applicants within 8 weeks; I think that’s really bad.
- I agree that it would be good to have a comprehensive plan. Personally, I think that if the LTFF fails to hire additional full-time staff in the next few months (in particular, a full-time chair), the fund should switch back to a round-based application system. But it’s ultimately not my call.
[Speaking for myself, not Open Philanthropy]
Empirically, I’ve observed some, but not huge, overlap between higher-rated applicants to the LTFF and applicants to Open Philanthropy’s programs; I’d estimate around 10%. And my guess is that the “best historical grant opportunities” Habryka is referring to[1] are largely in object-level AI safety work, which Open Philanthropy doesn’t have any open applications for right now (though it’s still funding individuals and research groups sourced through other means, and I think it may fund some of the MATS scholars in particular).
More broadly, many grantmakers at Open Philanthropy (including me, and Ajeya, who is currently the only person working full-time on technical AI safety grantmaking) are extremely capacity-constrained, so I wouldn’t infer that a given project isn’t cost-effective purely because Open Philanthropy hasn’t already funded it.
[1] I don’t know exactly which grants this refers to and haven’t looked at our current highest-rated grants in depth; I’m not intending to imply that I necessarily agree (or disagree) with Habryka’s statement.
I’m planning on notifying relevant applicants this week (assuming we don’t get a sudden increase in donations).
Re: deemphasizing expertise:
I feel kind of confused about this. I agree in theory re: the EV of marginal grants, but in my experience, grant evaluations from people I’ve felt were weaker sometimes favor rejecting a grant that I think would be really good, or miss a consideration that I think would make a grant pretty bad. Furthermore, it’s often hard to quickly tell whether this is the case: e.g., they’ll give a stylized summary of what’s going on with the applicant, but I won’t know how much to trust that summary, so I feel compelled to read the full grant application (which is bad, because I already bottleneck the process so much).
I feel pretty confident that lowering the bar for fund managers would lead to worse grants by my lights, but I don’t think I have a great grasp of the full space of trade-offs (how much worse, exactly? Is the decrease in quality worth being able to get through more grants in a timely way?). It’s totally plausible to me that some other set-up would be better overall than the current one.
Re: comparing to FTXFF and Manifund:
I think the pitch for being a regrantor for the FTXFF or Manifund is pretty different from the pitch for the LTFF, both in structure and in raw number of hours.
As a regrantor, you get to opt in to making the grants you’re most excited about on your own time, whereas on the LTFF, you’re responsible for spending a certain number of hours per week evaluating incoming grant applications (historically, we’ve asked for a minimum of 5, though in practice people work less than that). (As a concrete instance of this, Adam Gleave is currently a regrantor at Manifund but left the LTFF a while ago. This isn’t to cast aspersions on Adam; it’s just to illustrate that people have differing preferences between the two.)
I do think a possible restructure for the LTFF would be to switch to an opt-in regranting set-up, but that would be a pretty different way of operating. (I’d guess a bunch of good grants that come in through our application form would be missed by default, but it could still be preferable overall, from the perspective of being more sustainable for fund managers.)
I’m commenting here to say that while I don’t plan to participate in public discussion of the FTX situation imminently (for reasons similar to the ones Holden gives above, though I don’t totally agree with some of Holden’s explanations, and I personally put more weight on some considerations than others), I am planning to do so within the next several months. I’m sorry for how frustrating that is, though I endorse my choice.
The poster is currently a resident at OpenAI on the reinforcement learning team.
We’re currently planning on keeping it open for at least the next month, and we’ll provide at least a month’s warning if we close it down.
Sorry about the delay on this answer. I do think it’s important that organizers genuinely care about the objectives of their group (which I think can be different from being altruistic, especially for non-effective altruism groups). I think you’re right that that’s worth listing in the must-have criteria, and I’ve added it now.
I assume the main reason this criterion wouldn’t be met is if someone wanted to do organizing work just for the money, which I think we should be trying hard to select against.
“even if the upside of them working out could really be quite valuable” is the part of your comment I disagree with most. (Again, speaking just for myself.) I don’t think any of the projects I remember us rejecting had a huge amount of upside; my overall calculus was something like “this doesn’t seem to have big upside (because the policy asks don’t seem all that good), and it has some downside (because of person- and project-specific factors)”. It would be nice if we did quantified risk analysis for all of our grant applications, but ultimately we have limited time, and I think it makes sense to focus attention on cases where the upside does seem unusually high.
On potential risk factors:
I agree that (1) and (2) above are very unlikely for most grants (and are correlated with being unusually successful at getting things implemented).
I agree less about (3): my sense is that people who want to interact with policymakers will often succeed at taking up the attention of someone in the space, and the people interacting with them will form impressions of them based on those interactions, whether or not they make progress on pushing the policy through.
I think (4) indeed isn’t specific to the policy space, but it is a real downside that I’ve observed affecting other EA projects. I don’t expect the main factor to be that there’s only one channel for interacting with policymakers, but rather that other long-term-focused actors will perceive the space to be taken, or will feel some sense of obligation to work with existing projects (or awkwardness around not doing so).
Caveating a lot of the above: as I said before, my views on specific grants have been informed heavily by others I’ve consulted, rather than coming purely from some inside view.
FWIW, I think this kind of questioning is fairly Habryka-specific and not really standard for our policy applicants; in many cases I wouldn’t expect it to lead to productive discussions (and in fact it could be counterproductive, in that it might put off potential allies who we might want to work with later).
I make the calls on who is the primary evaluator for which grants. As Habryka said, he is probably the most skeptical of policy work among people on the LTFF, and he hasn’t been the primary evaluator for almost any (maybe none) of the policy-related grants we’ve had. In your case, I thought it was unusually likely that a discussion between you and Habryka would be productive and helpful for my evaluation of the grant (though I was primarily interested in different but related questions, not “whether policy work as a whole is competitive with other grants”). I generally expect people more embedded in the community (and you, Sam, in particular, which I really appreciate) to be more open to pretty frank discussions about the effectiveness of particular plans, lines of work, etc.
Rebecca Kagan is currently working as a fund manager for us (sorry for the out-of-date webpage).
[I work at Open Philanthropy] Hi Linda, thanks for flagging this. After checking internally, I’m not sure which project you’re referring to here. Generally speaking, I agree with you and others in this thread that it’s not good to fully funge against incoming funds from other grantmakers in the space after agreeing to fund something, but I’d want more context on the specifics of the situation.
It totally makes sense that you don’t want to name the source or project, but if you or your source would feel comfortable sharing more information, feel free to DM me or ask your source to DM me (or use Open Phil’s anonymous feedback form). (And just to flag explicitly: we really do appreciate this kind of feedback.)