Program Associate at Open Philanthropy and chair of the Long-Term Future Fund. I spend half my time on AI and half my time on EA community-building. Any views I express on the forum are my own, not the views of my employer.
abergal
Hey Ryan:
- Thanks for flagging that the EA Funds form still says that the funds will definitely get back to applicants within 8 weeks; I think that’s real bad.
- I agree that it would be good to have a comprehensive plan. Personally, I think that if the LTFF fails to hire additional full-time staff in the next few months (in particular, a full-time chair), the fund should switch back to a round-based application system. But it’s ultimately not my call.
[Speaking for myself, not Open Philanthropy]
Empirically, I’ve observed some but not huge amounts of overlap between higher-rated applicants to the LTFF and applicants to Open Philanthropy’s programs; I’d estimate around 10%. And my guess is the “best historical grant opportunities” that Habryka is referring to[1] are largely in object-level AI safety work, which Open Philanthropy doesn’t have any open applications for right now (though it’s still funding individuals and research groups sourced through other means, and I think it may fund some of the MATS scholars in particular).
More broadly, many grantmakers at Open Philanthropy (including myself and Ajeya, who is currently the only person working full-time on technical AI safety grantmaking) are currently extremely capacity-constrained, so I wouldn’t make strong inferences that a given project isn’t cost-effective purely on the basis that Open Philanthropy hasn’t already funded it.
[1] I don’t know exactly which grants this refers to and haven’t looked at our current highest-rated grants in depth; I’m not intending to imply that I necessarily agree (or disagree) with Habryka’s statement.
[2] I’m planning on notifying relevant applicants this week (if/assuming we don’t get a sudden increase in donations).
Re: deemphasizing expertise:
I feel kind of confused about this. I agree in theory about the EV of marginal grants, but my own experience interacting with grant evaluations from people who I’ve felt were weaker has been that they sometimes favor rejecting a grant that I think would be really good, or miss a consideration that I think would make a grant pretty bad. Furthermore, it’s often hard to quickly tell if this is the case: they’ll give a stylized summary of what’s going on with the applicant, but I won’t know how much to trust that summary, so I feel compelled to read the full grant application (which is bad, because I already bottleneck the process so much).
I basically feel pretty confident that lowering the bar for fund managers would lead to worse grants by my lights, but I don’t think I have a great grasp on the full space of trade-offs (i.e. how much worse, exactly? is the decrease in quality worth it to be able to get through more grants in a timely way?); it’s totally plausible to me there would be some set-up that would overall be better than the current one.
Re: comparing to FTXFF and Manifund:
I think the pitch for being a regrantor for FTXFF or Manifund is pretty different from the one for the LTFF, both in terms of structure and raw number of hours.
As a regrantor, you get to opt-in to making the grants you’re most excited about on your own time, whereas on the LTFF, you’re responsible for spending a certain number of hours per week (historically, we’ve asked for a minimum of 5, though in practice people work less than that) evaluating incoming grant applications. (As a concrete instance of this, Adam Gleave is currently a regrantor at Manifund but left the LTFF a while ago—this isn’t to cast aspersions on Adam; just to illustrate that people have differing preferences between the two.)
I do think a possible restructure for the LTFF would be to switch to an opt-in regranting set-up, but that would be a pretty different way of operating. (I’d guess a bunch of good grants coming in through our application form would be missed by default, but it could still be overall preferable from the perspective of being more sustainable for fund managers.)
I’m commenting here to say that while I don’t plan to participate in public discussion of the FTX situation imminently (for similar reasons to the ones Holden gives above, though I don’t totally agree with some of Holden’s explanations, and personally put more weight on some considerations than others), I am planning to do so within the next several months. I’m sorry for how frustrating that is, though I endorse my choice.
The poster is currently a resident at OpenAI on the reinforcement learning team.
We’re currently planning on keeping it open at least for the next month, and we’ll provide at least a month of warning if we close it down.
Sorry about the delay on this answer. I do think it’s important that organizers genuinely care about the objectives of their group (which I think can be different from being altruistic, especially for non-effective altruism groups). I think you’re right that that’s worth listing in the must-have criteria, and I’ve added it now.
I assume the main reason this criterion wouldn’t be met is if someone wanted to do organizing work just for the money, which I think we should be trying hard to select against.
“even if the upside of them working out could really be quite valuable” is the part I disagree with most in your comment. (Again, speaking just for myself), I don’t think any of the projects I remember us rejecting seemed like they had a huge amount of upside; my overall calculus was something like “this doesn’t seem like it has big upside (because the policy asks don’t seem all that good), and also has some downside (because of person/project-specific factors)”. It would be nice if we did quantified risk analysis for all of our grant applications, but ultimately we have limited time, and I think it makes sense to focus attention on cases where it does seem like the upside is unusually high.
On potential risk factors:
I agree that (1) and (2) above are very unlikely for most grants (and are correlated with being unusually successful at getting things implemented).
I feel less in agreement about (3): my sense is that people who want to interact with policymakers will often succeed at taking up the attention of someone in the space, and the people interacting with them form impressions of them based on those interactions, whether or not they make progress on pushing that policy through.
I think (4) indeed isn’t specific to the policy space, but it is a real downside that I’ve observed affecting other EA projects. I don’t expect the main factor to be that there’s only one channel for interacting with policymakers, but rather that other long-term-focused actors will perceive the space to be taken, or will feel some sense of obligation to work with existing projects (or awkwardness around not doing so).
Caveating a lot of the above: as I said before, my views on specific grants have been informed heavily by others I’ve consulted, rather than coming purely from some inside view.
FWIW, I think this kind of questioning is fairly Habryka-specific and not really standard for our policy applicants; I think in many cases I wouldn’t expect that it would lead to productive discussions (and in fact could be counterproductive, in that it might put off potential allies who we might want to work with later).
I make the calls on who is the primary evaluator for which grants; as Habryka said, I think he is probably the most skeptical of policy work among people on the LTFF, and he hasn’t been the primary evaluator for almost any (maybe none?) of the policy-related grants we’ve had. In your case, I thought it was unusually likely that a discussion between you and Habryka would be productive and helpful for my evaluation of the grant (though I was interested primarily in different but related questions, not “whether policy work as a whole is competitive with other grants”), because I generally expect people more embedded in the community, and you (Sam) in particular, to be more open to pretty frank discussions about the effectiveness of particular plans, lines of work, etc., which I really appreciate.
Rebecca Kagan is currently working as a fund manager for us (sorry for the not-up-to-date webpage).
Hey, Sam – first, thanks for taking the time to write this post, and for running it by us. I’m a big fan of public criticism, and I think people are often extra-wary of criticizing funders publicly, relative to other actors in the space.
Some clarifications on what we have and haven’t funded:
I want to make a distinction between “grants that work on policy research” and “grants that interact with policymakers”.
I think our bar for projects that involve the latter is much higher than for projects that are just doing the former.
I think we regularly fund “grants that work on policy research” – e.g., we’ve funded the Centre for Governance of AI, and regularly fund individuals who are doing PhDs or otherwise working on AI governance research.
I think we’ve funded a very small number of grants that involve interactions with policymakers – I can think of three such grants in the last year, two of which were for new projects. (In one case, the grantee has requested that we not report the grant publicly).
Responding to the rest of the post:
I think it’s roughly correct that I have a pretty high bar for funding projects that interact with policymakers, and I endorse this policy. (I don’t want to speak for the Long-Term Future Fund as a whole, because it acts more like a collection of fund managers than a single entity, but I suspect many others on the fund also have a high bar, and that my opinion in particular has had a big influence on our past decisions.)
Some other things in your post that I think are roughly true:
Previous experience in policy has been an important factor in my evaluations of these grants, and all else equal I think I am much more likely to fund applicants who are more senior (though I think the “20 years experience” bar is too high).
There have been cases (more broadly than in policy) where we haven’t funded projects because an individual gave us information or impressions that led us to think the project would be riskier or less impactful than we initially believed, and we haven’t shared the individual’s identity or the information with the applicant, in order to preserve the individual’s privacy.
We have a higher bar for funding organizations than other projects, because they are more likely to stick around even if we decide they’re not worth funding in the future.
When evaluating the more borderline grants in this space, I often ask and rely heavily on the advice of others working in the policy space, weighted by how much I trust their judgment. I think this is basically a reasonable algorithm to follow, given that (a) they have a lot of context that I don’t, and (b) I think the downside risks of poorly-executed policy projects have spillover effects to other policy projects, which means that others in policy are genuine stakeholders in these decisions.
That being said, I think there’s a surprising amount of disagreement in what projects others in policy think are good, so I think the particular choice of advisors here makes a big difference.
I do think projects interacting with policymakers have substantial room for downside, including:
Pushing policies that are harmful
Making key issues partisan
Creating an impression (among policymakers or the broader world) that people who care about the long-term future are offputting, unrealistic, incompetent, or otherwise undesirable to work with
“Taking up the space” such that future actors who want to make long-term future-focused asks are encouraged or expected to work through or coordinate with the existing project
I suspect we also differ in our views of the upsides of some of this work: a lot of the projects we’ve rejected have wanted to do AI-focused policy work, and I tend to think that we don’t have very good concrete asks for policymakers in this space.
Here are answers to some other common questions about the University Organizer Fellowship that I received in office hours:
If I apply and get rejected, is there a “freezing period” where I can’t apply again?
We don’t have an official freezing period, but I think we generally won’t spend time reevaluating someone within 3 months of when they last applied, unless they give some indication on the application that something significant has changed in that time.
If you’re considering applying, I really encourage you not to wait: for the vast majority of people, I think it won’t make a difference whether you apply now or a month from now.
Should I have prior experience doing group organizing or running EA projects before applying?
No – I care primarily about the criteria outlined here. Prior experience can be a plus, but it’s definitely not necessary, and it’s generally not the main factor in deciding whether or not to fund someone.
I’m not sure that I agree with the premise of the question – I don’t think EA is trying all that hard to build a mainstream following (and I’m not sure that it should).
Interpreting this as “who is responsible for evaluating whether the Century Fellowship is a good use of time and money”: someone on our team will probably try to review how the program is going after it’s been running for a while longer. We will probably share that evaluation with Holden, co-CEO of Open Phil, as well as possibly with other advisors and relevant stakeholders. Holden approves longtermist Open Phil grants and broadly thinks about which grants are/aren’t the best uses of money.
Each application has a primary evaluator who is on our team (current evaluators: me, Bastian Stern, Eli Rose, Kasey Shibayama, and Claire Zabel). We also generally consult or rely heavily on assessments from references or advisors, e.g. other staff at Open Phil or organizations we work closely with, especially for applicants hoping to do work in domains where we have less expertise.
When we were originally thinking about the fellowship, one of the cases for impact was making community building a more viable career (hence the emphasis in this post), but it’s definitely intended more broadly for people working on the long-term future. I’m pretty unsure how the fellowship will shake out long-term in terms of community organizers vs researchers vs entrepreneurs – we’ve funded a mix so far (including several people who I’m not sure how to categorize, or who are still unsure about what they want to do).
(The cop-out answer is “I would like the truth-seeking organizers to be more ambitious, and the ambitious organizers to be more truth-seeking”.)
If I had to choose one, I think I’d go with truth-seeking. It doesn’t feel very close to me, especially among existing university group effective altruism-related organizers (maybe Claire disagrees), largely because I think there’s already been a big recent push towards ambition there, so people are generally already thinking pretty ambitiously. I feel differently about e.g. rationality local group organizers; I wish they would be more ambitious.
i)
“Full-time-equivalent” is intended to mean “if you were working full-time, this is how much funding you would receive”. The fellowship is intended for people working significantly less than full-time, and most of our grants have been for 15 hours per week of organizer time or less. I definitely don’t expect undergraduates to be organizing for 40 hours per week.
I think our page doesn’t make this clear enough early on; thanks for flagging it. I’ll make some changes to try to make this clearer.
I think anyone who’s doing student organizing for more than 5 hours per semester should strongly consider applying. I’m sympathetic to people feeling weird about this, but want to emphasize that I think people should consider applying even if they would have volunteered to do the same activities, for two reasons:
I think giving people funding generally causes them to do higher-quality work.
I think receiving funding as an organizer makes it clearer to others that we value this work and that you don’t have to make huge sacrifices to do it, which makes it more likely that other people consider student organizing work.
We’re up for funding any number of organizers per group; in the case you described, I would encourage all the organizers to apply. (We also let group leaders ask for funding for organizers working less than 10 hours per week in their own applications. If two of the organizers were working 10 hours per week or less, it might be faster for one organizer to just include them on their application.)
ii)
(Let me know if I’m answering your question here, it’s possible I’ve misunderstood it.)
I think it’s ultimately up to the person what they want to do. The fellowship will generally allow more freedom than funding for a specific project, comes with more benefits (see our program page), and would probably pay a higher rate in terms of personal compensation than many other funding opportunities would. It also has a much higher bar for funding than the one I would generally apply when funding specific projects.
In the application form, we ask people if they would be interested in receiving a separate grant for their project or plans if they weren’t offered the Century Fellowship– we’ve funded many applicants who were below the bar for the fellowship itself that way. So if someone’s interested in both, I think it makes sense to just apply to the Century Fellowship, and we can also consider them for alternative funding.
For both programs, we don’t have an explicit referral system, but we do take into account what references have to say about the applicant (if the applicant provides references).
[I work at Open Philanthropy] Hi Linda – thanks for flagging this. After checking internally, I’m not sure what project you’re referring to here; generally speaking, I agree with you/others in this thread that it’s not good to fully funge against incoming funds from other grantmakers in the space after agreeing to fund something, but I’d want to have more context on the specifics of the situation.
It totally makes sense that you don’t want to name the source or project, but if you or your source would feel comfortable sharing more information, feel free to DM me or ask your source to DM me (or use Open Phil’s anonymous feedback form). (And just to flag explicitly, we would/do really appreciate this kind of feedback.)