As mentioned in the original post, I’m not a Fund manager, but I sometimes advise the LTFF as part of my role as Head of EA Funds, and I’ve also been thinking about the longer-term strategy for EA Funds as a whole.
Some thoughts on this question:
LTFF strategy: There is no official 3-10 year vision or strategy for the LTFF yet, but I hope we will get there sometime soon. My own best guess for the LTFF’s vision (which I haven’t yet discussed with the LTFF) is: ‘Thoughtful people have the resources they need to successfully implement highly impactful projects to improve the long-term future.’ My best guess for the LTFF’s mission/strategy is ‘make judgment-driven grants to individuals and small organizations and proactively seed new longtermist projects.’ A plausible goal could be to allocate $15 million per year to effective longtermist projects by 2025 (where ‘effective’ means something like ‘significantly better than Open Phil’s last dollar, similar to the current quality of grants’).
Grantmaking capacity: To get there, we need 1) more grantmaking capacity (especially for active grantmaking), 2) more ideas that would be impactful if implemented well, and 3) more people capable of implementing these ideas. EA Funds can primarily improve the first factor, and I think this is the main limiting factor right now (though this could change within a few months). I am currently implementing the first iteration of a fund manager appointment process, where we invite potential grantmakers to apply as Fund managers, and we are also considering hiring a full-time grantmaking specialist. Hopefully, this will allow the LTFF to increase the number of grants it can evaluate, and its active grantmaking capacity in particular.
Types of grants: Areas in which I expect the LTFF to be able to substantially expand its current grantmaking include academic teaching buy-outs, scholarships and top-up funding for poorly paid academics, research assistants for academics, and proactively seeding new longtermist organizations and research projects (active grantmaking).
Structural changes: I think having multiple fund managers on a committee rather than a single decision-maker leads to improved diversity of networks and opinions, and increased robustness in decision-making. Increasing the number of committee members on a single committee leads to disproportionately larger coordination overhead, so the way to scale this might be to create multiple committees. I also think a committee model would benefit from having one or more full-time staff who can dedicate their full attention to EA Funds or the LTFF and collaborate with a committee of part-time/volunteer grantmakers, so I may want to look into hiring for such positions.
Legible longtermist fund: Donating to the LTFF currently requires a lot of trust in the Fund managers because many of the grants are speculative and hard to understand for people less involved in EA. While I think the current LTFF grants are plausibly the most effective use of longtermist funding, there is significant donor demand for a more legible longtermist donation option (i.e., one that isn’t subject to massive information asymmetry and thus doesn’t rely on trust as much). This may speak in favor of setting up a second, more ‘mainstream’ long-term future fund. That fund might give to most longtermist institutes and would have a lot of fungibility with Open Phil’s funding, but it would likely be a better way to introduce interested donors to longtermism.
Perhaps EA Funds shouldn’t focus on grantmaking as much: At a higher level, I’m not sure whether EA Funds’ strategy should be to build a grantmaking organization, or to become the #1 website on the internet for giving effectively, or something else. Regarding the LTFF and longtermism in particular, Open Phil has expanded its activities, Survival And Flourishing (SAF) has launched, and other donors and grantmakers (such as Longview Philanthropy) continue to be active in the area to some degree, which means that effective projects may get funded even if the LTFF doesn’t expand its grantmaking. It’s pretty plausible to me that EA Funds should pursue a strategy that’s less focused on grantmaking than what I wrote in the above paragraphs, which would mean that I might not dedicate as much attention to expanding the LTFF in the ways suggested above. I’m still thinking about this; the decision will likely depend on external feedback and experiments (e.g., how quickly we can make successful active grants).
If anyone has any feedback, thoughts, or questions about the above, I’d be interested in hearing from you (here or via PM).
Perhaps EA Funds shouldn’t focus on grantmaking as much: At a higher level, I’m not sure whether EA Funds’ strategy should be to build a grantmaking organization, or to become the #1 website on the internet for giving effectively, or something else
I found this point interesting, and have a vague intuition that EA Funds (and especially the LTFF) are really trying to do two different things:
(1) Having a default place for highly engaged EAs to donate that is willing to take on large risks, fund things that seem weird, and rely heavily on social connections, the community, and grantmaker intuitions.
(2) Having a default place to donate for risk-neutral donors who feel value-aligned with EA but don’t necessarily have high trust in the community.
Having something doing (1) seems really valuable, and I would feel sad if the LTFF reined back the kinds of things it funded to have a better public image. But I also notice that, e.g., when giving donation advice to friends who broadly agree with EA ideas but aren’t really part of the community, I don’t feel comfortable recommending EA Funds. And I think that a bunch of the grants would seem weird to anyone with moderately skeptical priors. (This is partially an opinion formed from the April 2019 grants, and I feel it less strongly for more recent grants.)
And it would be great to have a good default place to recommend that my longtermist friends donate to, analogous to being able to point people to GiveWell top charities.
The obvious solution to this is to have two separate institutions, trying to do these two different things? But I’m not sure how workable that is here (and I’m not sure what a ‘longtermist fund that tries to be legible and public-facing, but without Open Phil’s scale of money’ would actually look like!)
The obvious solution to this is to have two separate institutions, trying to do these two different things?
This sounds right to me.
Do you mean this as distinct from Jonas’s suggestion of:
setting up a second, more ‘mainstream’ long-term future fund. That fund might give to most longtermist institutes and would have a lot of fungibility with Open Phil’s funding, but it would likely be a better way to introduce interested donors to longtermism.
It seems to me that that could address this issue well. But maybe you think the other institution should differ more in structure, or be totally separate from EA Funds?
But I’m not sure how workable that is here (and I’m not sure what a ‘longtermist fund that tries to be legible and public-facing, but without Open Phil’s scale of money’ would actually look like!)
FWIW, my initial reaction is “Seems like it should be very workable? Just mostly donate to organisations that have relatively easy-to-understand theories of change, have already developed a track record, and/or have mainstream signals of credibility or prestige (e.g. affiliations with impressive universities). E.g., Center for Health Security, FHI, GPI, maybe CSET, maybe 80,000 Hours, maybe specific programs from prominent non-EA think tanks.”
Do you think this is harder than I’m imagining? Or maybe that the ideal would be to give to different types of things?
Do you mean this as distinct from Jonas’s suggestion of:
Nah, I think Jonas’s suggestion would be a good implementation of what I’m suggesting. Though as part of this, I’d want the LTFF to be less public-facing and obvious: if someone googled ‘effective altruism longtermism donate’, I’d want them to be pointed to this new fund.
Hmm, I agree that a version of this fund could be implemented pretty easily (e.g., just make a list of the top 10 longtermist orgs and give 10% to each). My main concern is that it seems easy to do in a fairly disingenuous and manipulative way, if we expect all of its money to just funge against Open Phil. And I’m not sure how to do it well and ethically.
Yeah, we could simply explain transparently that it would funge with Open Phil’s longtermist budget.