Perhaps EA Funds shouldn’t focus on grantmaking as much: At a higher level, I’m not sure whether EA Funds’ strategy should be to build a grantmaking organization, or to become the #1 website on the internet for giving effectively, or something else.
I found this point interesting, and have a vague intuition that EA Funds (and especially the LTFF) are really trying to do two different things:
1. Being a default place for highly engaged EAs to donate: one that is willing to take on large risks, fund things that seem weird, and rely heavily on social connections, the community, and grantmaker intuitions
2. Being a default place to donate for risk-neutral donors who feel value-aligned with EA but don’t necessarily have high trust in the community
Having something doing (1) seems really valuable, and I would feel sad if the LTFF reined in the kinds of things it funded to have a better public image. But I also notice that, e.g., when giving donation advice to friends who broadly agree with EA ideas but aren’t really part of the community, I don’t feel comfortable recommending EA Funds. And I think a bunch of the grants would seem weird to anyone with moderately skeptical priors. (This is partially an opinion formed from the April 2019 grants, and I feel it less strongly about more recent grants.)
And it would be great to have a good default place to recommend my longtermist friends donate to, analogous to being able to point people to GiveWell’s top charities.
The obvious solution to this is to have two separate institutions, trying to do these two different things? But I’m not sure how workable that is here (and I’m not sure what a ‘longtermist fund that tries to be legible and public-facing, but without OpenPhil’s scale of money’ would actually look like!)
The obvious solution to this is to have two separate institutions, trying to do these two different things?
Do you mean this as distinct from Jonas’s suggestion of:
setting up a second, more ‘mainstream’ long-term future fund. That fund might give to most longtermist institutes and would have a lot of fungibility with Open Phil’s funding, but seems likely a better way to introduce interested donors to longtermism.
It seems to me that that could address this issue well. But maybe you think the other institution should have a substantially different structure, or be totally separate from EA Funds?
But I’m not sure how workable that is here (and I’m not sure what a ‘longtermist fund that tries to be legible and public-facing, but without OpenPhil’s scale of money’ would actually look like!)
FWIW, my initial reaction is “Seems like it should be very workable? Just mostly donate to organisations that have relatively easy-to-understand theories of change, have already developed a track record, and/or have mainstream signals of credibility or prestige (e.g. affiliations with impressive universities). E.g., Center for Health Security, FHI, GPI, maybe CSET, maybe 80,000 Hours, maybe specific programs from prominent non-EA think tanks.”
Do you think this is harder than I’m imagining? Or maybe that the ideal would be to give to different types of things?
Do you mean this as distinct from Jonas’s suggestion of:
Nah, I think Jonas’s suggestion would be a good implementation of what I’m suggesting. Though as part of this, I’d want the LTFF to be less public-facing and obvious: if someone googled ‘effective altruism longtermism donate’, I’d want them to be pointed to this new fund.
Hmm, I agree that a version of this fund could be implemented pretty easily, e.g. just make a list of the top 10 longtermist orgs and give 10% to each. My main concern is that it seems easy to do in a fairly disingenuous and manipulative way, if we expect all of its money to just funge against OpenPhil. And I’m not sure how to do it well and ethically.
I found this point interesting, and have a vague intuition that EA Funds (and especially the LTFF) are really trying to do two different things: [...]
This sounds right to me.
My main concern is that it seems easy to do in a fairly disingenuous and manipulative way, if we expect all of its money to just funge against OpenPhil. [...]
Yeah, we could simply explain transparently that it would funge with Open Phil’s longtermist budget.