There are wide grey areas when attempting to delineate principles-first EA from cause-specific EA, and the effective giving examples in this post stand out to me as one thorny area. I think it may make sense not to fund an AI-specific or an animal-specific effective giving project through the EAIF (and that the LTFF and AWF are more appropriate), but an effective giving project that, e.g., takes a longtermist approach or focuses on near-term human and nonhuman welfare seems different to me. Put differently: how do you think about projects that don't cover all of EA but also aren't limited to one cause area?
I think it's fine for us to evaluate projects that don't cover all of EA. The thing we want to avoid is funding projects that are clearly focused on a specific cause area. We can always transfer grants to other funds within EA Funds if it's unclear to the applicant which fund to apply to. In the examples you gave, the LTFF would evaluate the AI-specific project, while the EAIF is probably a better fit for the neartermist cross-cause fundraising.
Maybe Lightspeed? But I worry there isn’t currently other coverage for funding needs of this sort.
I don’t think this is open right now, and it’s not clear when it will be open again.
I’m worried about people couching cause-specific projects as principles-first, but there is already a heavy tide pushing people to couch principles-first projects as x-risk-specific, so this might not be a concern.
(One more response to your comment)
Yes, I’m worried about this too.