Scattered first impressions:
I feel generally very positive about this update; I have personally felt confused about the scope of the EAIF when referring other people to it.
There are wide grey areas when attempting to delineate principles-first EA from cause-specific EA, and the effective giving examples in this post stand out to me as one thorny area. I think it may make sense not to fund an AI-specific or an animal-specific effective giving project through the EAIF (the LTFF and AWF are more appropriate), but an effective giving project that e.g. takes a longtermist approach or is focused on near-term human and nonhuman welfare seems different to me. Put differently: How do you think about projects that don’t cover all of EA, but also aren’t limited to one cause area?
For this out-of-scope example in particular, I’m not sure where I would route someone to pursue alternative funding in a timely fashion:

> Funding a very promising biology PhD student to attend a one-month program run by a prestigious US think tank to understand better how the intelligence community monitors various kinds of risk, such as biological threats ($6,000)
Maybe Lightspeed? But I worry there isn’t currently other coverage for funding needs of this sort.
I’m worried about people couching cause-specific projects as principles-first, but there is already a heavy tide pushing people to couch principles-first projects as x-risk-specific, so this might not be a concern.
I’m really happy to see you thinking about digital minds and (seemingly) how to grow s-risk projects.
Thanks for your comment. I’m not able to respond to the whole comment right now, but I think the bio career grant is squarely in the scope of the LTFF.
Makes sense, thank you! Maybe my follow-up questions would be: How confident would they need to be that they’d use the experience to work on biorisk vs. global health before applying to the LTFF? And if they were, say, 75:25 between the two, would the EAIF become the right choice, or what ratio would bring this grant into EAIF territory?
I think this is pretty unclear; we’d mostly be looking for people who are using EA principles to guide their career decision-making (scope sensitivity, impartiality, etc.) as opposed to thinking primarily about future cause areas. I agree it’s fuzzy, though I don’t want to share concrete criteria I’m excited about here, out of worries about goodharting.
Ultimately, we can transfer applications between funds, so it’s not a huge deal. I think at 75:25 they should probably apply to the EAIF (my very off-the-cuff view).
(A few more responses to your comment)
> There are wide grey areas when attempting to delineate principles-first EA from cause-specific EA, and the effective giving examples in this post stand out to me as one thorny area. I think it may make sense not to fund an AI-specific or an animal-specific effective giving project through the EAIF (the LTFF and AWF are more appropriate), but an effective giving project that e.g. takes a longtermist approach or is focused on near-term human and nonhuman welfare seems different to me. Put differently: How do you think about projects that don’t cover all of EA, but also aren’t limited to one cause area?

I think it’s fine for us to evaluate projects that don’t cover all of EA. I think the thing we want to avoid is funding things that are clearly focused on a specific cause area. We can always transfer grants to other funds in EA Funds if it’s a bit confusing for the applicant. In the examples that you gave, the LTFF would evaluate the AI-specific thing, but the EAIF is probably a better fit for the neartermist cross-cause fundraising.
> Maybe Lightspeed? But I worry there isn’t currently other coverage for funding needs of this sort.

I don’t think this is open right now, and it’s not clear when it will be open again.
> I’m worried about people couching cause-specific projects as principles-first, but there is already a heavy tide pushing people to couch principles-first projects as x-risk-specific, so this might not be a concern.

Yes, I’m worried about this too.