Buck, Max, and yourself are enthusiastic longtermists (…) it would seem to follow that you could (/should?) put the vast majority of the EAIF towards longtermist projects
In my view, being an enthusiastic longtermist is compatible with finding neartermist worldviews plausible and allocating some funding to them. See, e.g., Ajeya Cotra on the 80,000 Hours podcast. I personally feel excited to fund high-quality projects that develop or promote EA principles, whether they’re longtermist or not. (And Michelle suggested this as well.) For the EAIF, I would evaluate a project like HLI based on whether it seems like it overall furthers the EA project (e.g., makes EA thinking more sophisticated, leads to more people making important decisions according to EA principles, etc.).
Let me try again with a more specific case. Suppose you are choosing between projects A and B—perhaps they have each asked for $100k but you only have $100k left. Project A is only eligible for funding from EAIF—the other EA funds consider it outside their respective purviews. Project B is eligible for funding from one of the other EA funds but, as it happens, has applied to EAIF. Suppose, further, that you think B is more cost-effective at doing good.
FWIW, I think this example is pretty unrealistic, as I don’t think funding constraints will become relevant in this way. I also want to note that funding A violates some principles of donor coordination. In practice, I would probably recommend a split between A and B (recommending my ‘fair share’ to B, and the rest to A); I would probably coordinate this explicitly with the other funds. I would probably also try to refer both A and B to other funders to ensure both get fully funded.
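To make the ‘fair share’ idea concrete, here is a minimal sketch of one possible split rule, assuming (purely for illustration) that a fund’s fair share of B is B’s request divided equally among the funds whose scope covers it. The equal-split rule, the two-fund count, and the function name below are my own assumptions for this example, not an actual EA Funds policy.

```python
# Illustrative sketch only: one possible reading of a "fair share" split
# between project A (in scope only for EAIF) and project B (in scope for
# EAIF and, hypothetically, one other EA Fund). The equal-split rule and
# all figures are assumptions made up for this example, not actual policy.

def fair_share_split(budget, request_b, funds_eligible_for_b):
    """Recommend only this fund's equal share of B's request; send the rest to A."""
    share_to_b = min(budget, request_b / funds_eligible_for_b)
    remainder_to_a = budget - share_to_b
    return share_to_b, remainder_to_a

# The example above: a $100k budget, B asking for $100k, and (hypothetically)
# two funds whose scope covers B.
to_b, to_a = fair_share_split(budget=100_000, request_b=100_000, funds_eligible_for_b=2)
print(f"To B: ${to_b:,.0f}; to A: ${to_a:,.0f}")  # To B: $50,000; to A: $50,000
```

Of course, the reply above also mentions coordinating the split explicitly with the other funds and referring both projects to other funders, which a simple formula like this doesn’t capture.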
In my view, being an enthusiastic longtermist is compatible with finding neartermist worldviews plausible and allocating some funding to them
Thanks for this reply, which I found reassuring.
FWIW, I think this example is pretty unrealistic, as I don’t think funding constraints will become relevant in this way. I also want to note that funding A violates some principles of donor coordination
Okay, this is interesting and helpful to know. I’m trying to put my finger on the source of what seems to be a perspectival difference, and I wonder if this relates to the extent to which fund managers should be trying to instantiate donors’ wishes vs. fund managers allocating the money by their own lights of what’s best (i.e. as if it were just their money). I think this is probably a matter of degree, but I lean towards the former, not least for long-term concerns about reputation, integrity, and people just taking their money elsewhere.
To explain how this could lead us to different conclusions, if I believed I had been entrusted with money to give to A but not B, then I should give to A, even if I personally thought B was better.
I suspect you would agree with this in principle: you wouldn’t want an EA fund manager to recommend a grant clearly/wildly outside the scope of their fund even if they sincerely thought it was great, e.g. the Animal Welfare Fund recommending something that only benefitted humans, even if they thought it was more cost-effective than something animal-focused.
However, I imagine you would disagree that this is a problem in practice, because donors expect there to be some overlap between funds and, in any case, fund managers will not recommend things wildly outside their fund’s remit. (I am not claiming this is a problem in practice; my concern is that it may become one and I want to avoid that.)
I haven’t thought much about the topic, but all these concerns strike me as a reason to move towards a set of funds that are mutually exclusive and collectively exhaustive—this gives donors greater choice and minimises worries about permissible fund allocation.
the extent to which fund managers should be trying to instantiate donors’ wishes vs. fund managers allocating the money by their own lights of what’s best (i.e. as if it were just their money). I think this is probably a matter of degree, but I lean towards the former
This is a longer discussion, but I lean towards the latter, both because I think this will often lead to better decisions, and because many donors I’ve talked to actually want the fund managers to spend the money that way (the EA Funds pitch is “defer to experts” and donors want to go all in on that, with only minimal scope constraints).
To explain how this could lead us to different conclusions, if I believed I had been entrusted with money to give to A but not B, then I should give to A, even if I personally thought B was better.
I suspect you would agree with this in principle: you wouldn’t want an EA fund manager to recommend a grant clearly/wildly outside the scope of their fund even if they sincerely thought it was great, e.g. the Animal Welfare Fund recommending something that only benefitted humans, even if they thought it was more cost-effective than something animal-focused.
Yeah, I agree that all grants should be broadly in scope – thanks for clarifying.
I haven’t thought much about the topic, but all these concerns strike me as a reason to move towards a set of funds that are mutually exclusive and collectively exhaustive—this gives donors greater choice and minimises worries about permissible fund allocation.
Fund scope definitions are always a bit fuzzy, many grants don’t fit into a particular bucket very neatly, and there are lots of edge cases. So while I’m sympathetic to the idea in principle, I think it would be really hard to do in practice. See Max’s comment.