Hello Michelle. Thanks for replying, but I was hoping you would engage more with the substance of my question—your comment doesn’t really give me any more information than I already had about what to expect.
Let me try again with a more specific case. Suppose you are choosing between projects A and B—perhaps they have each asked for $100k but you only have $100k left. Project A is only eligible for funding from EAIF—the other EA funds consider it outside their respective purviews. Project B is eligible for funding from one of the other EA funds, but so happens to have applied to EAIF. Suppose, further, you think B is more cost-effective at doing good.
What would you do? I can’t think of any other information you would need.
FWIW, I think you must pick A. I think we can assume donors expect the funds not to overlap (otherwise, why even have different ones?) and that they don’t want their money to go to another fund’s area (otherwise, that’s where they would have put it). Hence, picking B would be tantamount to a breach of trust.
(By the same token, if I give you £50, ask you to put it in the collection box for a guide dog charity, and you agree, I don’t think you should send the money to AMF, even if you think AMF is better. If you decide you want to spend my money on something other than what we agreed, you should tell me and offer to return the money.)
Buck, Max, and yourself are enthusiastic longtermists (…) it would seem to follow you could (/should?) put the vast majority of the EAIF towards longtermist projects
In my view, being an enthusiastic longtermist is compatible with finding neartermist worldviews plausible and allocating some funding to them. See, e.g., Ajeya Cotra on the 80,000 Hours podcast. I personally feel excited to fund high-quality projects that develop or promote EA principles, whether they’re longtermist or not. (And Michelle suggested this as well.) For the EAIF, I would evaluate a project like HLI based on whether it seems like it overall furthers the EA project (i.e., makes EA thinking more sophisticated, leads to more people making important decisions according to EA principles, etc.).
Let me try again with a more specific case. Suppose you are choosing between projects A and B—perhaps they have each asked for $100k but you only have $100k left. Project A is only eligible for funding from EAIF—the other EA funds consider it outside their respective purviews. Project B is eligible for funding from one of the other EA funds, but so happens to have applied to EAIF. Suppose, further, you think B is more cost-effective at doing good.
FWIW, I think this example is pretty unrealistic, as I don’t think funding constraints will become relevant in this way. I also want to note that funding A violates some principles of donor coordination. In practice, I would probably recommend a split between A and B (recommending my ‘fair share’ to B, and the rest to A); I would probably coordinate this explicitly with the other funds. I would probably also try to refer both A and B to other funders to ensure both get fully funded.
In my view, being an enthusiastic longtermist is compatible with finding neartermist worldviews plausible and allocating some funding to them
Thanks for this reply, which I found reassuring.
FWIW, I think this example is pretty unrealistic, as I don’t think funding constraints will become relevant in this way. I also want to note that funding A violates some principles of donor coordination
Okay, this is interesting and helpful to know. I’m trying to put my finger on the source of what seems to be a perspectival difference, and I wonder if this relates to the extent to which fund managers should be trying to instantiate donors’ wishes vs fund managers allocating the money by their own lights of what’s best (i.e. as if it were just their money). I think this is probably a matter of degree, but I lean towards the former, not least for long-term concerns about reputation, integrity, and people just taking their money elsewhere.
To explain how this could lead us to different conclusions, if I believed I had been entrusted with money to give to A but not B, then I should give to A, even if I personally thought B was better.
I suspect you would agree with this in principle: you wouldn’t want an EA fund manager to recommend a grant clearly/wildly outside the scope of their fund even if they sincerely thought it was great; e.g. you wouldn’t want the animal welfare fund to recommend something that only benefitted humans even if they thought it was more cost-effective than something animal-focused.
However, I imagine you would disagree that this is a problem in practice, because donors expect there to be some overlap between funds and, in any case, fund managers will not recommend things wildly outside their fund’s remit. (I am not claiming this is a problem in practice; my concern is that it may become one and I want to avoid that.)
I haven’t thought much about the topic, but all these concerns strike me as a reason to move towards a set of funds that are mutually exclusive and collectively exhaustive—this gives donors greater choice and minimises worries about permissible fund allocation.
the extent to which fund managers should be trying to instantiate donors’ wishes vs fund managers allocating the money by their own lights of what’s best (i.e. as if it were just their money). I think this is probably a matter of degree, but I lean towards the former
This is a longer discussion, but I lean towards the latter, both because I think this will often lead to better decisions, and because many donors I’ve talked to actually want the fund managers to spend the money that way (the EA Funds pitch is “defer to experts” and donors want to go all in on that, with only minimal scope constraints).
To explain how this could lead us to different conclusions, if I believed I had been entrusted with money to give to A but not B, then I should give to A, even if I personally thought B was better.
I suspect you would agree with this in principle: you wouldn’t want an EA fund manager to recommend a grant clearly/wildly outside the scope of their fund even if they sincerely thought it was great; e.g. you wouldn’t want the animal welfare fund to recommend something that only benefitted humans even if they thought it was more cost-effective than something animal-focused.
Yeah, I agree that all grants should be broadly in scope – thanks for clarifying.
I haven’t thought much about the topic, but all these concerns strike me as a reason to move towards a set of funds that are mutually exclusive and collectively exhaustive—this gives donors greater choice and minimises worries about permissible fund allocation.
Fund scope definitions are always a bit fuzzy, many grants don’t fit into a particular bucket very neatly, and there are lots of edge cases. So while I’m sympathetic to the idea in principle, I think it would be really hard to do in practice. See Max’s comment.
I think we can assume donors expect the funds not to overlap (otherwise, why even have different ones?)
I care about donor expectations, and so I’d be interested to learn how many donors have a preference for fund scopes to not overlap.
However, I’m not following the suggested reasoning for why we should expect such a preference to be common. I think people—including donors—choose between partly-but-not-fully overlapping bundles of goods all the time, and that there is nothing odd or bad about these choices, the preferences revealed by them, or the partial overlap. I might prefer ice cream vendor A over B even though there is overlap in flavours offered; I might prefer newspaper A over B even though there is overlap in topics covered (there might even be overlap in authors); I might prefer to give to nonprofit A over B even though there is overlap in the interventions they’re implementing or the countries they’re working in; I might prefer to vote for party A over B even though there is overlap between their platforms; and so on. I think all of this is extremely common, and that for a bunch of messy reasons it is not clearly the case that generally it would be best for the world or the donors/customers/voters if overlap was reduced to zero.
I rather think it is the other way around: the only thing that would be clearly odd is if scopes were not merely overlapping but identical. (And even then there could be other reasons why this makes sense, e.g., different criteria for making decisions within that scope.)
However, I’m not following the suggested reasoning for why we should expect such a preference to be common.
I definitely have the intuition the funds should be essentially non-overlapping. In the past I’ve given to the LTFF, and would be disappointed if it funded something that fit better within one of the other funds that I chose not to donate to.
With non-overlapping funds, donors can choose their allocation between the different areas (within the convex hull). If the funds overlap, donors can no longer donate to the extremal points. This is basically a tax on donors who, e.g., care about EA Meta but not Longtermist things.
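(A minimal numeric sketch of this ‘convex hull’ point. The per-dollar fund allocations below are invented purely for illustration, not based on any actual fund’s grants:)

```python
import numpy as np

# Invented per-dollar allocations across two areas: (meta, longtermist).
non_overlapping = np.array([
    [1.0, 0.0],  # hypothetical pure-meta fund
    [0.0, 1.0],  # hypothetical pure-longtermist fund
])
overlapping = np.array([
    [0.7, 0.3],  # 'meta' fund that also makes some longtermist grants
    [0.0, 1.0],  # longtermist fund
])

def donor_allocation(funds, weights):
    # A donor's resulting cause allocation is a convex combination
    # (weights sum to 1) of the funds' allocation vectors.
    return weights @ funds

# Non-overlapping funds let a donor reach the extremal point (1, 0):
print(donor_allocation(non_overlapping, np.array([1.0, 0.0])))  # [1. 0.]
# With overlap, even 100% to the 'meta' fund buys at most 70% meta --
# the 'tax' on donors who want meta but not longtermist things:
print(donor_allocation(overlapping, np.array([1.0, 0.0])))      # [0.7 0.3]
```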
Consider the ice-cream case. Most ice-cream places will offer Vanilla, Chocolate, Strawberry, Mint etc. If instead they only offered different blends, someone who hated strawberry—or was allergic to chocolate—would have little recourse. By offering each as a separate flavour, they accommodate purists and people who want a mixture. Better for the place to offer each as a standalone option, and let donors/customers combine. In fact, for most products it is possible to buy 100% of one thing if you so desire.
This approach is also common in finance; firms will offer e.g. a Tech Fund, a Healthcare Fund and so on, and let investors decide the relative ratio they want between them. This is also (part of) the reason for the decline of conglomerates—investors want to be able to make their own decisions about which business to invest in, not have it decided by managers.
I agree the finance example is useful. I would expect that in both our case and the finance case the best implementation isn’t actually mutually exclusive funds, but funds with clear and explicit ‘central cases’ and assumptions, plus some sensible (and preferably explicit) heuristics to be used across funds like ‘try to avoid multiple funds investing too much in the same thing’.
That seems to be both because there will (as Max suggests) often be no fact of the matter as to which fund some particular company fits in, and also because the thing you care about when investing in a financial fund is in large part profit. In the case of the healthcare and tech funds, there will be clear overlaps: firms using tech to improve healthcare. If I were investing in one or other of these funds, I would be less interested in whether some particular company is more exactly described as a ‘healthcare’ or ‘tech’ company, and care more about whether they seem to be a good example of the thing I invested in.

E.g. if I invested in a tech fund, presumably I think something along the lines of ‘technological advancements are likely to drive profit’ and ‘there are low-hanging fruit in terms of tech innovations to be applied to market problems’. If some company is doing good tech innovation and making a profit in the healthcare space, I’d be keen for the tech fund to invest in it. I wouldn’t be that fussed about whether the healthcare fund also invested in it. Though if the healthcare fund had invested substantially in the company, presumably the price would go up and it would look like a less good option for the tech fund and, by extension, for me.

I’d expect it to be best for EA Funds to work similarly: set clear expectations around the kinds of thing each fund aims for and what assumptions it makes, and then worry about overlap predominantly insofar as there are large potential donations which aren’t being made because some specific fund is missing (which might be a subset of a current fund, like ‘non-longtermist EA infrastructure’).
I would guess that EA Funds isn’t a good option for people with very granular views about how best to do good. Analogously, if I had a lot of views about the best ways for technology companies to make a profit (for example, that technology in healthcare was a dead end), I’d often do better to fund individual companies than broad funds.
In case it doesn’t go without saying, I think it’s extremely important to use money in accordance with the (communicated) intentions with which it was solicited. It seems very important to me that EAs act with integrity and are considerate of others.
Thanks for sharing your intuition, which of course moves me toward preferences for less/no overlap being common.
I’m probably even more moved by your comparison to finance because I think it’s a better analogy to EA Funds than the analogies I used in my previous comments.
However, I still maintain that there is no strong reason to think that zero overlap is optimal in some sense, or would widely be preferred. I think the situation is roughly:
There are first-principles arguments (e.g., your ‘convex hull’ argument) for why, under certain assumptions, zero overlap allows for optimal satisfaction of donor preferences.
(Though note that, given standard arguments that splitting small donations is suboptimal (at least at first glance and under ‘naive’ assumptions), I think it’s at least somewhat unclear how significant the ‘convex hull’ point is in practice. I think there is some tension here: the loss of the extremal points seems most problematic from a ‘maximizing’ perspective, while donor preferences to split their giving across causes are better construed as the result of “intra-personal bargaining”, and it’s less clear to me how much that decision/allocation process cares about the ‘efficiency loss’ from moving away from the extremal points.)
However, reality is more messy, and I would guess that usually the optimum is somewhere on the spectrum between zero and full overlap, and that this differs significantly on a case-by-case basis. There are things pushing toward zero overlap, and others pushing toward more overlap (see e.g. the examples given for EA Funds below), and they need to be weighed up. It depends on things like transaction costs, principal-agent problems, the shape of market participants’ utility functions, etc.
Here are some reasons that might push toward more overlap for EA Funds:
Efficiency, transaction/communication cost, etc., as mentioned by Jonas.
My view is that ‘zero overlap’ just fails to carve reality at its joints, and significantly so.
I think there will be grants that seem very valuable from, e.g., both a ‘meta’ and a ‘global health’ perspective, and that it would be a judgment call whether the grant fits ‘better’ with the scope of the GHDF or the EAIF. Examples might be pre-cause-neutral GWWC, a fundraising org covering multiple causes but de facto generating 90% of its donations in global health, or an organization that does research on both meta and global health but doesn’t want to apply for ‘restricted’ grants.
If funders adopted a ‘zero overlap’ policy, grantees might worry that they will only be assessed along one dimension of their impact. So, e.g., an organization that does research on several causes might feel incentivized to split up, or to apply for ‘restricted’ grants. However, this can incur efficiency losses because sometimes it would in fact be better to have less internal separation between activities in different causes than such a funding landscape would require.
More generally, it seems to me that incomplete contracting is everywhere.
If I as a donor made an ex-ante decision that I want my donations to go to cause X but not Y, I think there realistically would be ‘borderline cases’ I simply did not anticipate when making that decision. Even if I wanted, I probably could not tell EA Funds which things I do and don’t want to give to based on their scope, and neither could EA Funds get such a fine-grained preference out of me if they asked me.
Similarly, when EA Funds provides funding to a grantee, we cannot anticipate all the concrete activities the grantee might want to undertake. The conditions implied by the grant application and any restrictions attached to the grant just aren’t fine-grained enough. This is particularly acute for grants that support someone’s career – which might ultimately go in a different direction than anticipated. More broadly, a grantee will sometimes want to fund activities that neither of us had previously thought about, and will need to judge whether they’re covered by the ‘intentions’ or ‘spirit’ of the grant; this can include activities that would be more clearly in another fund’s scope.
To drive home how strongly I feel about the import of the previous points, my immediate reaction to hearing “care about EA Meta but not Longtermist things” is literally “I have no idea what that’s supposed to even mean”. When I think a bit about it, I can come up with a somewhat coherent and sensible-seeming scope of “longtermist but not meta”, but I have a harder time making sense of “meta but not longtermist” as a reasonable scope. I think if donors wanted everything that’s longtermist (whether meta or not) to be handled by the LTFF, then we should clarify the LTFF’s scope, remove the EAIF, and introduce a “non-longtermist EA fund” or something like that instead—as opposed to having an EAIF that funds things that overlap with some object-level cause areas but not others.
Some concrete examples:
Is 80k meta or longtermist? They have been funded by the EAIF before, but my understanding is that their organizational position is pro-longtermism, that many if not most of their staff are longtermist, and that this has significant implications for what they do (e.g., which sorts of people to advise, which sorts of career profiles to write, etc.).
What about Animal Advocacy Careers? If they wanted funding from EA Funds, should they get it from the AWF or the EAIF?
What about local EA groups? Do we have to review their activities and materials to understand which fund they should be funded by? E.g., I’ve heard that EA NYC is unusually focused on animal welfare (idk how strongly, and if this is still true), and I’m aware of other groups that seem pretty longtermist. Should such groups then not be funded by the EAIF? Should groups with activities in several cause areas and worldviews be co-funded by three or more funds, creating significant overhead?
What about CFAR? Longtermist? Meta?
--
Taking a step back, I think what this highlights is that feedback like this comment may well move me toward “be willing to incur a bit more communication cost to discuss where a grant fits best, and to move grants that arguably fit somewhat better with a different fund”. But (i) I think where I’d end up is still a far cry from ‘zero overlap’, and (ii) I think that even if I made a good-faith effort, it’s unclear if I would better fulfil any particular donor’s preference because, due to the “fund scopes don’t carve reality at its joints” point, donors and I might make different judgment calls on ‘where some grant fits best’.
In addition, I expect that different donors would disagree with each other about how to delineate scopes, which grants fit best where, etc.
This also means it would probably help me more in satisfying donor preferences if I got specific feedback like “I feel grant X would have fitted better with fund Y”, as opposed to more abstract preferences about the amount of overlap in fund scope. (Though I recognize that I’m kind of guilty of having started/fueled the discussion in more abstract terms.)
However, taking yet another step back, I think that when deciding on the best strategy for EA Funds/the EAIF going forward, there are stakeholders besides the donors whose interests matter as well: e.g., grantees, fund managers, and beneficiaries. As implied by some of my points above, I think there can be some tensions between these interests. How to navigate this is messy, and depends crucially on the answer to this question, among other things.
My impression is that when the goal is to “maximize impact” – even within a certain cause or by the lights of a certain worldview – we’re less bottlenecked by funding than by high-quality applications, highly capable people ‘matched’ with highly valuable projects they’re a good fit for, etc. This makes me suspect that the optimal strategy would put somewhat less weight on maximally satisfying donor preferences – when they’re in tension with other desiderata – than might be the case in some other nonprofit contexts. So even if we got a lot of feedback along the lines of “I feel grant X would have fitted better with fund Y”, I’m not sure how much that would move the EAIF’s strategy going forward.
(Note that the above is about what ‘products’ to offer donors going forward. Separately from that, I think it’s of course very important to not be misleading, and to make a good-faith effort to use past donations in a way that is consistent with what we told them we’d do at the time. And these demands are ‘quasi-deontological’ and can’t be easily sacrificed for the sake of better meeting other stakeholders’ interests.)
Nothing I have seen makes me think the EAIF should change its decision criteria. It seems to be working very well and good stuff is getting funded. So don’t change that to address a comparatively minor issue like this; that would be throwing the baby out with the bathwater!
-- If you showed me the list here and said ‘Which EA Fund should fund each of these?’, I would have put the Lohmar and the CLTR grants (which both look like very good grants, and I’m glad they are getting funded) in the longtermist fund. Based on your comments above you might have made the same call as well.
From an outside view the actual cost of making the grants from the pot of another fund seems incredibly small. At minimum it could just be having someone look over the final decisions to see if any feel like they belong in a different fund, then quickly double-checking with the other fund’s grantmakers that they have no strong objections, and then granting the money from a different pot. (You could even do that after the decision to grant has been communicated to applicants; no reason to hold things up, since if the second fund objects the grant can still be given by the first fund.)
And then all those dogmatic donors to the EAIF who don’t like longtermist stuff can go to bed happy, and all those dogmatic donors to the LTFF who don’t like meta stuff can go to bed happy, and everyone feels like their money is going where they expect it to go, etc. Which does matter a little bit, because as a donor you really need to trust that the money is going where it says on the tin and not to something else.
(But sure, if the admin costs here are actually really high or something, then it’s not a big deal; it matters a little bit to some donors but is not the most important thing to get right.)
From an outside view the actual cost of making the grants from the pot of another fund seems incredibly small. At minimum it could just be having someone look over the final decisions to see if any feel like they belong in a different fund, then quickly double-checking with the other fund’s grantmakers that they have no strong objections, and then granting the money from a different pot. (You could even do that after the decision to grant has been communicated to applicants; no reason to hold things up, since if the second fund objects the grant can still be given by the first fund.)
Thank you for this suggestion. It makes sense to me that this is how the situation looks from the outside.
I’ll think about the general issue and suggestions like this one a bit more, but currently don’t expect large changes to how we operate. I do think this might mean that in future rounds there may be a similar fraction of grants that some donors perceive to better fit with another fund. I acknowledge that this is not ideal, but I currently expect it will seem best after considering the cost and benefits of alternatives.
So please view the following points as my attempt to explain why I don’t expect to adopt what may sound like a good suggestion, while still being appreciative of the feedback and suggestions.
I think based on my EA Funds experience so far, I’m less optimistic that the cost would be incredibly small. E.g., I would expect less correlation between “EAIF managers think something is good to fund from a longtermist perspective” and “LTFF managers think something is good to fund from a longtermist perspective” (and vice versa for ‘meta’ grants) than you seem to expect.
This is because grantmaking decisions in these areas rely a lot on judgment calls that different people might make differently even if they’re aligned on broad “EA principles” and other fundamental views. I have this view both because of some cases I’ve seen where we actually discussed (aspects of) grants across both the EAIF and LTFF managers, and because within the EAIF committee large disagreements are not uncommon (and I have no reason to believe that disagreements would be smaller between LTFF and EAIF managers than just within EAIF managers).
To be clear, I would expect decision-relevant disagreements for a minority of grants—but not a sufficiently clear minority that I’d be comfortable acting on “the other fund is going to make this grant” as a default assumption.
Your suggestion of retaining the option to make the grant through the ‘original’ fund would help with this, but not with the two following points.
I think another issue is duplication of time cost. If the LTFF told me “here is a grant we want to make, but we think it fits better for the EAIF—can you fund it?”, then I would basically always want to have a look at it. In maybe 50% [?, unsure] of cases this would only take me like 10 minutes, though the real attention + time cost would be higher. In the other 50% of cases I would want to invest at least another hour—and sometimes significantly more—assessing the grant myself. E.g., I might want to talk to the grantee myself or solicit additional references. This is because I expect that donors and grantees would hold me accountable for that decision, and I’d feel uncomfortable saying “I don’t really have an independent opinion on this grant, we just made it b/c it was recommended by the LTFF”.
(In general, I worry that “quickly double-checking” something is close to impossible between two groups of 4 or so people, all of whom are very opinionated and can’t necessarily predict each other’s views very well, are in parallel juggling dozens of grant assessments, and most of whom are very time-constrained and are doing all of this next to their main jobs.)
A third issue is that increasing the delay between the time of a grant application and the time of a grant payout is somewhat costly. So, e.g., inserting another ‘review allocation of grants to funds’ step somewhere would somewhat help with the time & attention cost by bundling all scoping decisions together; but it would also mean a delay of potentially a few days or even more, given fund managers’ constrained availabilities. This is not clearly prohibitive, but it is significant, since I think that some grantees care about the time window between application and potential payout being short.
However, there may be some cases where grants could be quickly transferred (e.g., if for some reason managers from different funds had been involved in a discussion anyway), or there may be other, less costly processes for how to organize transfers. This is definitely something I will be paying a bit more attention to going forward, but for the reasons explained in this and other comments I currently don’t expect significant changes to how we operate.
Thank you so much for your thoughtful and considered reply.
I think based on my EA Funds experience so far, I’m less optimistic that the cost would be incredibly small. E.g., I would expect less correlation between “EAIF managers think something is good to fund from a longtermist perspective” and “LTFF managers think something is good to fund from a longtermist perspective” (and vice versa for ‘meta’ grants) than you seem to expect.
This is because grantmaking decisions in these areas rely a lot on judgment calls that different people might make differently even if they’re aligned on broad “EA principles” and other fundamental views. I have this view both because of some cases I’ve seen where we actually discussed (aspects of) grants across both the EAIF and LTFF managers, and because within the EAIF committee large disagreements are not uncommon (and I have no reason to believe that disagreements would be smaller between LTFF and EAIF managers than just within EAIF managers).
Sorry to change topic, but this is super fascinating and more interesting to me than questions of fund admin time (however much I like discussing organisational design, I am happy to defer to you / Jonas / etc. on whether the admin cost is too high – ultimately only you know that).
Why would there be so much disagreement (so much that you would routinely want to veto each other’s decisions if you had the option)? It seems plausible that if there are such levels of disagreement, then maybe:
One fund is making quite poor decisions AND/OR
There is significant potential to use consensus decisions making tools as a large group to improve decision quality AND/OR
There are some particularly interesting lessons to be learned by identifying the cruxes of these disagreements.
Just curious and typing up my thoughts. Not expecting good answers to this.
I think all funds are generally making good decisions.
I think a lot of the effect is just that making these decisions is hard, and so that variance between decision-makers is to some extent unavoidable. I think some of the reasons are quite similar to why, e.g., hiring decisions, predicting startup success, high-level business strategy, science funding decisions, or policy decisions are typically considered to be hard/unreliable. Especially for longtermist grants, on top of this we have issues around cluelessness, potentially missing crucial considerations, sign uncertainty, etc.
I think you are correct that both of the following are true:
There is potential for improving decision quality by spending time discussing diverging views, by improving the way we aggregate opinions to the extent they still differ after however much discussion is possible, and maybe by using specific ‘decision-making tools’ (e.g., certain forms of structured discussion + voting).
There are interesting lessons to be learned by identifying cruxes. Some of these lessons might directly improve future decisions, others might be valuable for other reasons—e.g., generating active grantmaking ideas or cruxes/results being shareable and thereby being a tiny bit epistemically helpful to many people.
I think a significant issue is that both of these cost time (both identifying how to improve in these areas and then implementing the improvements), and time is a very scarce resource for fund managers.
I don’t think it’s obvious whether at the margin the EAIF committee should spend more or less time to get more or fewer benefits in these areas. Hopefully this means we’re not too far away from the optimum.
I think there are different views on this within EA Funds (both within the EAIF committee, and potentially between the average view of the EAIF committee and the average view of the LTFF committee—or at least this is suggested by revealed preferences as my loose impression is that LTFF fund managers spend more time in discussions with each other). Personally, I actually lean toward spending less time and less aggregation of opinions across fund managers—but I think currently this view isn’t sufficiently widely shared that I expect it to be reflected in how we’re going to make decisions in the future.
But I also feel a bit confused because some people (e.g., some LTFF fund managers, Jonas) have told me that spending more time discussing disagreements seemed really helpful to them, while I feel like my experience with this and my inside-view prediction of what spending more time on discussions would look like make me expect less value. I don’t really know why that is—it could be that I’m just bad at getting value out of discussions, or updating my views, or something like that.
I think a significant issue is that both of these cost time
I am always amazed at how much you fund managers all do given this isn’t your paid job!
I don’t think it’s obvious whether at the margin the EAIF committee should spend more or less time to get more or fewer benefits in these areas
Fair enough. FWIW my general approach to stuff like this is not to aim for perfection but to aim for each iteration/round to be a little bit better than the last.
… it could be that I’m just bad at getting value out of discussions, or updating my views, or something like that.
That is possible. But also possible that you are particularly smart and have well thought-out views and people learn more from talking to you than you do from talking to them! (And/or just that everyone is different and different ways of learning work for different people)
I think based on my EA Funds experience so far, I’m less optimistic that the cost would be incredibly small. E.g., I would expect less correlation between “EAIF managers think something is good to fund from a longtermist perspective” and “LTFF managers think something is good to fund from a longtermist perspective” (and vice versa for ‘meta’ grants) than you seem to expect.
This is because grantmaking decisions in these areas rely a lot on judgment calls that different people might make differently even if they’re aligned on broad “EA principles” and other fundamental views. I have this view both because of some cases I’ve seen where we actually discussed (aspects of) grants across both the EAIF and LTFF managers, and because within the EAIF committee large disagreements are not uncommon (and I have no reason to believe that disagreements would be smaller between LTFF and EAIF managers than just within EAIF managers).
Thanks for writing up this detailed response. I agree with your intuition here that ‘review, refer, and review again’ could be quite time consuming.
However, I think it’s worth considering why this is the case. Do we think that the EAIF evaluators are similarly qualified to judge primarily-longtermist activities as the LTFF people, and that the difference in views is basically noise? If so, it seems plausible to me that the EAIF evaluators should be able to unilaterally make disbursements from the LTFF’s money. In this setup, the specific fund you apply to is really about your choice of evaluator, not about your choice of donor, and the fund you donate to is about your choice of cause area, not your choice of evaluator-delegate.
In contrast, if the EAIF people are not as qualified to judge primarily-longtermist (or primarily animal rights, etc.) projects as the specialised funds’ evaluators, then they should probably refer the application early on in the process, prior to doing detailed due diligence etc.
If you showed me the list here and said ‘Which EA Fund should fund each of these?’, I would have put the Lohmar and the CLTR grants (which both look like very good grants, and I’m glad they are getting funded) in the longtermist fund. Based on your comments above you might have made the same call as well.
Thank you for sharing—as I mentioned I find this concrete feedback spelled out in terms of particular grants particularly useful.
[ETA: btw I do think part of the issue here is an “object-level” disagreement about where the grants best fit—personally, I definitely see why among the grants we’ve made they are among the ones that seem ‘closest’ to the LTFF’s scope; but I don’t personally view them as clearly being more in scope for the LTFF than for the EAIF.]
[ETA: btw I do think part of the issue here is an “object-level” disagreement about where the grants best fit—personally, I definitely see why among the grants we’ve made they are among the ones that seem ‘closest’ to the LTFF’s scope; but I don’t personally view them as clearly being more in scope for the LTFF than for the EAIF.]
Thank you Max. I guess the interesting question then is why we think different things. Is it just a natural case of different people thinking differently, or have I made a mistake, or is there some way the funds could communicate better?
One way to consider this might be to look at just the basic info / fund scope on both the EAIF and LTFF pages and ask: “if the man on the Clapham omnibus read only this information and the descriptions of these funds, where would he think these grants sit?”
A further point is donor coordination / moral trade / fair-share giving. Treating it as a tax (as Larks suggests) could often amount to defecting in an iterated prisoner’s dilemma between donors who care about different causes. E.g., if the EAIF funded only one org, which raised $0.90 for MIRI, $0.90 for AMF, and $0.90 for GFI for every dollar spent, this approach would lead to it not getting funded, even though co-funding with donors who care about other cause areas would be a substantially better approach.
You might respond that there’s no easy way to verify whether others are cooperating. I might respond that you can verify how much money the fund gets in total and can ask EA Funds about the funding sources. (Also, I think that acausal cooperation works in practice, though perhaps the number of donors who think about it in this way is too small for it to work here.)
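(A minimal sketch to make the arithmetic of that example explicit. The assumption that each single-cause donor values only the dollars flowing to their own cause, at face value, is mine, added for illustration:)

```python
# Hypothetical multiplier org from the example above: every $1 it spends
# raises $0.90 each for MIRI, AMF, and GFI.
RAISED_PER_CAUSE = 0.90

# Assume three single-cause donors, each valuing only the money that
# flows to their own cause. Funding the org alone ('defecting', i.e.
# the other donors refuse): pay $1.00, get $0.90 of value to your cause.
value_if_funding_alone = RAISED_PER_CAUSE - 1.00
print(f"funding alone: {value_if_funding_alone:+.2f} per dollar")      # -0.10

# Co-funding 'fair shares': each donor pays $1/3 per dollar of org
# spending and still gets the full $0.90 to their cause.
fair_share = 1.00 / 3
value_if_cofunding = RAISED_PER_CAUSE - fair_share
print(f"co-funding fair share: {value_if_cofunding:+.2f} per dollar")  # +0.57
```

So under these assumptions each donor’s unilateral best response is not to fund the org, even though all three are better off co-funding it — which is the prisoner’s-dilemma structure described above.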
I’m afraid I don’t quite understand why such an org would end up unfunded. Such an organisation is not longtermist or animal rights or global poverty specific, and hence seems to fall within the natural remit of the Meta/Infrastructure fund. Indeed according to the goal of the EAIF it seems like a natural fit:
While the other three Funds support direct work on various causes, this Fund supports work that could multiply the impact of direct work, including projects that provide intellectual infrastructure for the effective altruism community, run events, disseminate information, or fundraise for effective charities. [emphasis added]
Nor would this be disallowed by weeatquince’s policy, as no other fund is more appropriate than EAIF:
we aim for the funds to be mutually exclusive. If multiple funds would fund the same project we make the grant from whichever of the Funds seems most appropriate to the project in question.
Just a half-formed thought on how something could be “meta but not longtermist”, because I thought that was a conceptually interesting issue to unpick.
I suppose one could distinguish between two senses of “meta”: (1) doing non-object-level work, or (2) benefiting more than one value-bearer group, where the classic, not-quite-mutually-exclusive three options for value-bearer groups are near-term humans, animals, and far-future lives.
If one is thinking the former way, something is meta to the degree it does non-object-level rather than object-level work (I’m not going to define these), regardless of what domain it works towards. In this sense, ‘meta’ and (e.g.) ‘longtermist’ are independent: you could be one, the other, both, or neither. Hence, if you did non-object-level work that wasn’t focused on the long term, you would be meta but not longtermist (although it might be more natural to say “meta and not longtermist”, as there is no tension between them).
If one is thinking the latter way, one might say that an org is less “meta”, and more “non-meta”, the greater the fraction of its resources intentionally spent to benefit just one value-bearer group. Here “meta” and “non-meta” are mutually exclusive and a matter of degree. A “non-meta” org is one that spends, say, more than 50% of its resources aimed at one group. The upshot is that, on this framework, Animal Advocacy Careers and 80k are not meta, whereas, say, GWWC is meta. Thinking this way, something is meta but not longtermist if it primarily focuses on non-longtermist stuff.
(In both cases, we will run into familiar issues about making precise what an agent ‘focuses on’ or ‘intends’.)
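(A toy sketch of the second sense, just to make the threshold idea concrete. The 50% cutoff comes from the comment above; the example resource splits are invented:)

```python
def is_meta(resource_shares, threshold=0.5):
    """Sense (2): an org is 'non-meta' if more than `threshold` of its
    resources are intentionally aimed at a single value-bearer group."""
    return max(resource_shares.values()) <= threshold

# Invented resource splits over the three value-bearer groups:
print(is_meta({"near-term humans": 0.1, "animals": 0.1, "far-future lives": 0.8}))
# False -> 'non-meta' (and longtermist-focused)
print(is_meta({"near-term humans": 0.4, "animals": 0.3, "far-future lives": 0.3}))
# True -> 'meta'; it is 'meta but not longtermist' if its primary focus
# is non-longtermist work
```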
Hello Michelle. Thanks for replying, but I was hoping you would engage more with the substance of my question—your comment doesn’t really give me any more information than I already had about what to expect.
Let me try again with a more specific case. Suppose you are choosing between projects A and B—perhaps they have each asked for $100k but you only have $100k left. Project A is only eligible for funding from EAIF—the other EA funds consider it outside their respective purviews. Project B is eligible for funding from one of the other EA funds, but so happens to have applied to EAIF. Suppose, further, you think B is more cost-effective at doing good.
What would you do? I can’t think of any other information you would need.
FWIW, I think you must pick A. I think we can assume donors expect the funds not to be overlapping—otherwise, why even have different ones? - and they don’t want their money to go to another fund’s area—otherwise, that’s where they have put it. Hence, picking B would be tantamount to a breach of trust.
(By the same token, if I give you £50, ask you to put it in the collection box for a guide dog charity, and you agree, I don’t think you should send the money to AMF, even if you think AMF is better. If you decide you want to spend my money somewhere else from what we agreed to, you should tell me and offer to return the money.)
In my view, being an enthusiastic longtermist is compatible with finding neartermist worldviews plausible and allocating some funding to them. See, e.g., Ajeya Cotra on the 80,000 Hours podcast. I personally feel excited to fund high-quality projects that develop or promote EA principles, whether they’re longtermist or not. (And Michelle suggested this as well.) For the EAIF, I would evaluate a project like HLI based on whether it seems like it overall furthers the EA project (i.e., makes EA thinking more sophisticated, leads to more people making important decisions according to EA principles, etc.).
FWIW, I think this example is pretty unrealistic, as I don’t think funding constraints will become relevant in this way. I also want to note that funding A violates some principles of donor coordination. In practice, I would probably recommend a split between A and B (recommending my ‘fair share’ to B, and the rest to A); I would probably coordinate this explicitly with the other funds. I would probably also try to refer both A and B to other funders to ensure both get fully funded.
Thanks for this reply, which I found reassuring.
Okay, this is interesting and helpful to know. I’m trying to put my finger on the source of what seems to be a perspectival difference, and I wonder if this relates to the extent to which fund managers should be trying to instantiate donor’s wishes vs fund managers allocating the money by their own lights of what’s best (i.e. as if it were just their money). I think this is probably a matter of degree, but I lean towards the former, not least for long-term concerns about reputation, integrity, and people just taking their money elsewhere.
To explain how this could lead us to different conclusions, if I believed I had been entrusted with money to give to A but not B, then I should give to A, even if I personally thought B was better.
I suspect you would agree with this in principle: you wouldn’t want an EA fund manager to recommend a grant clearly/wildly outside the scope of their fund even if they sincerely thought it was great, e.g. the animal welfare fund recommended something that only benefitted humans even if they thought it was more cost-effective than something animal-focused.
However, I imagine you would disagree that this is a problem in practice, because donors expect there to be some overlap between funds and, in any case, fund managers will not recommend things wildly outside their fund’s remit. (I am not claiming this is a problem in practice; might concern is that it may become one and I want to avoid that.)
I haven’t thought lots about the topic, but all these concerns strike me as a reason to move towards a set of funds that are mutually exclusive and collectively exhaustive—this gives donors greater choice and minimises worries about permissible fund allocation.
This is a longer discussion, but I lean towards the latter, both because I think this will often lead to better decisions, and because many donors I’ve talked to actually want the fund managers to spend the money that way (the EA Funds pitch is “defer to experts” and donors want to go all in on that, with only minimal scope constraints).
Yeah, I agree that all grants should be broadly in scope – thanks for clarifying.
Fund scope definitions are always a bit fuzzy, many grants don’t fit into a particular bucket very neatly, and there are lots of edge cases. So while I’m sympathetic to the idea in principle, I think it would be really hard to do in practice. See Max’s comment.
I care about donor expectations, and so I’d be interested to learn how many donors have a preference for fund scopes to not overlap.
However, I’m not following the suggested reasoning for why we should expect such a preference to be common. I think people—including donors—choose between partly-but-not-fully overlapping bundles of goods all the time, and that there is nothing odd or bad about these choices, the preferences revealed by them, or the partial overlap. I might prefer ice cream vendor A over B even though there is overlap in flavours offered; I might prefer newspaper A over B even though there is overlap in topics covered (there might even be overlap in authors); I might prefer to give to nonprofit A over B even though there is overlap in the interventions they’re implementing or the countries they’re working in; I might prefer to vote for party A over B even though there is overlap between their platforms; and so on. I think all of this is extremely common, and that for a bunch of messy reasons it is not clearly the case that generally it would be best for the world or the donors/customers/voters if overlap was reduced to zero.
I rather think it is the other way around: the only thing that would be clearly odd is if scopes were not overlapping but identical. (And even then there could be other reasons for why this makes sense, e.g., different criteria for making decisions within that scope.)
I definitely have the intuition the funds should be essentially non-overlapping. In the past I’ve given to the LTFF, and would be disappointed if it funded something that fit better within one of the other funds that I chose not to donate to.
With non-overlapping funds, donors can choose their allocation between the different areas (within the convex hull). If the funds overlap, donors can no longer donate to the extremal points. This is basically a tax on donors who want to e.g. care about EA Meta but not Longtermist things.
Consider the ice-cream case. Most ice-cream places will offer Vanilla, Chocolate, Strawberry, Mint etc. If instead they only offered different blends, someone who hated strawberry—or was allergic to chocolate—would have little recourse. By offering each as a separate flavour, they accommodate purists and people who want a mixture. Better for the place to offer each as a standalone option, and let donors/customers combine. In fact, for most products it is possible to buy 100% of one thing if you so desire.
This approach is also common in finance; firms will offer e.g. a Tech Fund, a Healthcare Fund and so on, and let investors decide the relative ratio they want between them. This is also (part of) the reason for the decline of conglomerates—investors want to be able to make their own decisions about which business to invest in, not have it decided by managers.
I agree the finance example is useful. I would expect that in both our case and the finance case the best implementation isn’t actually mutually exclusive funds, but funds with clear and explicit ‘central cases’ and assumptions, plus some sensible (and preferably explicit) heuristics to be used across funds like ‘try to avoid multiple funds investing too much in the same thing’.
That seems to be both because there will (as Max suggests) often be no fact of the matter as to which fund some particular company fits in, and also because the thing you care about when investing in a financial fund is in large part profit. In the case of the healthcare and tech fund, there will be clear overlaps—firms using tech to improve healthcare. If I were investing in one or other of these funds, I would be less interested in whether some particular company is more exactly described as a ‘healthcare’ or ‘tech’ company, and care more about whether they seem to be a good example of the thing I invested in. Eg if I invested in a tech fund, presumably I think some things along the lines of ‘technological advancements are likely to drive profit’ and ‘there are low hanging fruit in terms of tech innovations to be applied to market problems’. If some company is doing good tech innovation and making profit in the healthcare space, I’d be keen for the tech fund to invest in it. I wouldn’t be that fussed about whether the healthcare fund also invested in it. Though if the healthcare fund had invested substantially in the company, presumably the price would go up and it would look like a less good option for the tech fund and by extension, for me. I’d expect it to be best for EA Funds to work similarly: set clear expectations around the kinds of thing each fund aims for and what assumptions it makes, and then worry about overlap predominantly insofar as there are large potential donations which aren’t being made because some specific fund is missing (which might be a subset of a current fund, like ‘non-longtermist EA infrastructure’).
I would guess that EAF isn’t a good option for people with very granular views about how best to do good. Analogously, if I had a lot of views about the best ways for technology companies to make a profit (for example, that technology in healthcare was a dead end) I’d often do better to fund individual companies than broad funds.
In case it doesn’t go without saying, I think it’s extremely important to use money in accordance with the (communicated) intentions with which it was solicited. It seems very important to me that EAs act with integrity and are considerate of others.
Thanks for sharing your intuition, which of course moves me toward preferences for less/no overlap being common.
I’m probably even more moved by your comparison to finance because I think it’s a better analogy to EA Funds than the analogies I used in my previous comments.
However, I still maintain that there is no strong reason to think that zero overlap is optimal in some sense, or would widely be preferred. I think the situation is roughly:
There are first-principles arguments (e.g., your ‘convex hull’ argument) for why, under certain assumptions, zero overlap allows for optimal satisfaction of donor preferences.
(Though note that, due to standard arguments for why at least at first glance and under ‘naive’ assumptions splitting small donations is suboptimal, I think it’s at least somewhat unclear how significant the ‘convex hull’ point is in practice. I think there is some tension here as the loss of the extremal points seems most problematic from a ‘maximizing’ perspective, while I think that donor preferences to split their giving across causes are better construed as being the result of “intra-personal bargaining”, and it’s less clear to me how much that decision/allocation process cares about the ‘efficiency loss’ from moving away from the extremal points.)
However, reality is more messy, and I would guess that usually the optimum is somewhere on the spectrum between zero and full overlap, and that this differs significantly on a case-by-case basis. There are things pushing toward zero overlap, and others pushing toward more overlap (see e.g. the examples given for EA Funds below), and they need to be weighed up. It depends on things like transaction costs, principal-agent problems, the shape of market participants’ utility functions, etc.
Here are some reasons that might push toward more overlap for EA Funds:
Efficiency, transaction/communication cost, etc., as mentioned by Jonas.
My view is that ‘zero overlap’ just fails to carve reality at its joints, and significantly so.
I think there will be grants that seem very valuable from, e.g., both a ‘meta’ and a ‘global health’ perspective, and that it would be a judgment call whether the grant fits ‘better’ with the scope of the GHDF or the EAIF. Examples might be pre-cause-neutral GWWC, a fundraising org covering multiple causes but de facto generating 90% of its donations in global health, or an organization that does research on both meta and global health but doesn’t want to apply for ‘restricted’ grants.
If funders adopted a ‘zero overlap’ policy, grantees might worry that they will only be assessed a long one dimension of their impact. So, e.g., an organization that does research on several causes might feel incentivized to split up, or to apply for ‘restricted’ grants. However, this can incur efficiency losses because sometimes it would in fact be better to have less internal separation between activities in different causes than required by such a funding landscape.
More generally, it seems to me that incomplete contracting is everywhere.
If I as a donor made an ex-ante decision that I want my donations to go to cause X but not Y, I think there realistically would be ‘borderline cases’ I simply did not anticipate when making that decision. Even if I wanted, I probably could not tell EA Funds which things I do and don’t want to give to based on their scope, and neither could EA Funds get such a fine-grained preference out of me if they asked me.
Similarly, when EA Funds provides funding to a grantee, we cannot anticipate all the concrete activities the grantee might want to undertake. The conditions implied by the grant application and any restrictions attached to the grant just aren’t fine-grained enough. This is particularly acute for grants that support someone’s career – which might ultimately go in a different direction than anticipated. More broadly, a grantee will sometimes realize they might want to fund activities for which neither of us have previously thought about if they’re covered by the ‘intentions’ or ‘spirit’ of the grant, and this can include activities that would be more clearly in another fund’s scope.
To drive home how strongly I feel about the import of the previous points, my immediate reaction to hearing “care about EA Meta but not Longtermist things” is literally “I have no idea what that’s supposed to even mean”. When I think a bit about it, I can come up with a somewhat coherent and sensible-seeming scope of “longtermist but not meta”, but I have a harder time making sense of “meta but not longtermist” as a reasonable scope. I think if donors wanted that everything that’s longtermist (whether meta or not) was handled by the LTFF, then we should clarify the LTFF’s scope, remove the EAIF, and introduce a “non-longtermist EA fund” or something like that instead—as opposed to having an EAIF that funds things that overlap with some object-level cause areas but not others.
Some concrete examples:
Is 80k meta or longtermist? They have been funded by the EAIF before, but my understanding is that their organizational position is pro-longtermism, that many if not most of their staff are longtermist, and that this has significant implications for what they do (e.g., which sorts of people to advise, which sorts of career profiles to write, etc.).
What about Animal Advocacy Careers? If they wanted funding from EA Funds, should they get it from the AWF or the EAIF?
What about local EA groups? Do we have to review their activities and materials to understand which fund they should be funded by? E.g., I’ve heard that EA NYC is unusually focused on animal welfare (idk how strongly, and if this is still true), and I’m aware of other groups that seem pretty longtermist. Should such groups then not be funded by the EAIF? Should groups with activities in several cause areas and worldviews be co-funded by three or more funds, creating significant overhead?
What about CFAR? Longtermist? Meta?
--
Taking a step back, I think what this highlights is that feedback like this in the comment may well move me toward “be willing to incur a bit more communication cost to discuss where a grant fits best, and to move grants that arguably fit somewhat better with a different fund”. But (i) I think where I’d end up is still a far cry from ‘zero overlap’, and (ii) I think that even if I made a good-faith efforts it’s unclear if I would better fulfil any particular donor’s preference because, due to the “fund scopes don’t carve reality at its joint” point, donors and me might make different judgment calls on ‘where some grant fits best’.
In addition, I expect that different donors would disagree with each other about how to delineate scopes, which grants fit best where, etc.
This also means it would probably help me more in satisfying donor preferences if I got specific feedback like “I feel grant X would have fitted better with fund Y”, as opposed to more abstract preferences about the amount of overlap in fund scopes. (Though I recognize that I’m kind of guilty of having started/fueled the discussion in more abstract terms.)
However, taking yet another step back, I think that when deciding on the best strategy for EA Funds/the EAIF going forward, there are stakeholders besides the donors whose interests matter as well: e.g., grantees, fund managers, and beneficiaries. As implied by some of my points above, I think there can be some tensions between these interests. How to navigate this is messy, and depends crucially on, among other things, the answer to this question.
My impression is that when the goal is to “maximize impact” – even within a certain cause or by the lights of a certain worldview – we’re less bottlenecked by funding than by high-quality applications, highly capable people ‘matched’ with highly valuable projects they’re a good fit for, etc. This makes me suspect that the optimal strategy would put somewhat less weight on maximally satisfying donor preferences – when they’re in tension with other desiderata – than might be the case in some other nonprofit contexts. So even if we got a lot of feedback along the lines of “I feel grant X would have fitted better with fund Y”, I’m not sure how much that would move the EAIF’s strategy going forward.
(Note that the above is about what ‘products’ to offer donors going forward. Separately from that, I think it’s of course very important not to be misleading, and to make a good-faith effort to use past donations in a way that is consistent with what we told donors we’d do at the time. These demands are ‘quasi-deontological’ and can’t easily be sacrificed for the sake of better meeting other stakeholders’ interests.)
Nothing I have seen makes me think the EAIF should change its decision criteria. It seems to be working very well and good stuff is getting funded. So don’t change that to address a comparatively very minor issue like this; that would be throwing the baby out with the bathwater!
--
If you showed me the list here and said ‘Which EA Fund should fund each of these?’, I would have put the Lohmar and the CLTR grants (which both look like very good grants that I’m glad are getting funded) in the longtermist fund. Based on your comments above you might have made the same call as well.
From an outside view, the actual cost of making the grants from the pot of another fund seems incredibly small. At minimum, it could just mean having someone look over the final decisions to see if any feel like they belong in a different fund, quickly double-checking with the other fund’s grantmakers that they have no strong objections, and then granting the money from the other pot. (You could even do that after the decision to grant has been communicated to applicants, so there’s no reason to hold anything up; if the second fund objects, the money can still be given by the first fund.)
And then all those dogmatic donors to the EAIF who don’t like longtermist stuff can go to bed happy, and all those dogmatic donors to the LTFF who don’t like meta stuff can go to bed happy, and everyone feels like their money is going where they expect it to go, etc. Which does matter a little bit, because as a donor you feel that you really need to be able to trust that the money is going where it says on the tin and not to something else.
(But sure, if the admin costs here are actually really high or something, then it’s not a big deal; it matters a little bit to some donors but is not the most important thing to get right.)
Thank you for this suggestion. It makes sense to me that this is how the situation looks from the outside.
I’ll think about the general issue and suggestions like this one a bit more, but currently don’t expect large changes to how we operate. I do think this might mean that in future rounds there may be a similar fraction of grants that some donors perceive as fitting better with another fund. I acknowledge that this is not ideal, but I currently expect it will seem best after considering the costs and benefits of alternatives.
So please view the following points as me trying to explain why I don’t expect to adopt what may sound like a good suggestion, while still being appreciative of the feedback and suggestions.
I think based on my EA Funds experience so far, I’m less optimistic that the cost would be incredibly small. E.g., I would expect less correlation between “EAIF managers think something is good to fund from a longtermist perspective” and “LTFF managers think something is good to fund from a longtermist perspective” (and vice versa for ‘meta’ grants) than you seem to expect.
This is because grantmaking decisions in these areas rely a lot on judgment calls that different people might make differently even if they’re aligned on broad “EA principles” and other fundamental views. I have this view both because of some cases I’ve seen where we actually discussed (aspects of) grants across both the EAIF and LTFF managers, and because within the EAIF committee large disagreements are not uncommon (and I have no reason to believe that disagreements would be smaller between LTFF and EAIF managers than within the EAIF committee alone).
To be clear, I would expect decision-relevant disagreements for a minority of grants—but not a sufficiently clear minority that I’d be comfortable acting on “the other fund is going to make this grant” as a default assumption.
Your suggestion of retaining the option to make the grant through the ‘original’ fund would help with this, but not with the two following points.
I think another issue is duplication of time cost. If the LTFF told me “here is a grant we want to make, but we think it fits better for the EAIF—can you fund it?”, then I would basically always want to have a look at it. In maybe 50% [?, unsure] of cases this would only take me like 10 minutes, though the real attention + time cost would be higher. In the other 50% of cases I would want to invest at least another hour—and sometimes significantly more—assessing the grant myself. E.g., I might want to talk to the grantee myself or solicit additional references. This is because I expect that donors and grantees would hold me accountable for that decision, and I’d feel uncomfortable saying “I don’t really have an independent opinion on this grant, we just made it b/c it was recommended by the LTFF”.
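(Making that arithmetic explicit: under those rough numbers, the expected time per referred grant would be at least 0.5 × 10 min + 0.5 × 60 min = 35 minutes, before counting the extra attention cost or the cases that take significantly more than an hour.)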
(In general, I worry that “quickly double-checking” something is close to impossible between two groups of four or so people, all of whom are very opinionated, can’t necessarily predict each other’s views very well, and are juggling dozens of grant assessments in parallel, and most of whom are very time-constrained and doing all of this next to their main jobs.)
A third issue is that increasing the delay between the time of a grant application and the time of a grant payout is somewhat costly. So, e.g., inserting another ‘review allocation of grants to funds’ step somewhere would somewhat reduce the time and attention cost by bundling all scoping decisions together; but it would also mean a delay of potentially a few days or even more, given fund managers’ constrained availabilities. This is not clearly prohibitive, but it is significant, since I think some grantees care about the time window between application and potential payment being short.
However, there may be some cases where grants could be quickly transferred (e.g., if for some reason managers from different funds had been involved in a discussion anyway), or there may be other, less costly processes for how to organize transfers. This is definitely something I will be paying a bit more attention to going forward, but for the reasons explained in this and other comments I currently don’t expect significant changes to how we operate.
Thank you so much for your thoughtful and considered reply.
Sorry to change topic, but this is super fascinating and more interesting to me than questions of fund admin time (however much I like discussing organisational design, I am happy to defer to you / Jonas / etc. on whether the admin cost is too high – ultimately only you know that).
Why would there be so much disagreement (so much that you would routinely want to veto each other’s decisions if you had the option)? It seems plausible that if there are such levels of disagreement, then maybe:
One fund is making quite poor decisions AND/OR
There is significant potential to use consensus decisions making tools as a large group to improve decision quality AND/OR
There are some particularly interesting lessons to be learned by identifying the cruxes of these disagreements.
Just curious and typing up my thoughts. Not expecting good answers to this.
I think all funds are generally making good decisions.
I think a lot of the effect is just that making these decisions is hard, and so that variance between decision-makers is to some extent unavoidable. I think some of the reasons are quite similar to why, e.g., hiring decisions, predicting startup success, high-level business strategy, science funding decisions, or policy decisions are typically considered to be hard/unreliable. Especially for longtermist grants, on top of this we have issues around cluelessness, potentially missing crucial considerations, sign uncertainty, etc.
I think you are correct that both of the following are true:
There is potential for improving decision quality by spending time discussing diverging views, improving the way we aggregate opinions to the extent they still differ after the amount of discussion that is possible, and maybe by using specific ‘decision-making tools’ (e.g., certain forms of structured discussion + voting).
There are interesting lessons to be learned by identifying cruxes. Some of these lessons might directly improve future decisions; others might be valuable for other reasons—e.g., generating active grantmaking ideas, or producing shareable cruxes/results that are a tiny bit epistemically helpful to many people.
I think a significant issue is that both of these cost time (both identifying how to improve in these areas and then implementing the improvements), which is a very scarce resource for fund managers.
I don’t think it’s obvious whether at the margin the EAIF committee should spend more or less time to get more or fewer benefits in these areas. Hopefully this means we’re not too far away from the optimum.
I think there are different views on this within EA Funds (both within the EAIF committee, and potentially between the average view of the EAIF committee and the average view of the LTFF committee—or at least this is suggested by revealed preferences as my loose impression is that LTFF fund managers spend more time in discussions with each other). Personally, I actually lean toward spending less time and less aggregation of opinions across fund managers—but I think currently this view isn’t sufficiently widely shared that I expect it to be reflected in how we’re going to make decisions in the future.
But I also feel a bit confused, because some people (e.g., some LTFF fund managers, Jonas) have told me that spending more time discussing disagreements seemed really helpful to them, while my experience with this and my inside-view prediction of what spending more time on discussions would look like make me expect less value. I don’t really know why that is—it could be that I’m just bad at getting value out of discussions, or at updating my views, or something like that.
I am always amazed at how much you fund managers all do given this isn’t your paid job!
Fair enough. FWIW my general approach to stuff like this is not to aim for perfection but to aim for each iteration/round to be a little bit better than the last.
That is possible. But it’s also possible that you are particularly smart and have well-thought-out views, and that people learn more from talking to you than you do from talking to them!
(And/or just that everyone is different and different ways of learning work for different people)
Thanks for writing up this detailed response. I agree with your intuition here that ‘review, refer, and review again’ could be quite time consuming.
However, I think it’s worth considering why this is the case. Do we think that the EAIF evaluators are similarly qualified to judge primarily-longtermist activities as the LTFF people, and the differences in views are basically noise? If so, it seems plausible to me that the EAIF evaluators should be able to unilaterally make disbursements from the LTFF’s money. In this setup, the specific fund you apply to is really about your choice of evaluator, not your choice of donor, and the fund you donate to is about your choice of cause area, not your choice of evaluator-delegate.
In contrast, if the EAIF people are not as qualified to judge primarily-longtermist (or primarily animal rights, etc.) projects as the specialised funds’ evaluators, then they should probably refer the application early on in the process, prior to doing detailed due diligence etc.
Thank you for sharing—as I mentioned, I find concrete feedback spelled out in terms of particular grants especially useful.
[ETA: btw I do think part of the issue here is an “object-level” disagreement about where the grants best fit—personally, I definitely see why among the grants we’ve made they are among the ones that seem ‘closest’ to the LTFF’s scope; but I don’t personally view them as clearly being more in scope for the LTFF than for the EAIF.]
Thank you, Max. I guess the interesting question then is why we think different things. Is it just a natural case of different people thinking differently, or have I made a mistake, or is there some way the funds could communicate better?
One way to consider this might be to look at just the basic info / fund scope on both the EAIF and LTFF pages and ask: “if the man on the Clapham omnibus read only the information and descriptions of these funds given there, where would he think these grants should sit?”
A further point is donor coordination / moral trade / fair-share giving. Treating it as a tax (as Larks suggests) could often amount to defecting in an iterated prisoner’s dilemma between donors who care about different causes. E.g., if the EAIF funded only one org, which raised $0.90 for MIRI, $0.90 for AMF, and $0.90 for GFI for every dollar spent, this approach would lead to it not getting funded, even though co-funding with donors who care about other cause areas would be a substantially better approach.
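To make the payoff structure explicit, here is a minimal sketch (the $0.90-per-dollar multipliers are the ones from the example above; the ‘solo’ vs ‘fair share’ framing and the equal three-way split are illustrative assumptions):

```python
# Minimal sketch of the co-funding example above. The $0.90-per-dollar
# multipliers come from the comment; the equal three-way split among
# donor groups is an illustrative assumption.

RAISED_PER_DOLLAR = {"MIRI": 0.90, "AMF": 0.90, "GFI": 0.90}

def solo_multiplier(cause: str) -> float:
    # A single-cause donor funds the org alone: they pay the full $1
    # but only value what their own cause receives -> 0.9x, so they defect.
    return RAISED_PER_DOLLAR[cause] / 1.0

def fair_share_multiplier(cause: str, n_groups: int = 3) -> float:
    # Each of the n donor groups pays an equal share of the $1; each
    # group's cause still receives the full $0.90 -> 2.7x per donor dollar.
    return RAISED_PER_DOLLAR[cause] / (1.0 / n_groups)

for cause in RAISED_PER_DOLLAR:
    print(f"{cause}: solo {solo_multiplier(cause):.2f}x, "
          f"fair share {fair_share_multiplier(cause):.2f}x")
```

So each cause-partisan donor sees a below-1x return if they fund the org alone (and so declines to fund it), but a 2.7x return per dollar under fair-share co-funding, which is the sense in which treating the overlap as a tax amounts to defecting.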
You might respond that there’s no easy way to verify whether others are cooperating. I might respond that you can verify how much money the fund gets in total and can ask EA Funds about the funding sources. (Also, I think that acausal cooperation works in practice, though perhaps the number of donors who think about it in this way is too small for it to work here.)
I’m afraid I don’t quite understand why such an org would end up unfunded. Such an organisation is not longtermist or animal rights or global poverty specific, and hence seems to fall within the natural remit of the Meta/Infrastructure fund. Indeed according to the goal of the EAIF it seems like a natural fit:
Nor would this be disallowed by weeatquince’s policy, as no other fund is more appropriate than EAIF:
Just a half-formed thought on how something could be “meta but not longtermist”, because I thought that was a conceptually interesting issue to unpick.
I suppose one could distinguish between two meanings of “meta”: (1) doing non-object-level work, or (2) benefiting more than one value-bearer group, where the classic, not-quite-mutually-exclusive three options for value-bearer groups are (1) near-term humans, (2) animals, and (3) far-future lives.
If one is thinking the former way, something is meta to the degree it does non-object-level rather than object-level work (I’m not going to define these), regardless of what domain it works towards. In this sense, ‘meta’ and (e.g.) ‘longtermist’ are independent: you could be one, the other, both, or neither. Hence, if you did non-object-level work that wasn’t focused on the long term, you would be meta but not longtermist (although it might be more natural to say “meta and not longtermist”, as there is no tension between them).
If one is thinking the latter way, one might say that an org is less “meta”, and more “non-meta”, the greater the fraction of its resources intentionally spent to benefit just one value-bearer group. Here “meta” and “non-meta” are mutually exclusive and a matter of degree. A “non-meta” org is one that spends, say, more than 50% of its resources aimed at one group. The thought is that, on this framework, Animal Advocacy Careers and 80k are not meta, whereas, say, GWWC is meta. Thinking this way, something is meta but not longtermist if it primarily focuses on non-longtermist stuff.
(In both cases, we will run into familiar issues in making precise what an agent ‘focuses on’ or ‘intends’. A rough sketch of the second reading is below.)
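To pin down the second reading, here is a minimal sketch using the 50% cutoff mentioned above (the function name and the example resource splits are made up for illustration):

```python
# Hypothetical illustration of the second reading of "meta": an org is
# "non-meta" if it intentionally spends more than some threshold of its
# resources on a single value-bearer group. The 50% cutoff is the one
# suggested above; the example splits are invented.

def is_meta(resource_split: dict[str, float], threshold: float = 0.5) -> bool:
    # Meta iff no single value-bearer group gets more than `threshold`
    # of the org's resources.
    return max(resource_split.values()) <= threshold

# Mostly-animals org: non-meta on this reading.
print(is_meta({"near-term humans": 0.1, "animals": 0.8, "far-future lives": 0.1}))  # False

# No dominant group: meta on this reading.
print(is_meta({"near-term humans": 0.3, "animals": 0.3, "far-future lives": 0.4}))  # True
```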