(I work for EA Funds, including EAIF, helping out with public communications among other work. I’m not a grantmaker on EAIF and I’m not responsible for any decision on any specific EAIF grant).
Hi. Thanks for writing this. I appreciate you putting the work into this, even though I strongly disagree with the framing of most of the parts of the doc that I feel informed enough to opine on, as well as most of the object-level claims.
Ultimately, I think the parts of your report about EA Funds are mostly incorrect or substantively misleading, given the best information I have available. But I think it’s possible I’m misunderstanding your position or I don’t have enough context. So please read the following as my own best understanding of the situation, which can definitely be wrong. But first, onto the positives:
I appreciate that the critical points in the doc are made as technical critiques, rather than paradigmatic ones. Technical critiques are ones that people are actually compelled to respond to, and can actually compel action (rather than just making people feel vaguely bad/smug without compelling any change).
The report has many numerical/quantitative details. In theory, those are easier to falsify.
The report appears extensive and must have taken a long time to write.
There are also some things the report mentioned that we have also been tracking, and I believe we have substantial room for improvement:
Our grant evaluation process is still slower than we would like.
While broadly faster than other comparable funds I’m aware of (see this comment), I still think we have substantial room for improvement.
Our various subfunds, particularly EAIF, have at points been understaffed and under-capacity.
While I strongly disagree with the assessment that hiring more grantmakers is “fairly straightforward” (bad calls with grant evaluations are very costly, for reasons including but not limited to insufficient attention to adversarial selection; empirically most EA grantmaking organizations have found it very difficult to hire), I do think on the margin we can do significantly more on hiring.
Our limited capacity has made it difficult for us to communicate and/or coordinate with all the other stakeholders in the system, so we’re probably missing out on key high-EV opportunities (eg several of our existing collaborations in AI safety have started later than they counterfactually could have, and we haven’t been able to schedule time to fly out to coordinate with folks in London/DC/Netherlands/Sweden).
One of the reasons I came on to EA Funds full-time is to help communicate with various groups.
Now, onto the disagreements:
Procedurally:
I was surprised that so many of the views ascribed to “EA Funds’ leadership” were from notes taken during a single informal call with the EA Funds project lead, which you did not confirm was okay to share publicly.
They said that they usually explicitly request privacy before quoting them publicly, but are not sure if they did so in this instance.
They were also surprised that there was a public report out at all.
My best guess is that there was a misunderstanding that arose from a norm difference, where you operate under the expectation that professional meetings are public unless explicitly stated otherwise, whereas the norm that I (and I think most EAs?) are more used to is that 1-1 meetings are private unless explicitly stated otherwise.
They also disagree with the characterization of almost all of their comments (will say more below), which I think speaks to the epistemic advantages of confirming before publicly attributing comments made by someone else.
I’d have found it helpful if you shared a copy of the post before making it public.
We could’ve corrected most misunderstandings.
If you were too busy for private corrections, I could’ve at least written this response earlier.
Many things in the report were simply false (more details below). Having a fact-checking process might be useful going forward.
Substantively:
When the report quoted “CEA has had to step in and provide support in evaluating EAIF grants for them” I believe this is false or at least substantively misleading.
The closest thing I can think of is that we ask CEA’s Community Health team for help reviewing comm health issues with our grants (which, as I understand it, is part of their explicit job duties, and both sides are happy with the arrangement).
It’s possible your source misunderstood the relationship and thought the Comm Health work was supererogatory or accidental?
We frequently ask technical advisors for advice on project direction in a less institutionalized capacity as well[1]. I think “step in” conveys the wrong understanding, as most of the grant evaluation is still done by the various funds.
(To confirm this impression, we checked with multiple senior people involved with CEA’s work; however, this was not an exhaustive sweep and it’s definitely possible that my impression is incorrect).
While it is true that we’re much slower than we would like, it seems very unreasonable to single out EA Funds grantmaking as “unreasonably long” when other grantmakers are as slow or slower.
“Not weighing in on LTFF specifically, but from having done a lot of traditional nonprofit fundraising, I’d guess two months is a faster response time than 80% of foundations/institutional funders, and one month is probably faster than like 95%+. My best guess at the average for traditional nonprofit funders is more like 3-6 months. I guess my impression is that even in the worst cases, EA Funds has been operating pretty well above average compared to the traditional nonprofit funding world (though perhaps that isn’t the right comparison). Given that LTFF is funding a lot of research, 2 months is almost certainly better than most academic grants.
My impression from what I think is a pretty large sample of EA funders and grants is also that EA Funds is the fastest turnaround time on average compared to the list you mention [Editor’s note: “Open Phil, SFF, Founders Pledge, and Longview”] ([with] exceptions in some cases in both directions for EA Funds and other funders)”
I also sanity-checked this with both Google Search and GPT-4.[2]
Broadly, I’m aware that other people on the forum also believe that we’re slow, but I think most people believe this because:
We have our own aspirations to be faster, and we try to do so.
They think from a first-principles perspective that grantmakers “can” be faster.
We talk about our decisions and process very publicly, and so become more of an easy target for applicants’ grievances.
But while I understand and sympathize with other people’s frustrations[3], it is probably not factually true that we’re slower in relative terms than other organizations, and it’s odd to single out EA Funds here.
When your report said, “Critically, the expert reports that another major meta donor found EA Funds leadership frustrating to work with, and so ended up disengaging from further meta grantmaking coordination” my guess is that the quoted position is not true.
I feel like we’re keeping tabs on all the major donors (OP, SFF, Longview, etc.). So I’m not sure who they could possibly be referring to.
Though I guess it’s possible that there is a major donor that’s so annoyed with us that they made efforts to hide themselves from us so we haven’t even heard about them.
But I think it’s more likely that the person in question isn’t a major donor.
To be clear, any donor feeling frustrated with us is regrettable. While it is true that not all donors can or should want to work with us (e.g. due to sufficiently differing cause prioritization or empirical worldviews), it is still regrettable that people have an emotionally frustrating experience.
Your report says that EA Funds leadership was “strongly dismissing the value of prioritization research, where other grantmakers generally expressed higher uncertainty”, but this is false.
I want to be clear that EA Funds has historically been, and currently is, quite positive on cause prioritization in general (though of course specific work may be lower quality, or good work that’s not cause prioritization may be falsely labeled as cause prioritization).
By revealed preferences, EAIF has given very large grants to worldview investigations and moral weights work at Rethink Priorities
By stated preferences, “research that aids prioritization across different cause areas” was listed as one of the central examples of things that EAIF would be excited to fund.
My understanding is that the best evidence you have for this view is that EA Funds leadership “would consider less than half of what [Rethink Priorities] does cause prioritization.”
My best guess is that this is just a semantics misunderstanding, where EA Funds’ project lead was trying to convey a technical point about the difference between inter-cause prioritization vs intervention prioritization, whereas you understood his claim as an emotive position of “boo cause prioritization”.
Your report states that “EA Funds leadership doesn’t believe that there is more uncertainty now with EA Fund’s funding compared to other points in time.” This is clearly false.
I knew coming on to EA Funds that the job would have greater uncertainty than other jobs I’ve had in the past, and I believe this was adequately communicated to me.
We think about funding uncertainty a lot. EA Funds’ funding has always been uncertain, and things have gotten worse since Nov 2022.
Nor would it be consistent with either our stated or revealed preferences.
Our revealed preference is that we spend substantially more staff time on fundraising than we have in the past.
The last one was even the title of a post with 4000+ views!
I don’t have a good idea for how much more unambiguous we could be.
Semantically:
I originally wanted to correct misunderstandings and misrepresentations of EA Funds’ positions more broadly in the report. However, I think there were just a lot of misunderstandings overall, so I think it’s simpler for people to just assume I contest almost every characterization of the form “EA Funds believes X”. A few select examples:
When your report claimed “leadership is of the view that the current funding landscape isn’t more difficult for community builders”, I (a) don’t think we’ve said that, and (b) to the extent we believe it, it’s relative to eg 2019; it’d be false compared to the 2022 era of unjustly excessive spending.
“The EA Funds chair has clarified that EAIF would only really coordinate with OP, since they’re reliably around; only if the [Meta-Charity Funders] was around for some time, would EA Funds find it worth factoring into their plans. ”
To clarify, at the time the fund chair was unsure if MCF was only going to have one round. If they only have one round, it wouldn’t make sense to change EA Funds’ strategy based on that. If they have multiple rounds (eg more than 2), it could be worth factoring in. The costs of coordination are nontrivially significant.
It’s also worth noting that the fund chair had 2 calls with MCF and passed on various grants that they thought MCF might be interested in evaluating, which some people may consider coordination.
We’ve also coordinated more extensively with other non-OP funders, and have plans in the works for other collaborations with large funders.
“In general, they don’t think that other funders outside of OP need to do work on prioritization, and are in general sceptical of such work. ”
AFAIK nobody at EA Funds believes this.
[EA funds believes] “so if EA groups struggle to raise money, it’s simply because there are more compelling opportunities available instead.”
The statement seems kind of conceptually confused? Funders should always be trying to give to the most cost-effective projects on the margin.
The most charitable nearby position I could think of is that some people might believe that
“most community-building projects that are not funded now aren’t funded because of constraints on grantmaker capacity,” so grantmakers make poor decisions
Note that the corollary to the above is that many of the community building projects that are funded should not have been funded.
I can’t speak to other people at EA Funds, but my own best guess is that this is not true (for projects that people online are likely to have heard of).
I’m more optimistic about boutique funding arrangements for projects within people’s networks that are unlikely to have applied to big funders, or people inspiring those around them to create new projects.
If projects aren’t funded, the biggest high-level reason is that there are limited resources in the world in general, and in EA specifically. You might additionally also believe that meta-EA in general is underfunded relative to object-level programs.
(Minor): To clarify “since OP’s GHW EA team is focusing on effective giving, EAIF will consider this less neglected” we should caveat this by saying that less neglected doesn’t necessarily mean less cost-effective than other plausible things for EAIF to fund.
Re: “EA Funds not posting reports or having public metrics of success”
My understanding is that you are (understandably) upset that we don’t have clear metrics and cost-effectiveness analyses written up.
I think this is a reasonable and understandable complaint, and we have indeed gotten this feedback before from others and have substantial room to improve here.
However, I think many readers might interpret the statement as something stronger, e.g. interpreting it as us not posting reports or writing publicly much at all.
As a matter of practice, we write a lot more about what we fund and our decision process than any other EA funder I’m aware of (and likely more than most other non-EA funders). I think many readers may get the wrong impression of our level of transparency from that comment.
Note to readers: I reached out to Joel to clarify some of these points before posting. I really appreciate his prompt responses! Due to time constraints, I decided to not send him a copy of this exact comment before posting publicly.
“The median time to receive a response for an academic grant can vary significantly depending on the funding organization, the field of study, and the specific grant program. Generally, the process can take anywhere from a few months to over a year. ” “The timeline for receiving a response on grant applications can vary across different fields and types of grants, but generally, the processes are similar in length to those in the academic and scientific research sectors.” “Smaller grants in this field might be decided upon quicker, potentially within 3 to 6 months [emphasis mine], especially if they require less funding or involve fewer regulatory hurdles.”
Being funded by grants kind of sucks as an experience compared to e.g. employment; I dislike adding to such frustrations. There are also several cases I’m aware of where counterfactually impactful projects were not taken due to funders being insufficiently able to fund things in time; in some of those instances I’m more responsible than anybody else.
Thanks for engaging. I appreciate that we can have a fairly object-level disagreement over this issue; it’s not personal, one way or another.
Meta point to start: We do not make any of these criticisms of EA Funds lightly, and when we do, it’s against our own interests, because we ourselves are potentially dependent on EAIF for future funding.
To address the points brought up, generally in the order that you raised them:
(1) On the fundamental matter of publication. I would like to flag that, from checking the email chain plus our own conversation notes (both verbatim and cleaned-up), there was no request that this not be publicized.
For all our interviews, whenever someone flagged that X data or Y document or indeed the conversation in general shouldn’t be publicized, we respected this and did not do so. In the public version of the report, this is most evident in our spreadsheet, where a whole bunch of grant details have been redacted; but more generally, anyone with the “true” version of the report shared with the MCF leadership will also be able to spot differences. We also redacted all qualitative feedback from the community survey, and by default anonymized all expert interviewees who gave criticisms of large grantmakers, to protect them from backlash.
I would also note that we generally attributed views to, and discussed, “EA Leadership” in the abstract, both because we didn’t want to make this a personal criticism, and also because it afforded a degree of anonymity.
At the end of the day, I apologize if the publication was not in line with what EA Funds would have wanted—I agree it’s probably a difference in norms. In a professional context, I’m generally comfortable with people relaying that I said X in private, unless there was an explicit request not to share (e.g. I was talking to a UK-based donor yesterday, and I shared a bunch of my grantmaking views. If he wrote a post on the forum summarizing the conversations he had with a bunch of research organizations and donor advisory orgs, including our own, I wouldn’t object). More generally, I think if we have some degree of public influence (including by the money we control) it would be difficult from the perspective of public accountability if “insiders” such as ourselves were unwilling to share with the public what we think or know.
(2) For the issue of CEA stepping in: In our previous conversation, you relayed that you asked a senior person at CEA, and they in turn said that “they’re aware of some things that might make the statement technically true but misleading, and they are not aware of anything that would make the statement non-misleading, although this isn’t authoritative since many things happened at CEA”. For the record, I’m happy to remove this, since the help/assistance, if any, doesn’t seem too material one way or another.
(3) For whether it’s fair to characterize EAIF’s grant timelines as unreasonably long. As previously discussed, I think the relevant metric is EAIF’s own declared timetable (“The Animal Welfare Fund, Long-Term Future Fund and EA Infrastructure Fund aim to respond to all applications in 2 months and most applications in 3 weeks.”). This is because organizations and individuals make plans based on when they expect to get an answer—when to begin applying; whether to start or stop projects; whether to go find another job; whether to hire or fire; whether to reach out to another grantmaker who isn’t going to support you until and unless you have already exhausted the primary avenues of potential funding.
(4) The issue of the major donor we relayed was frustrated/turned off. You flag that you’re keeping tabs on all the major donors, and so don’t think the person in question is major. While I agree that it’s somewhat subjective, it’s also true that this is a HNWI who, beyond their own giving, is also sitting on the legal or advisory boards of many other significant grantmakers and philanthropic outfits. Also, knowledgeable EAs in the space have generally characterized this person to me as an important meta funder (in the context of my own organization then thinking of fundraising, and being advised as to whom to approach). So even if they aren’t major in the sense that OP (or EA Funds) are, they could reasonably be considered fairly significant. In any case, the discussion is backwards, I think: I agree that they don’t play as significant a role in the community right now (and so your assessment of them as non-major is reasonable), but that would be because of the frustration they have had with EA Funds (and, to be fair, the EA community in general, I understand). So perhaps it’s best to understand this as potentially vs currently major.
(5) On whether it’s fair to characterize EA Funds leadership as being strongly dismissive of cause prioritization. We agree that grants have been made to RP; so the question is cause prioritization outside OP and OP-funded RP. Our assessment of EA Funds’ general scepticism of prioritization was based, among other things, on what we reported in the previous section: “They believe cause prioritization is an area that is talent constrained, and there aren’t a lot of people they feel great giving to, and it’s not clear what their natural pay would be. They do not think of RP as doing cause prioritization, and though in their view RP could absorb more people/money in a moderately cost-effective way, they would consider less than half of what they do cause prioritization. In general, they don’t think that other funders outside of OP need to do work on prioritization, and are in general sceptical of such work.” In your comment, you dispute that the bolded part in particular is true, saying “AFAIK nobody at EA Funds believes this.”
We have both verbatim and cleaned up/organized notes on this (n.b. we shared both with you privately). So it appears we have a fundamental disagreement here (and also elsewhere) as to whether what we noted down/transcribed is an accurate record of what was actually said.
TLDR: Fundamentally, I stand by the accuracy of our conversation notes.
(a) Epistemically, it’s more likely that one doesn’t remember what one said previously than that the interviewer (if in good faith) catastrophically misunderstood and recorded something that wholesale wasn’t said at all (as opposed to a more minor error, which we agree can totally happen; see below)
(b) From my own personal perspective: I used to work in government and in consulting (for governments). It was standard practice to have notes of meetings, made by junior staffers and then submitted to more senior staff for edits and approval. Nothing resembling this happened to either me or anyone else (i.e. a total misunderstanding tantamount to fabrication, saying that XYZ was said when nothing of the sort took place).
(c) My word does not need to be taken for this. We interviewed other people, and I’m beginning to reach out to them again to check that our notes matched what they said. One has already responded (the person we labelled Expert 5 on Page 34 of the report); they said “This is all broadly correct” but requested we make some minor edits to the following paragraphs (changes indicated by bold and strikethrough):
Expert 5: Reports both substantive and communications-related concerns about EA Funds leadership.
For the latter, the expert reports both himself and others finding communications with EA Funds leadership difficult and the conversations confusing.
For the substantive concerns – beyond the long wait times EAIF imposes on grantees, the expert was primarily worried that EA Funds leadership has been unreceptive to new ideas and that they are unjustifiably confident that EA Funds is fundamentally correct in its grantmaking decisions. In particular, it appears to the expert that EA Funds leadership does not believe that additional sources of meta funding would be useful **for non-EAIF grants** [phrase added] – they believe that projects unfunded by EAIF do not deserve funding at all (rather than some projects perhaps not being the right fit for the EAIF, but potentially worth funding by other funders with different ethical worldviews, risk aversion or epistemics). Critically, the expert reports that another major meta donor found EA Funds leadership frustrating to work with, ~~and so ended up disengaging from further meta grantmaking coordination~~ **and this likely is one reason they ended up disengaging from further meta grantmaking coordination** [replaced].
My even handed interpretation of this overall situation (trying to be generous to everyone) is that what was reported here (“In general, they don’t think that other funders outside of OP need to do work on prioritization”) was something the EA Funds interviewee said relatively casually (not necessarily a deep and abiding view, and so not something worth remembering) - perhaps indicative of scepticism of a lot of cause prioritization work but not literally thinking nothing outside OP/RP is worth funding. (We actually do agree with this scepticism, up to an extent).
(6) On whether our statement that “EA Funds leadership doesn’t believe that there is more uncertainty now with EA Fund’s funding compared to other points in time” is accurate. You say that this is clearly false. Again, I stand by the accuracy of our conversation notes. And in fact, I actually do personally and distinctively remember this particular exchange, because it stood out, as did the exchange that immediately followed, on whether OP’s use of the fund-matching mechanism creates more uncertainty.
My generous interpretation of this situation is, again, some things may be said relatively casually, but may not be indicative of deep, abiding views.
(8) For the various semantic disagreements. Some of it we discussed above (e.g. the OP cause prioritization stuff); for the rest -
On whether this part is accurate: “Leadership is of the view that the current funding landscape isn’t more difficult for community builders”. Again, we do hold that this was said, based on the transcripts. And again, to be even handed, I think your interpretation (b) is right—probably your team is thinking of the baseline as 2019, while we were thinking mainly of 2021-now.
On whether this part is accurate: “The EA Funds chair has clarified that EAIF would only really coordinate with OP, since they’re reliably around; only if the [Meta-Charity Funders] was around for some time, would EA Funds find it worth factoring into their plans.” I don’t think we disagree too much, if we agree that EA Funds’ position is that coordination is only worthwhile if the counterpart is around for a bit. Otherwise, it’s just some subjective disagreement on what coordination is or what significant degrees of it amount to.
On this statement: “[EA funds believes] “so if EA groups struggle to raise money, it’s simply because there are more compelling opportunities available instead.”
In our discussion, I asked about the community building funding landscape being worse; the interviewee disagreed with this characterization, and started discussing how it’s more that standards have risen (which we agree is a factor). The issue is that the other factor of objectively less funding being available was not brought up, even though it is, in our view, the dominant factor (and if you asked community builders, this would be all they talk about). I think our disagreement here is partly subjective—over what a bad funding landscape is, and also the right degree of emphasis to put on rising standards vs less funding.
(9) EA Funds not posting reports or having public metrics of successes. Per our internal back-and-forth, we’ve clarified that we mean reports of success or public metrics of success. We didn’t view reports on payouts as evidence of success, since payouts are a cost, not the desired end goal in itself. This contrasts with reports on output (e.g. a community building grant actually leading to increased engagement on XYZ engagement metrics) or, much more preferably, reports on impact (e.g. those XYZ engagement metrics leading to actual money donated to GiveWell, from which we can infer that X lives were saved). Speaking for my own organization, I don’t think the people funding our regranting budgets would be happy if I reported the mere spending as evidence of success.
(OVERALL) For what it’s worth, I’m happy to agree to disagree, and call it a day. Both your team and mine are busy with our actual work of research/grantmaking/etc, and I’m not sure if further back and forth will be particularly productive, or a good use of my time or yours.
I’m going to butt in with some quick comments, mostly because:
I think it’s pretty important to make sure the report isn’t causing serious misunderstandings
and because I think it can be quite stressful for people to respond to (potentially incorrect) criticisms of their projects — or to content that seem to misrepresent their project(s) — and I think it can help if someone else helps disentangle/clarify things a bit. (To be clear, I haven’t run this past Linch and don’t know if he’s actually finding this stressful or the like. And I don’t want to discourage critical content or suggest that it’s inherently harmful; I just think external people can help in this kind of discussion.)
I’m sharing comments and suggestions below, using your (Joel’s) numbering. (In general, I’m not sharing my overall views on EA Funds or the report. I’m just trying to clarify some confusions that seem resolvable, based on the above discussion, and suggest changes that I hope would make the report more useful.)
(2) Given that apparently the claim that “CEA has had to step in and provide support” for EA Funds is at best “technically true but misleading”, it seems good to in fact remove it from the report (or keep it in but immediately and explicitly flag that this seems likely misleading and link Linch’s comment) — you said you’re happy to do this, and I’d be glad to see it actually removed.
(3) The report currently concludes that would-be grantees “wait an unreasonable amount of time before knowing their grant application results.” Linch points out that other grantmakers tend to have similar or longer timelines, and you don’t seem to disagree (but argue that it’s important to compare the timelines to what EA Funds sets as the expectation for applicants, instead of comparing them to other grantmakers’ timelines).
Given that, I’d suggest replacing “unreasonably long” (which implies a criticism of the length itself) with something like “longer than what the website/communications with applicants suggest” (which seems like what you actually believe) everywhere in the report.
(9) The report currently states (or suggests) that EA Funds doesn’t post reports publicly. Linch points out that they do post public payout reports. It seems like you’re mostly disagreeing about the kind of reports that should be shared.[3]
Given that this is the case, I think you should clarify this in the report (which currently seems to mislead readers into believing that EA Funds doesn’t actually post any public reports), e.g. by replacing “EA Funds [doesn’t post] reports or [have] public metrics of success” with “EA Funds posts public payout reports like this, but doesn’t have public reports about successes achieved by their grantees.”
(5), (6), (8) (and (1)) There are a bunch of disagreements about whether what’s described as views of “EA Funds leadership” in the report is an accurate representation of the views.
(1) In general, Linch — who has first-hand knowledge — points out that these positions are from “notes taken from a single informal call with the EA Funds project lead” and that the person in question disagrees with “the characterization of almost all of their comments.” (Apparently the phrase “EA Funds leadership” was used to avoid criticizing someone personally and to preserve anonymity.)
You refer to the notes a lot, explaining that the views in the report are backed by the notes from the call and arguing that one should generally trust notes like this more than someone’s recollection of a conversation.[1] Whether or not the notes are more accurate than the project lead’s recollection of the call, it seems pretty odd to view the notes as a stronger authority on the views of EA Funds than what someone from EA Funds is explicitly saying now, personally and explicitly. (I.e. what matters is whether a statement is true, not whether it was said in a call.)
You might think that (A) Linch is mistaken about what the project lead thinks (in which case I think the project lead will probably clarify), or (B) that (some?) people at EA Funds have views that they disclosed in the call (maybe because the call was informal and they were more open with their views) but are trying to hide or cover up now — or that what was said in the call is indirect evidence for the views (that are now being disavowed). If (B) is what you believe, I think you should be explicit about that. If not, I think you should basically defer to Linch here.
As a general rule, I suggest at least replacing any instance of “EA Funds leadership [believes]” with something like “our notes from a call with someone involved in running EA Funds imply that they think...” and linking Linch’s comment for a counterpoint.
Specific examples:
(5) Seems like Linch explicitly disagrees with the idea that EA Funds dismisses the value of prioritization research, and points out that EAIF has given large grants to relevant work from Rethink Priorities.
Given this, I think you should rewrite statements in the report that are misleading. I also think you should probably clarify that EA Funds has given funding to Rethink Priorities.[2]
Also, I’m not as confident here, but it might be good to flag the potential for ~unconscious bias in the discussions of the value of cause prio research (due to the fact that CEARCH is working on cause prioritization research).
(6) Whatever was said in the conversation notes, it seems that EA Funds [leadership] does in fact believe that “there is more uncertainty now with [their] funding compared to other points in time.” Seems like this should be corrected in the report.
(8) Again, what matters isn’t what was said, but what is true (and whether the report is misleading about the truth). Linch seems to think that e.g. the statement about coordination is misleading.
I also want to say that I appreciate the work that has gone into the report and got value from e.g. the breakdown of quantitative data about funding — thanks for putting that together.
And I want to note potential COIs: I’m at CEA (although to be clear I don’t know if people at CEA agree with my comment here), briefly helped evaluate LTFF grants in early 2022, and Linch was my manager when I was a fellow at Rethink Priorities in 2021.
At the end of the day, I apologize if the publication was not in line with what EA Funds would have wanted—I agree it’s probably a difference in norms. In a professional context, I’m generally comfortable with people relaying that I said X in private, unless there was an explicit request not to share
It is clear from Linch’s comment that he would have liked to see a draft of the report before it was published. Did you underestimate the interest of EA Funds in reviewing the report before its publication, or did you think their interest in reviewing the report was not too relevant? I hope the former.
(I work for EA Funds, including EAIF, helping out with public communications among other work. I’m not a grantmaker on EAIF and I’m not responsible for any decision on any specific EAIF grant).
Hi. Thanks for writing this. I appreciate the work you put into this, even though I strongly disagree with the framing of most of the parts of the doc that I feel informed enough to opine on, as well as with most of the object-level claims.
Ultimately, I think the parts of your report about EA Funds are mostly incorrect or substantively misleading, given the best information I have available. But I think it’s possible I’m misunderstanding your position or I don’t have enough context. So please read the following as my own best understanding of the situation, which can definitely be wrong. But first, onto the positives:
I appreciate that the critical points in the doc are made as technical critiques, rather than paradigmatic ones. Technical critiques are ones that people are actually compelled to respond to, and can actually compel action (rather than just making people feel vaguely bad/smug without compelling any change).
The report has many numerical/quantitative details. In theory, those are easier to falsify.
The report appears extensive and must have taken a long time to write.
There are also some things the report mentioned that we have been tracking ourselves, where I believe we have substantial room for improvement:
Our grant evaluation process is still slower than we would like.
While our process is broadly faster than that of other comparable funds I’m aware of (see this comment), I still think we have substantial room for improvement.
Our various subfunds, particularly EAIF, have at points been understaffed and under-capacity.
While I strongly disagree with the assessment that hiring more grantmakers is “fairly straightforward” (bad calls with grant evaluations are very costly, for reasons including but not limited to insufficient attention to adversarial selection; empirically most EA grantmaking organizations have found it very difficult to hire), I do think on the margin we can do significantly more on hiring.
Our limited capacity has made it difficult for us to communicate and/or coordinate with all the other stakeholders in the system, so we’re probably missing out on key high-EV opportunities (eg several of our existing collaborations in AI safety have started later than they counterfactually could have, and we haven’t been able to schedule time to fly out to coordinate with folks in London/DC/Netherlands/Sweden).
One of the reasons I came on to EA Funds full-time is to help communicate with various groups.
Now, onto the disagreements:
Procedurally:
I was surprised that so many of the views ascribed to “EA Funds’ leadership” were from notes taken from a single informal call with the EA Funds project lead, which you did not confirm was okay to share publicly.
They said that they usually explicitly request privacy before quoting them publicly, but are not sure if they did so in this instance.
They were also surprised that there was a public report out at all.
My best guess is that there was a misunderstanding that arose from a norm difference: you come from the expectation that professional meetings are public unless explicitly stated otherwise, whereas the norm that I (and I think most EAs?) are more used to is that 1-1 meetings are private unless explicitly stated otherwise.
They also disagree with the characterization of almost all of their comments (will say more below), which I think speaks to the epistemic advantages of confirming before publicly attributing comments made by someone else.
I’d have found it helpful if you shared a copy of the post before making it public.
We could’ve corrected most misunderstandings.
If you were too busy for private corrections, I could’ve at least written this response earlier.
Many claims were simply false (more details below). Having a fact-checking process might be useful going forwards.
Substantively:
Where the report claims that “CEA has had to step in and provide support in evaluating EAIF grants for them”, I believe this is false or at least substantively misleading.
The closest thing I can think of is that we ask CEA Comm Health for help in reviewing comm health issues with our grants (which, as I understand it, is part of their explicit job duties, and both sides are happy with the arrangement).
It’s possible your source misunderstood the relationship and thought the Comm Health work was supererogatory or accidental?
We frequently ask technical advisors for advice on project direction in a less institutionalized capacity as well[1]. I think “step in” conveys the wrong understanding, as most of the grant evaluation is still done by the various funds.
(To confirm this impression, we checked with multiple senior people involved with CEA’s work; however, this was not an exhaustive sweep and it’s definitely possible that my impression is incorrect).
While it is true that we’re much slower than we would like, it seems very unreasonable to single out EA Funds grantmaking as “unreasonably long” when other grantmakers are as slow or slower.
See e.g. Abraham Rowe’s comment here.
“Not weighing in on LTFF specifically, but from having done a lot of traditional nonprofit fundraising, I’d guess two months is a faster response time than 80% of foundations/institutional funders, and one month is probably faster than like 95%+. My best guess at the average for traditional nonprofit funders is more like 3-6 months. I guess my impression is that even in the worst cases, EA Funds has been operating pretty well above average compared to the traditional nonprofit funding world (though perhaps that isn’t the right comparison). Given that LTFF is funding a lot of research, 2 months is almost certainly better than most academic grants.
My impression from what I think is a pretty large sample of EA funders and grants is also that EA Funds is the fastest turnaround time on average compared to the list you mention [Editor’s note: “Open Phil, SFF, Founders Pledge, and Longview”] ([with] exceptions in some cases in both directions for EA Funds and other funders)”
I also sanity-checked with both Google Search and GPT-4.[2]
Broadly, I’m aware that other people on the forum also believe that we’re slow, but I think most people who believe this do so because:
We have our own aspirations to be faster, and we try to do so.
They think from a first-principles perspective that grantmakers “can” be faster.
We talk about our decisions and process very publicly, and so become more of an easy target for applicants’ grievances.
But while I understand and sympathize with other people’s frustrations[3], it is probably not factually true that we’re slower in relative terms than other organizations, and it’s odd to single out EA Funds here.
When your report said, “Critically, the expert reports that another major meta donor found EA Funds leadership frustrating to work with, and so ended up disengaging from further meta grantmaking coordination” my guess is that the quoted position is not true.
I feel like we’re keeping tabs on all the major donors (OP, SFF, Longview, etc.). So I’m not sure who they could possibly be referring to.
Though I guess it’s possible that there is a major donor that’s so annoyed with us that they made efforts to hide themselves from us so we haven’t even heard about them.
But I think it’s more likely that the person in question isn’t a major donor.
To be clear, any donor feeling frustrated with us is regrettable. While it is true that not all donors can or should want to work with us (e.g. due to sufficiently differing cause prioritization or empirical worldviews), it is still regrettable that people have an emotionally frustrating experience.
Your report says that EA Funds leadership was “strongly dismissing the value of prioritization research, where other grantmakers generally expressed higher uncertainty”, but this is false.
I want to be clear that EA Funds has historically been, and currently is, quite positive on cause prioritization in general (though of course specific work may be lower quality, or good work that’s not cause prioritization may be falsely labeled as cause prioritization).
By revealed preferences, EAIF has given very large grants to worldview investigations and moral weights work at Rethink Priorities.
By stated preferences, “research that aids prioritization across different cause areas” was listed as one of the central examples of things that EAIF would be excited to fund.
My understanding is that the best evidence you have for this view is that EA Funds leadership “would consider less than half of what [Rethink Priorities] does cause prioritization.”
I’m confused why you think the quoted statement is surprising or good evidence, given that the stated claim is just obviously true. E.g., The Rodenticide Reduction Sequence, Cultured meat: A comparison of techno-economic analyses, and Exposure to Lead Paint in Low- and Middle-Income Countries (to give three examples of work that I have more than a passing familiarity with) are much more about intervention prioritization than intercause prioritization. An example of the latter is the moral weights work by Rethink Priorities’ Worldview Investigations team.
My best guess is that this is just a semantic misunderstanding, where EA Funds’ project lead was trying to convey a technical point about the difference between intercause prioritization vs intervention prioritization, whereas you understood his claim as an emotive position of “boo cause prioritization”.
Your report states that “EA Funds leadership doesn’t believe that there is more uncertainty now with EA Fund’s funding compared to other points in time.” This is clearly false.
I knew coming on to EA Funds that the job would have greater uncertainty than other jobs I’ve had in the past, and I believe this was adequately communicated to me.
We think about funding uncertainty a lot. EA Funds’ funding has always been uncertain, and things have gotten worse since Nov 2022.
Nor would it be consistent with either our stated or revealed preferences.
Our revealed preference is that we spend substantially more staff time on fundraising than we have in the past.
Our stated preferences include “I generally expect our funding bar to vary more over time and to depend more on individual donations than it has historically.” and “LTFF and EAIF are unusually funding-constrained right now”
The last one was even the title of a post with 4000+ views!
I don’t have a good idea for how much more unambiguous we could be.
Semantically:
I originally wanted to correct misunderstandings and misrepresentations of EA Funds’ positions more broadly in the report. However, I think there were just a lot of misunderstandings overall, so I think it’s simpler for people to just assume I contest almost every characterization of the form “EA Funds believes X”. A few select examples:
When your report claimed “leadership is of the view that the current funding landscape isn’t more difficult for community builders” I a) don’t think we’ve said that, and b) to the extent we believe it, it’s relative to eg 2019; it’d be false compared to the 2022 era of unjustly excessive spending.
“The EA Funds chair has clarified that EAIF would only really coordinate with OP, since they’re reliably around; only if the [Meta-Charity Funders] was around for some time, would EA Funds find it worth factoring into their plans. ”
To clarify, at the time the fund chair was unsure if MCF was only going to have one round. If they only have one round, it wouldn’t make sense to change EA Funds’ strategy based on that; if they have multiple rounds (e.g. more than 2), it could be worth factoring in. The costs of coordination are nontrivial.
It’s also worth noting that the fund chair had 2 calls with MCF and passed on various grants that they thought MCF might be interested in evaluating, which some people may consider coordination.
We’ve also coordinated more extensively with other non-OP funders, and have plans in the works for other collaborations with large funders.
“In general, they don’t think that other funders outside of OP need to do work on prioritization, and are in general sceptical of such work. ”
AFAIK nobody at EA Funds believes this.
[EA funds believes] “so if EA groups struggle to raise money, it’s simply because there are more compelling opportunities available instead.”
The statement seems kind of conceptually confused? Funders should always be trying to give to the most cost-effective projects on the margin.
The most charitable position that’s similar to the above I could think of is that some people might believe that
“most community-building projects that are not funded now aren’t funded because of constraints on grantmaker capacity,” i.e. that grantmakers make poor decisions
Note that the corollary to the above is that many of the community building projects that are funded should not have been funded.
I can’t speak to other people at EA Funds, but my own best guess is that this is not true (for projects that people online are likely to have heard of).
I’m more optimistic about boutique funding arrangements for projects within people’s networks that are unlikely to have applied to big funders, or people inspiring those around them to create new projects.
If projects aren’t funded, the biggest high-level reason is that there are limited resources in the world in general, and in EA specifically. You might additionally also believe that meta-EA in general is underfunded relative to object-level programs.
(Minor): To clarify “since OP’s GHW EA team is focusing on effective giving, EAIF will consider this less neglected”: we should caveat this by saying that less neglected doesn’t necessarily mean less cost-effective than other plausible things for EAIF to fund.
Re: “EA Funds not posting reports or having public metrics of success”
We do post public payout reports.
My understanding is that you are (understandably) upset that we don’t have clear metrics and cost-effectiveness analyses written up.
I think this is a reasonable and understandable complaint, and we have indeed gotten this feedback before from others and have substantial room to improve here.
However, I think many readers might interpret the statement as something stronger, e.g. interpreting it as us not posting reports or writing publicly much at all.
As a matter of practice, we write a lot more about what we fund and our decision process than any other EA funder I’m aware of (and likely more than most other non-EA funders). I think many readers may get the wrong impression of our level of transparency from that comment.
Note to readers: I reached out to Joel to clarify some of these points before posting. I really appreciate his prompt responses! Due to time constraints, I decided to not send him a copy of this exact comment before posting publicly.
I personally have benefited greatly from talking to specialist advisors in biosecurity.
From GPT4
“The median time to receive a response for an academic grant can vary significantly depending on the funding organization, the field of study, and the specific grant program. Generally, the process can take anywhere from a few months to over a year. ”
“The timeline for receiving a response on grant applications can vary across different fields and types of grants, but generally, the processes are similar in length to those in the academic and scientific research sectors.”
“Smaller grants in this field might be decided upon quicker, potentially within 3 to 6 months [emphasis mine], especially if they require less funding or involve fewer regulatory hurdles.”
Being funded by grants kind of sucks as an experience compared to e.g. employment; I dislike adding to such frustrations. There are also several cases I’m aware of where counterfactually impactful projects were not taken due to funders being insufficiently able to fund things in time, in some of those incidences I’m more responsible than anybody else.
Hi Linch,
Thanks for engaging. I appreciate that we can have a fairly object-level disagreement over this issue; it’s not personal, one way or another.
Meta point to start: We do not make any of these criticisms of EA Funds lightly, and when we do, it’s against our own interests, because we ourselves are potentially dependent on EAIF for future funding.
To address the points brought up, generally in the order that you raised them:
(1) On the fundamental matter of publication. I would like to flag out that, from checking the email chain plus our own conversation notes (both verbatim and cleaned-up), there was no request that this not be publicized.
For all our interviews, whenever someone flagged out that X data or Y document or indeed the conversation in general shouldn’t be publicized, we respected this and did not do so. In the public version of the report, this is most evident in our spreadsheet where a whole bunch of grant details have been redacted; but more generally, anyone with the “true” version of the report shared with the MCF leadership will also be able to spot differences. We also redacted all qualitative feedback from the community survey, and by default anonymized all expert interviewees who gave criticisms of large grantmakers, to protect them from backlash.
I would also note that we generally attributed views to, and discussed, “EA Leadership” in the abstract, both because we didn’t want to make this a personal criticism, and also because it afforded a degree of anonymity.
At the end of the day, I apologize if the publication was not in line with what EA Funds would have wanted—I agree it’s probably a difference in norms. In a professional context, I’m generally comfortable with people relaying that I said X in private, unless there was an explicit request not to share (e.g. I was talking to a UK-based donor yesterday, and I shared a bunch of my grantmaking views. If he wrote a post on the forum summarizing the conversations he had with a bunch of research organizations and donor advisory orgs, including our own, I wouldn’t object). More generally, I think if we have some degree of public influence (including by the money we control) it would be difficult from the perspective of public accountability if “insiders” such as ourselves were unwilling to share with the public what we think or know.
(2) For the issue of CEA stepping in: In our previous conversation, you relayed that you asked a senior person at CEA and they in turn said that “they’re aware of some things that might make the statement technically true but misleading, and they are not aware of anything that would make the statement non-misleading, although this isn’t authoritative since many things happened at CEA”. For the record, I’m happy to remove this since the help/assistance, if any, doesn’t seem too material one way or another.
(3) For whether it’s fair to characterize EAIF’s grant timelines as unreasonably long. As previously discussed, I think the relevant metric is EAIF’s own declared timetable (“The Animal Welfare Fund, Long-Term Future Fund and EA Infrastructure Fund aim to respond to all applications in 2 months and most applications in 3 weeks.”). This is because organizations and individuals make plans based on when they expect to get an answer—when to begin applying; whether to start or stop projects; whether to go find another job; whether to hire or fire; whether to reach out to another grantmaker who isn’t going to support you until and unless you have already exhausted the primary avenues of potential funding.
(4) The issue of the major donor we relayed was frustrated/turned off. You flag out that you’re keeping tabs on all the major donors, and so don’t think the person in question is major. While I agree that it’s somewhat subjective—it’s also true that this is a HNWI who, beyond their own giving, is also sitting on the legal or advisory boards of many other significant grantmakers and philanthropic outfits. Also, knowledgeable EAs in the space have generally characterized this person to me as an important meta funder (in the context of my own organization then thinking of fundraising, and being advised as to whom to approach). So even if they aren’t major in the sense that OP (or EA Funds) is, they could reasonably be considered fairly significant. In any case, the discussion is backwards, I think—I agree that they don’t play as significant a role in the community right now (and so your assessment of them as non-major is reasonable), but that would be because of the frustration they have had with EA Funds (and, to be fair, the EA community in general, I understand). So perhaps it’s best to understand this as potentially vs currently major.
(5) On whether it’s fair to characterize EA Funds leadership as being strongly dismissive of cause prioritization. We agree that grants have been made to RP; so the question is cause prioritization outside OP and OP-funded RP. Our assessment of EA Funds’ general scepticism of prioritization was based, among other things, on what we reported in the previous section: “They believe cause prioritization is an area that is talent constrained, and there aren’t a lot of people they feel great giving to, and it’s not clear what their natural pay would be. They do not think of RP as doing cause prioritization, and though in their view RP could absorb more people/money in a moderately cost-effective way, they would consider less than half of what they do cause prioritization. In general, they don’t think that other funders outside of OP need to do work on prioritization, and are in general sceptical of such work.” In your comment, you dispute that the bolded part in particular is true, saying “AFAIK nobody at EA Funds believes this.”
We have both verbatim and cleaned up/organized notes on this (n.b. we shared both with you privately). So it appears we have a fundamental disagreement here (and also elsewhere) as to whether what we noted down/transcribed is an accurate record of what was actually said.
TLDR: Fundamentally, I stand by the accuracy of our conversation notes.
(a) Epistemically, it’s more likely that one doesn’t remember what one said previously vs the interviewer (if in good faith) catastrophically misunderstanding and recording down something that wholesale wasn’t said at all (as opposed to a more minor error—we agree that that can totally happen; see below)
(b) From my own personal perspective—I used to work in government and in consulting (for governments). It was standard practice to have notes of meetings, as made by junior staffers and then submitted to more senior staff for edits and approval. Nothing resembling this happened to either me or anyone else (i.e. a total misunderstanding tantamount to fabrication, in saying that XYZ was said when nothing of the sort took place).
(c) My word does not need to be taken for this. We interviewed other people, and I’m beginning to reach out to them again to check that our notes matched what they said. One has already responded (the person we labelled Expert 5 on Page 34 of the report); they said “This is all broadly correct” but requested we make some minor edits to the following paragraphs (changes indicated in [brackets]):
Expert 5: Reports both substantive and communications-related concerns about EA Funds leadership.
For the latter, the expert reports both himself and others finding communications with EA Funds leadership difficult and the conversations confusing.
For the substantive concerns – beyond the long wait times EAIF imposes on grantees, the expert was primarily worried that EA Funds leadership has been unreceptive to new ideas and that they are unjustifiably confident that EA Funds is fundamentally correct in its grantmaking decisions. In particular, it appears to the expert that EA Funds leadership does not believe that additional sources of meta funding would be useful for non-EAIF grants [phrase added] – they believe that projects unfunded by EAIF do not deserve funding at all (rather than some projects perhaps not being the right fit for the EAIF, but potentially worth funding by other funders with different ethical worldviews, risk aversion or epistemics). Critically, the expert reports that another major meta donor found EA Funds leadership frustrating to work with, and this likely is one reason they ended up disengaging from further meta grantmaking coordination [replaced “and so ended up disengaging from further meta grantmaking coordination”].
My even-handed interpretation of this overall situation (trying to be generous to everyone) is that what was reported here (“In general, they don’t think that other funders outside of OP need to do work on prioritization”) was something the EA Funds interviewee said relatively casually (not necessarily a deep and abiding view, and so not something worth remembering) - perhaps indicative of scepticism of a lot of cause prioritization work, but not literally thinking nothing outside OP/RP is worth funding. (We actually do agree with this scepticism, up to an extent.)
(6) On whether our statement that “EA Funds leadership doesn’t believe that there is more uncertainty now with EA Fund’s funding compared to other points in time” is accurate. You say that this is clearly false. Again, I stand by the accuracy of our conversation notes. And in fact, I actually do personally and distinctly remember this particular exchange, because it stood out, as did the exchange that immediately followed, on whether OP’s use of the fund-matching mechanism creates more uncertainty.
My generous interpretation of this situation is, again, some things may be said relatively casually, but may not be indicative of deep, abiding views.
(8) For the various semantic disagreements. Some of it we discussed above (e.g. the OP cause prioritization stuff); for the rest -
On whether this part is accurate: “Leadership is of the view that the current funding landscape isn’t more difficult for community builders”. Again, we do hold that this was said, based on the transcripts. And again, to be even handed, I think your interpretation (b) is right—probably your team is thinking of the baseline as 2019, while we were thinking mainly of 2021-now.
On whether this part is accurate: “The EA Funds chair has clarified that EAIF would only really coordinate with OP, since they’re reliably around; only if the [Meta-Charity Funders] was around for some time, would EA Funds find it worth factoring into their plans.” I don’t think we disagree too much, if we agree that EA Funds’ position is that coordination is only worthwhile if the counterpart is around for a bit. Otherwise, it’s just some subjective disagreement on what coordination is or what significant degrees of it amount to.
On this statement: “[EA funds believes] “so if EA groups struggle to raise money, it’s simply because there are more compelling opportunities available instead.”
In our discussion, I asked about the community building funding landscape being worse; the interviewee disagreed with this characterization, and started discussing how it’s more that standards have risen (which we agree is a factor). The issue is that the other factor, objectively less funding being available, was not brought up, even though it is, in our view, the dominant factor (and if you asked community builders, this would be all they talk about). I think our disagreement here is partly subjective—over what a bad funding landscape is, and also the right degree of emphasis to put on rising standards vs less funding.
(9) EA Funds not posting reports or having public metrics of successes. Per our internal back-and-forth, we’ve clarified that we mean reports of success or having public metrics of success. We didn’t view reports on payouts to be evidence of success, since payouts are a cost, and not the desired end goal in itself. This contrasts with reports on output (e.g. a community building grant actually leading to increased engagement on XYZ engagement metrics) or, much more preferably, reports on impact (e.g. those XYZ engagement metrics leading to actual money donated to GiveWell, from which we can infer that X lives were saved). Like, speaking for my own organization, I don’t think the people funding our regranting budgets would be happy if I reported the mere spending as evidence of success.
(OVERALL) For what it’s worth, I’m happy to agree to disagree, and call it a day. Both your team and mine are busy with our actual work of research/grantmaking/etc, and I’m not sure if further back and forth will be particularly productive, or a good use of my time or yours.
I’m going to butt in with some quick comments, mostly because:
I think it’s pretty important to make sure the report isn’t causing serious misunderstandings
and because I think it can be quite stressful for people to respond to (potentially incorrect) criticisms of their projects — or to content that seems to misrepresent their project(s) — and I think it can help if someone else helps disentangle/clarify things a bit. (To be clear, I haven’t run this past Linch and don’t know if he’s actually finding this stressful or the like. And I don’t want to discourage critical content or suggest that it’s inherently harmful; I just think external people can help in this kind of discussion.)
I’m sharing comments and suggestions below, using your (Joel’s) numbering. (In general, I’m not sharing my overall views on EA Funds or the report. I’m just trying to clarify some confusions that seem resolvable, based on the above discussion, and suggest changes that I hope would make the report more useful.)
(2) Given that the claim that “CEA has had to step in and provide support” to EA Funds is apparently likely “technically misleading”, it seems good to actually remove it from the report (or keep it but immediately and explicitly flag that it seems likely misleading, linking Linch’s comment). You said you’re happy to do this, and I’d be glad to see it actually removed.
(3) The report currently concludes that would-be grantees “wait an unreasonable amount of time before knowing their grant application results.” Linch points out that other grantmakers tend to have similar or longer timelines, and you don’t seem to disagree (but argue that it’s important to compare the timelines to what EA Funds sets as the expectation for applicants, instead of comparing them to other grantmakers’ timelines).
Given that, I’d suggest replacing “unreasonably long” (which implies a criticism of the length itself) with something like “longer than what the website/communications with applicants suggest” (which seems like what you actually believe) everywhere in the report.
(9) The report currently states (or suggests) that EA Funds doesn’t post reports publicly. Linch points out that they “do post public payout reports.” It seems like you’re mostly disagreeing about the kind of reports that should be shared.[3]
Given that this is the case, I think you should clarify this in the report (which currently seems to mislead readers into believing that EA Funds doesn’t actually post any public reports), e.g. by replacing “EA Funds [doesn’t post] reports or [have] public metrics of success” with “EA Funds posts public payout reports like this, but doesn’t have public reports about successes achieved by their grantees.”
(5), (6), (8) (and (1)) There are a bunch of disagreements about whether what’s described as views of “EA Funds leadership” in the report is an accurate representation of the views.
(1) In general, Linch — who has first-hand knowledge — points out that these positions are from “notes taken from a single informal call with the EA Funds project lead” and that the person in question disagrees with “the characterization of almost all of their comments.” (Apparently the phrase “EA Funds leadership” was used to avoid criticizing someone personally and to preserve anonymity.)
You refer to the notes a lot, explaining that the views in the report are backed by the notes from the call and arguing that one should generally trust notes like this more than someone’s recollection of a conversation.[1] Whether or not the notes are more accurate than the project lead’s recollection of the call, it seems pretty odd to treat the notes as a stronger authority on the views of EA Funds than what someone from EA Funds is now saying, personally and explicitly. (I.e. what matters is whether a statement is true, not whether it was said in a call.)
You might think that (A) Linch is mistaken about what the project lead thinks (in which case I expect the project lead will clarify), or (B) that (some?) people at EA Funds disclosed views in the call (perhaps because it was informal and they were more open) that they are now trying to hide or cover up — or that what was said in the call is indirect evidence for views that are now being disavowed. If (B) is what you believe, I think you should be explicit about that. If not, I think you should basically defer to Linch here.
As a general rule, I suggest at least replacing any instance of “EA Funds leadership [believes]” with something like “our notes from a call with someone involved in running EA Funds imply that they think...” and linking Linch’s comment for a counterpoint.
Specific examples:
(5) Seems like Linch explicitly disagrees with the idea that EA Funds dismisses the value of prioritization research, and points out that EAIF has given large grants to relevant work from Rethink Priorities.
Given this, I think you should rewrite statements in the report that are misleading. I also think you should probably clarify that EA Funds has given funding to Rethink Priorities.[2]
Also, I’m not as confident here, but it might be good to flag the potential for ~unconscious bias in the discussions of the value of cause prio research (due to the fact that CEARCH is working on cause prioritization research).
(6) Whatever was said in the conversation notes, it seems that EA Funds [leadership] does in fact believe that “there is more uncertainty now with [their] funding compared to other points in time.” It seems like this should be corrected in the report.
(8) Again, what matters isn’t what was said, but what is true (and whether the report is misleading about the truth). Linch seems to think that e.g. the statement about coordination is misleading.
I also want to say that I appreciate the work that has gone into the report and got value from e.g. the breakdown of quantitative data about funding — thanks for putting that together.
And I want to note potential COIs: I’m at CEA (although to be clear I don’t know if people at CEA agree with my comment here), briefly helped evaluate LTFF grants in early 2022, and Linch was my manager when I was a fellow at Rethink Priorities in 2021.
E.g.
In relation to this claim: “They do not think of RP as doing cause prioritization, and though in their view RP could absorb more people/money in a moderately cost-effective way, they would consider less than half of what they do cause prioritization.”
“...we mean reports of success or public metrics of success. We didn’t view reports on payouts as evidence of success, since payouts are a cost, not the desired end goal in themselves. This contrasts with reports on outputs (e.g. a community building grant actually leading to increased engagement on XYZ engagement metrics) or, much more preferably, reports on impact (e.g. those XYZ engagement metrics leading to actual money donated to GiveWell, from which we can infer that X lives were saved).”
Thanks for the clarifications, Joel.
It is clear from Linch’s comment that he would have liked to see a draft of the report before it was published. Did you underestimate EA Funds’ interest in reviewing the report before publication, or did you think their interest in reviewing it was not too relevant? I hope the former.