I’m going to butt in with some quick comments, mostly because:
I think it’s pretty important to make sure the report isn’t causing serious misunderstandings
and because I think it can be quite stressful for people to respond to (potentially incorrect) criticisms of their projects — or to content that seems to misrepresent their project(s) — and I think it can help if someone else helps disentangle/clarify things a bit. (To be clear, I haven’t run this past Linch and don’t know if he’s actually finding this stressful or the like. And I don’t want to discourage critical content or suggest that it’s inherently harmful; I just think external people can help in this kind of discussion.)
I’m sharing comments and suggestions below, using your (Joel’s) numbering. (In general, I’m not sharing my overall views on EA Funds or the report. I’m just trying to clarify some confusions that seem resolvable, based on the above discussion, and suggest changes that I hope would make the report more useful.)
(2) Given that the claim that “CEA has had to step in and provide support” for EA Funds is apparently likely “technically misleading”, it seems good to in fact remove it from the report (or keep it in but immediately and explicitly flag that it seems likely misleading and link Linch’s comment) — you said you’re happy to do this, and I’d be glad to see it actually removed.
(3) The report currently concludes that would-be grantees “wait an unreasonable amount of time before knowing their grant application results.” Linch points out that other grantmakers tend to have similar or longer timelines, and you don’t seem to disagree (but argue that it’s important to compare the timelines to what EA Funds sets as the expectation for applicants, instead of comparing them to other grantmakers’ timelines).
Given that, I’d suggest replacing “unreasonably long” (which implies a criticism of the length itself) with something like “longer than what the website/communications with applicants suggest” (which seems like what you actually believe) everywhere in the report.
(9) The report currently states (or suggests) that EA Funds doesn’t post reports publicly. Linch points out that they “do post public payout reports.” It seems like you’re mostly disagreeing about the kind of reports that should be shared.[3]
Given that this is the case, I think you should clarify this in the report (which currently seems to mislead readers into believing that EA Funds doesn’t actually post any public reports), e.g. by replacing “EA Funds [doesn’t post] reports or [have] public metrics of success” with “EA Funds posts public payout reports like this, but doesn’t have public reports about successes achieved by their grantees.”
(5), (6), (8) (and (1)) There are a bunch of disagreements about whether what’s described as views of “EA Funds leadership” in the report is an accurate representation of the views.
(1) In general, Linch — who has first-hand knowledge — points out that these positions are from “notes taken from a single informal call with the EA Funds project lead” and that the person in question disagrees with “the characterization of almost all of their comments.” (Apparently the phrase “EA Funds leadership” was used to avoid criticizing someone personally and to preserve anonymity.)
You refer to the notes a lot, explaining that the views in the report are backed by the notes from the call and arguing that one should generally trust notes like this more than someone’s recollection of a conversation.[1] Whether or not the notes are more accurate than the project lead’s recollection of the call, it seems pretty odd to view the notes as a stronger authority on the views of EA Funds than what someone from EA Funds is now saying, personally and explicitly. (I.e. what matters is whether a statement is true, not whether it was said in a call.)
You might think that (A) Linch is mistaken about what the project lead thinks (in which case I think the project lead will probably clarify), or (B) that (some?) people at EA Funds have views that they disclosed in the call (maybe because the call was informal and they were more open with their views) but are trying to hide or cover up now — or that what was said in the call is indirect evidence for the views (that are now being disavowed). If (B) is what you believe, I think you should be explicit about that. If not, I think you should basically defer to Linch here.
As a general rule, I suggest at least replacing any instance of “EA Funds leadership [believes]” with something like “our notes from a call with someone involved in running EA Funds imply that they think...” and linking Linch’s comment for a counterpoint.
Specific examples:
(5) Seems like Linch explicitly disagrees with the idea that EA Funds dismisses the value of prioritization research, and points out that EAIF has given large grants to relevant work from Rethink Priorities.
Given this, I think you should rewrite statements in the report that are misleading. I also think you should probably clarify that EA Funds has given funding to Rethink Priorities.[2]
Also, I’m not as confident here, but it might be good to flag the potential for ~unconscious bias in the discussions of the value of cause prio research (due to the fact that CEARCH is working on cause prioritization research).
(6) Whatever was said in the conversation notes, it seems that EA Funds [leadership] does in fact believe that “there is more uncertainty now with [their] funding compared to other points in time.” Seems like this should be corrected in the report.
(8) Again, what matters isn’t what was said, but what is true (and whether the report is misleading about the truth). Linch seems to think that e.g. the statement about coordination is misleading.
I also want to say that I appreciate the work that has gone into the report and got value from e.g. the breakdown of quantitative data about funding — thanks for putting that together.
And I want to note potential COIs: I’m at CEA (although to be clear I don’t know if people at CEA agree with my comment here), briefly helped evaluate LTFF grants in early 2022, and Linch was my manager when I was a fellow at Rethink Priorities in 2021.
[1] We have both verbatim and cleaned up/organized notes on this (n.b. we shared both with you privately). So it appears we have a fundamental disagreement here (and also elsewhere) as to whether what we noted down/transcribed is an accurate record of what was actually said.
TLDR: Fundamentally, I stand by the accuracy of our conversation notes.
Epistemically, it’s more likely that one doesn’t remember what one said previously vs the interviewer (if in good faith) catastrophically misunderstanding and recording down something that wholesale wasn’t said at all (as opposed to a more minor error—we agree that that can totally happen; see below) …
[2] In relation to this claim: “They do not think of RP as doing cause prioritization, and though in their view RP could absorb more people/money in a moderately cost-effective way, they would consider less than half of what they do cause prioritization.”
[3] “...we mean reports of success or having public metrics of success. We didn’t view reports on payouts to be evidence of success, since payouts are a cost, and not the desired end goal in itself. This contrasts with reports on output (e.g. a community building grant actually leading to increased engagement on XYZ engagement metrics) or much more preferably, report on impact (e.g. and those XYZ engagement metrics leading to actual money donated to GiveWell, from which we can infer that X lives were saved).”