Hi Simon,
I'm back to work and able to reply with a bit more detail now (though also time-constrained, as we have a lot of other important work to do this new year :)).
I still do not think any (immediate) action on our part is required. Let me lay out the reasons why:
(1) Our full process and criteria are explained here. As you seem to agree from your comment above, we need clear and simple rules for what is and what isn't included (in part because we have a very small team and need to prioritize). A very brief summary of these rules/this process: first determine which evaluators to rely on (also note our plans for this year), and then rely on their recommendations. We do not generally have the capacity to review individual charity evaluations, and would only do so, and potentially diverge from a trusted evaluator's recommendation, under exceptional circumstances. (I don't believe we have had such a circumstance this giving season, but I may misremember.)
(2) There were no strong reasons to diverge from FP's recommendation of StrongMinds at the time they recommended them (or to do an in-depth review of FP's evaluation ourselves), and I think there still aren't. As I said before, you make a few useful points in your post, but I think Matt's reaction and the subsequent discussion satisfactorily explain why Founders Pledge chose to recommend StrongMinds and why your comments don't (immediately) change their view: StrongMinds doesn't need to meet GiveWell-tier levels of confidence and easily clears FP's bar in expectation, even with the issues you mention taken into account, and nearly all the decision-relevant reasoning is already publicly available in the 2019 report and HLI's recent review. I would of course be very interested, and we could reconsider our view, if any ongoing discussion brings to light new arguments or if FP is unable to back up any claims they made, but so far I haven't seen any red or even orange flags.
(3) The above should be enough for GWWC not to prioritize any action related to StrongMinds at the moment, but I happen to have a bit more context here than usual, as I was a co-author on the 2019 FP report on StrongMinds, and none of the five issues you raise comes as a surprise or changes my view of StrongMinds very much. Very briefly on each (note: I don't have much time, will mostly leave this to Matt, and some of my knowledge may be outdated or my memory may be off):
I agree the overall quality of evidence falls far short of e.g. GiveWell's standards (cf. Matt's comments), and I would have agreed on this back in 2019. At this point, I certainly wouldn't take FP's 2019 cost-effectiveness analysis literally: I would deflate the results by quite a bit to account for quality of evidence, and I know FP have done so internally for at least the past ~2 years. (For what I mean by deflating, see the sketch after this list.) However, AFAIK such an adjustment, done reasonably, isn't enough to change the overall conclusion that StrongMinds meets the cost-effectiveness bar in wellbeing terms. I should also note that HLI's cost-effectiveness analysis seems to take into account more pieces of evidence, though I haven't reviewed it, just skimmed it.
As you say yourself, the 2019 FP report already accounted for social desirability bias to some extent, and it further highlights this bias as one of its key uncertainties (section 3.8, p. 31).
I disagree that depression is overweighted here, for various reasons, including that DALYs plausibly underweight mental health (see section 1, pp. 8-9 of the FP mental health report). Also note that HLI's recent analysis, AFAIK, doesn't rely on DALYs in any way.
I don't think the reasons StrongMinds mention for not collecting more evidence (than they already are) are as unreasonable as you seem to think. I'd need to delve more into the specifics to form a view here, but I just want to reiterate StrongMinds's first reason: running high-quality studies is generally very expensive, and may often not be the best decision for a charity from a cost-effectiveness standpoint. Even though I think the sector as a whole could probably still do with more (of the right type of) evidence generation, from my experience I would guess it's also relatively common for charities to collect more evidence (of the wrong kind) than would be optimal.
I don't like what I see in at least some of the examples of communication you give, and if I were evaluating StrongMinds currently I would certainly want to give them this feedback (in fact, I believe I did back in 2018, which I think prompted them to make some changes). However, though I'd agree that these provide some update on how thoroughly one should check claims StrongMinds makes more generally, I don't think they should meaningfully change one's view on the cost-effectiveness of StrongMinds's core work.
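To be concrete about what I mean by deflating, here is a minimal sketch of multiplicative quality-of-evidence discounts. Every number in it is a hypothetical placeholder of mine, not FP's raw estimate or actual internal adjustments:

```python
# Minimal sketch of deflating a cost-effectiveness estimate for quality
# of evidence. Every number here is a hypothetical placeholder, not a
# figure from FP's analysis.

raw_wellbys_per_dollar = 0.05  # hypothetical raw CEA output

# Multiplicative discounts for weaknesses in the evidence base, each in
# (0, 1]; a factor of 1.0 would mean "no adjustment needed".
discounts = {
    "study_quality": 0.6,        # weak controls / non-RCT evidence
    "social_desirability": 0.8,  # self-reported outcomes may be inflated
    "publication_bias": 0.9,     # published effect sizes may be inflated
}

adjusted = raw_wellbys_per_dollar
for factor in discounts.values():
    adjusted *= factor

hypothetical_bar = 0.01  # made-up cost-effectiveness bar, same units

print(f"raw:      {raw_wellbys_per_dollar:.4f} per dollar")
print(f"adjusted: {adjusted:.4f} per dollar")
print("still clears the bar:", adjusted > hypothetical_bar)
```

The point is only structural: several plausible, independent discounts compound to a substantial haircut, yet can still leave the estimate above a funder's bar.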
(4) Jeff suggested (and some others seem to like) the idea of GWWC changing its inclusion criteria and only recommending/top-rating organisations for which an up-to-date public evaluation is available. This is something we discussed internally in the lead-up to this giving season, but we decided against it, and I still feel that was and is the right decision (though I am open to further discussion/arguments):
There are only very few charities for which full, public, and up-to-date evaluations are available, and coverage for some worldviews/promising cause areas is structurally missing. In particular, there are currently hardly any full, public, and up-to-date evaluations in the mental health/subjective well-being, longtermist, and "meta" spaces. And note that, by this standard, we wouldn't be able to recommend any funds except those just regranting to already-established recommendations.
If the main reason for this were that we don't know of any cost-effective places to donate in these areas/according to these worldviews, I would have agreed that we should just go with what we know, or at least highlight that standards are much lower in these areas.
However, I don't think this is the case: we do have various evaluators/grantmakers looking into these areas (though too few yet, IMO!) and arguably identifying very cost-effective donation opportunities (in expectation), but they often don't prioritise sharing these findings publicly or updating public evaluations regularly. Having worked at one of those myself (FP), my impression is that this is generally for very good reasons, mainly related to resource constraints/prioritisation, as Jeff notes himself.
In an ideal world, where these resource constraints didn't exist, GWWC would only recommend charities for which public, up-to-date evaluations are available. However, we do not live in that ideal world, and as our goal is primarily to provide guidance on the best places to give according to a variety of worldviews, rather than the best explainable/publicly documented places to give, I think the current policy is the way to go.
Obviously it is very important that we are transparent about this, which we aim to do by clearly documenting our inclusion criteria, explaining why we rely on our trusted evaluators, and highlighting the evidence that is publicly available for each individual charity. Providing this transparency has been a major focus for us this giving season, and though I think we've made major steps in the right direction, there's probably still room for improvement: any feedback is very welcome!
Note that one reason more public evaluations would seem to be good/necessary is accountability: donors can check and give feedback on the quality of evaluations, providing the right incentives and useful information to evaluators. This sounds great in theory, but in my experience public evaluation reports are almost never read by donors (this post is an exception, which is why I'm so happy with it, even though I don't agree with the author's conclusions), and they come at a very high resource cost to create and maintain: in my experience, writing a public report can take up about half of the total time spent on an evaluation (!). This leaves us with an accountability and transparency problem that I think is real, and which is one of the main reasons for our planned research direction this year at GWWC.
Lastly, FWIW, I agree that we actively recommend StrongMinds (and this is our intention), even though we generally recommend that donors give to funds over individual charities.
I believe this covers (nearly) all of the GWWC-related comments I've seen here, but please let me know if I've missed anything!
This is an excellent response from a transparency standpoint, and it increases my confidence in GWWC even though I don't agree with everything in it.
One interesting topic for a different discussion (although not really relevant to GWWC's work) is the extent to which recommenders should condition an organization's continued recommendation status on obtaining better data as the organization grows (or even after a suitable period of time). Among other things, I'm concerned that allowing recommendations made under criteria appropriate for a small/mid-size organization to be affirmed on the same evidence as the organization grows could disincentivize organizations from commissioning RCTs where appropriate. As relevant here, my take on an organization not having a better RCT is significantly different for an organization with about $2MM a year in room for funding (which was the situation when FP made the recommendation, p. 31 here) than for one seeking to raise $20MM over the next two years.
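A rough way to see why scale matters here is a back-of-the-envelope value-of-information calculation. The break-even model and every number in it are my illustrative assumptions, nothing more:

```python
# Back-of-the-envelope sketch: an RCT is roughly worth commissioning when
# the expected improvement in how donations are allocated exceeds its
# cost. The model and all numbers are illustrative assumptions.

rct_cost = 1_000_000        # hypothetical cost of a high-quality RCT
p_verdict_changes = 0.3     # hypothetical chance the RCT changes the verdict
relative_value_gain = 0.5   # hypothetical gain from redirecting that money

def rct_breaks_even(annual_funding, years=2):
    money_at_stake = annual_funding * years
    expected_gain = p_verdict_changes * relative_value_gain * money_at_stake
    return expected_gain > rct_cost

print(rct_breaks_even(2_000_000))   # ~$2MM/yr org: 0.6MM < 1MM -> False
print(rct_breaks_even(10_000_000))  # $20MM over two years: 3MM > 1MM -> True
```

Under these made-up numbers an RCT doesn't pay for itself at the scale FP originally evaluated, but clearly does at the scale of the current fundraise, which is why I'd hold the larger organization to a higher evidential standard.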
Thanks for the response!
> I still do not think any (immediate) action on our part is required.
FWIW I'm not asking for immediate action, but a reconsideration of the criteria for endorsing a recommendation from a trusted evaluator.
> There are only very few charities for which full, public, and up-to-date evaluations are available, and coverage for some worldviews/promising cause areas is structurally missing. In particular, there are currently hardly any full, public, and up-to-date evaluations in the mental health/subjective well-being, longtermist, and "meta" spaces. And note that, by this standard, we wouldn't be able to recommend any funds except those just regranting to already-established recommendations.
I'm not proposing changing your approach to recommending funds, but to recommending charities. In cases where a field has only non-public or stale evaluations, fund managers are still in a position to consider non-public information and the general state of the field, check in with evaluators about how stale the current evaluations are, etc. And in these cases I think the best you can do is say that this is a field where GWWC currently doesn't have any recommendations for specific charities, and only recommends giving via funds.
> FWIW I'm not asking for immediate action, but a reconsideration of the criteria for endorsing a recommendation from a trusted evaluator.
I wasn't suggesting you were, but Simon certainly was. Sorry if that wasn't clear.
> In cases where a field has only non-public or stale evaluations, fund managers are still in a position to consider non-public information and the general state of the field, check in with evaluators about how stale the current evaluations are, etc. And in these cases I think the best you can do is say that this is a field where GWWC currently doesn't have any recommendations for specific charities, and only recommends giving via funds.
As GWWC gets its recommendations and information directly from evaluators (and aims to update its recommendations regularly), I don't see a meaningful difference here between funds and charities, whether a field has public, up-to-date evaluations or not: in both cases, GWWC would recommend giving to funds over charities, and in both cases we can also highlight the charities that seem to be the most cost-effective donation opportunities based on the latest views of evaluators. GWWC provides a value-add to donors here, given that some of these recommendations wouldn't be available to them otherwise (and many donors probably still prefer to donate to charities over funds, or might not donate otherwise).
> I wasn't suggesting you were, but Simon certainly was. Sorry if that wasn't clear.
Sorry, yes, I forgot your comment was primarily a response to Simon!
> I don't see a meaningful difference here between funds and charities, whether a field has public, up-to-date evaluations or not
I'm generally comfortable donating via funds, but this requires a large degree of trust in the fund managers. I'm saying that I trust them to make decisions in line with the fund's objectives, often without making their reasoning public. The biggest advantage I see to GWWC continuing to recommend specific charities is that it supports people who don't have that level of trust in directing their money well. This doesn't work without recommendations being backed by public, current evaluations: if it just turns into "GWWC has internal reasons to trust FP, which has internal reasons to recommend SM", then this advantage for these donors is lost.
Note that this doesn't require that most donors read the public evaluations: these lower-trust donors still (rightly!) understand that their chances of being seriously misled are much lower if an evaluator has written up a public case like this.
So in fields where there are public, up-to-date evaluations, I think it's good for GWWC to recommend funds, with individual charities as a fallback. But in fields where there aren't, I think GWWC should recommend funds only.
> GWWC provides a value-add to donors here, given that some of these recommendations wouldn't be available to them otherwise
What to do about people who can't donate to funds is a tricky case. I think what I'd like to see is funds saying something like: if you want to support our work, the best thing is to give to the fund, but the second best is to support orgs X, Y, and Z. This recommendation wouldn't be based on a public evaluation, but just on your trust in them as a funder.
I especially think it's important to separate the case where someone would be happy giving to a fund if not for the tax etc. consequences from the case where someone wants the trust/public/epistemic/etc. benefits of donating to a specific charity based on a public case.
I think trust is one of the reasons why a donor may or may not decide to give to a fund over a charity, but there are others as well, e.g. a preference for supporting more specific causes or projects. I expect donors with these other reasons (who trust evaluators/fund managers but would still prefer to give to individual charities (as well)) will value charity recommendations in areas for which no public, up-to-date evaluations are available.
> I think what I'd like to see is funds saying something like: if you want to support our work, the best thing is to give to the fund, but the second best is to support orgs X, Y, and Z. This recommendation wouldn't be based on a public evaluation, but just on your trust in them as a funder.
Note that this is basically equivalent to the current situation: we recommend funds over charities but highlight supporting charities as the second-best thing, based on the recommendations of evaluators (who are often also fund managers in their area).
Thinking more, other situations in which a donor might want to donate to specific charities despite trusting the grantmaker's judgement include:
Preference adjustments. Perhaps you agree with a fund in general, but you think they value averting deaths too highly relative to improving already existing lives. By donating to one of the charities they typically fund that focuses on the latter, you might shift the distribution of funds in that direction. Or maybe not: your donation also decreases how much additional funding the charity needs, and the fund might allocate more elsewhere (see the funging sketch after this list).
Ops skepticism. When you donate through a fund, in addition to trusting the grantmakers to make good decisions, you're also trusting the fund's operations staff to handle the money properly and that your money won't be caught up in unrelated legal trouble. Donating directly to a charity avoids these risks.
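Here is a minimal sketch of the funging dynamic from the first bullet. The allocation rule (the fund fills each charity's remaining gap in priority order) and all the numbers are illustrative assumptions, not a model of any actual fund:

```python
# Hypothetical sketch of "funging". The allocation rule and all numbers
# are illustrative assumptions, not a model of any actual fund.

def fund_allocation(budget, gaps):
    """Fill remaining funding gaps in the fund's priority order.

    Returns (grants per charity, leftover budget for other opportunities).
    """
    grants = {}
    for charity, gap in gaps.items():  # dict order = priority order
        grants[charity] = min(budget, gap)
        budget -= grants[charity]
    return grants, budget

# Baseline: the fund (budget 200) fully funds both orgs, nothing left over.
gaps = {"deaths_averted_org": 100, "life_improvement_org": 100}
print(fund_allocation(200, gaps))
# ({'deaths_averted_org': 100, 'life_improvement_org': 100}, 0)

# I donate 50 directly to life_improvement_org, shrinking its gap to 50.
# The fund now grants it only 50 and keeps 50 for other opportunities, so
# the org's total funding (50 from me + 50 from the fund) is unchanged.
gaps_after = {"deaths_averted_org": 100, "life_improvement_org": 50}
print(fund_allocation(200, gaps_after))
# ({'deaths_averted_org': 100, 'life_improvement_org': 50}, 50)
```

Whether a direct donation actually shifts the overall distribution depends on whether the fund was going to fill that charity's gap anyway.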
Yeah, agreed. And another one could be getting involved more closely with a particular charity when one wants to provide other types of support (advice, connections) in addition to funding. E.g. even though I don't think this consideration should carry much weight, I've anecdotally found it helpful to fund individual charities that I advise, because putting my personal donation money on the line motivates me to think even more critically about how the charity could best use its limited resources.
Thanks again for engaging in this discussion so thoughtfully, Jeff! These types of comments and suggestions are generally very helpful for us (even if I don't agree with these particular ones).