“Fair” and “unfair” are tricky words to nail down.
I think there is a wide range of factors that explain why HLI has been treated differently than other orgs -- some “fair” under most definitions of the word, some less so. Some of those reasons are adjacent to questions of funding and influence, but I’m not sure they provide much room to criticize HLI’s critics.
1. HLI is running in a lane—global health/development/wellbeing—where the evidentiary standards are much higher than in longtermist areas. Part of this is the nature of the work; asking a biosecurity program how many pandemics it has prevented is not workable. Part of it is that there is a very well-funded organization producing CEAs that the consensus views as high-quality. Yet another aspect is that GHDW work has been much more limited by funding constraints, which has incentivized GHDW funders to adopt higher standards.
2. I think people generally need to be kinder to smaller-scale, early-stage efforts . . . but see point 3 below.
3. HLI is a charity recommender, a significant portion of whose focus currently involves making recommendations to ordinary people (not megadonors, foundations, etc.). I do think the level of scrutiny should ordinarily be higher for charity recommenders, especially those making recommendations to the general public. The purpose of a charity recommender is to evaluate the relative merits of various charities, and for ordinary donors its recommendations may be seen as near-authoritative. A sense that the community needs to carefully scrutinize the recommender’s work destroys much of the recommender’s value proposition in the first place. And while it’s not very utilitarian of me, I do feel more protective of small donors who don’t have in-house staff to pick up on a recommender’s mistakes.
4. I think an overconfident marketing campaign in 2022 did play a major role in how much grace people are willing to extend on the CEA. I haven’t been around that long, but this does seem to significantly distinguish HLI from other orgs. I believe that HLI has expressed regret for certain statements, but a framework that compares statements made at that time (that have not been clearly and explicitly retracted) to what the data actually support strikes me as on the “fair” side of the ledger.
5. This was HLI’s first major recommendation; people would be less prone to draw negative inferences about (e.g.) an org whose first four analyses/recommendations were fine but whose fifth had some significant issues.
6. StrongMinds spends (and could potentially fundraise) enough money to make a deep dive into its cost-effectiveness worthwhile for critics, but probably not so much as to justify an airtight, multi-million-dollar workup (including commissioning our own studies to fill any major holes in the data that would have a big effect on the CEA). So it’s an awkward-size program to evaluate.
7. Pretty much all skeptical analysis is done by volunteers on their own time, so the volume and quality of that work will depend heavily on who is interested in and available to do it. It’s plausible to me that having a controversial and/or novel framework could motivate more critics to volunteer for duty.
   There could also be a snowball effect; the detection of one significant weakness in a CEA may motivate others to start looking.
8. HLI asked Forum users to contribute money. Although I take a wide stance on “standing” to criticize organizations, one could reasonably characterize asking users for action as opening the door to some extent. Having an active fundraising ask may also provide a more concrete payoff/impact for criticism, by preventing users from taking an action the critic finds undesirable.
9. HLI has been unusually transparent with data and responsive to criticism, which has made such criticism easier and sustained it for longer. I think you’re right to be concerned about the ferocity of criticism disincentivizing transparency and openness on the margin.
10. The barriers to criticizing HLI are much lower. Because HLI has little power, no one is concerned about blowback. Compare that to the recent Omega criticisms of AI labs, which were posted pseudonymously and had to rely on undisclosed data. Criticism from established community members who sign their criticism and can show their work carries more weight, and there’s a disincentive to writing anonymous criticism (you’ll never get any credit for it).
Several of these points are at least adjacent to questions of funding and power, and they cumulatively make me feel at least somewhat uncomfortable, e.g.:
- It’s unlikely an organization with more secure funding would have made a fundraising appeal at this time. Rather, it likely would have laid low until it had produced a new CEA for StrongMinds and until more time had passed since the prior harsh posts.
- HLI may have felt pressure to be more transparent and responsive than a more established org would have been. It’s unlikely HLI would have been taken seriously if it didn’t show its receipts, and it doesn’t have the power/prestige needed for a “no real comment” approach to criticism to have a good shot at working.
That being said, I find it challenging to assign much fault for those factors to the Forum user community. For example, in point 10, the unfairness is not that HLI is being criticized by named users who have built up a reputation, but that the criticism of other orgs is disincentivized and pseudonymous.
I think you’re right that the response to HLI may discourage transparency and responsiveness on the margin, and that this is a problem. As a practical matter, two considerations mitigate this to some extent. First, the criticism of HLI reflects a convergence of the factors listed above, and I’m not sure how much marginal effect comes from its transparency and responsiveness specifically. Second, any startup org pursuing HLI-like goals has to be transparent and responsive to get a hearing from the community, so I think it less likely that knowledge of current events will change another org’s stance to a materially less open and responsive one.
I’m undecided on the net effect of all of this. My hope is that it will ultimately result in adoption of better epistemic safeguards and communications management—both at HLI and elsewhere in the ecosystem. (Cf. my recent post on the HLI thread). That would be a good result, although I’d still wish we had gotten there with a lot less rancor.