Currently Research Director at Founders Pledge, but posts and comments represent my own opinions, not FP’s, unless otherwise noted.
I worked previously as a data scientist and as a journalist.
Easily reconciled — most of our money moved is via advising our members. These grants are in large part not public, and members also grant to many organizations that they choose irrespective of our recommendations. We provide the infrastructure to enable this.
The Funds are a relatively recent development, and indeed some of the grants listed on the current Fund pages were actually advised by the fund managers, not granted directly from money contributed to the Fund (this is noted on the website if it’s the case for each grant). Ideally, we’d be able to grow the Funds a lot more so that we can do much more active grantmaking, and at the same time continue to advise members on effective giving.
My team (11 people at the moment) does generalist research across worldviews — animal welfare, longtermism/GCRs, and global health and development. We also have a climate vertical, as you note, which I characterize in more detail in this previous forum comment.
EDIT:
Realized I didn’t address your final question. I think we are a mix, basically — we are enabling successful entrepreneurs to give, period (in fact, we are committing them to do so via a legally binding pledge), and we are trying to influence as much of their giving as possible toward the most effective possible things. It is probably more accurate to represent FP as having a research arm, simply given staff proportions, but equally accurate to describe our recommendations as being “research-driven.”
We (Founders Pledge) do have a significant presence in SF, and are actively trying to grow much faster in the U.S. in 2024.
A couple weakly held takes here, based on my experience:
Although it’s true that issues around effective giving are much more salient in the Bay Area, it’s also the case that effective giving is nearly as much of an uphill battle with SF philanthropists as with others. People do still have pet causes, and there are many particularities about the U.S. philanthropic ecosystem that sometimes push against individuals’ willingness to take the main points of effective giving on board.
Relatedly, growing in SF seems in part to be hard essentially because of competition. There’s a lot of money and philanthropic intent, and a fair number of existing organizations (and philanthropic advisors, etc) that are focused on capturing that money and guiding that philanthropy. So we do face the challenge of getting in front of people, getting enough of their time, etc.
Since FP has historically offered mostly free services to members, growing our network in SF is something we actually need to fundraise for. On the margin I believe it’s worthwhile, given the large number of potentially aligned UHNWs, but it’s the kind of investment (in this case, in Founders Pledge by its funders) that would likely take a couple years to bear fruit in terms of increased amounts of giving to effective charities. I expect this is also a consideration for other existing groups that are thinking about raising money for a Bay Area expansion.
I think your arguments do suggest good reasons why nuclear risk might be prioritized lower. But since we operate on the most effective margin, as you note, it is still possible for there to be significant funding margins in nuclear that are highly effective in expectation.
My point is precisely that you should not assume any view. My position is that the uncertainties here are significant enough to warrant some attention to nuclear war as a potential extinction risk, rather than to simply bat away these concerns on first principles and questionable empirics.
Where extinction risk is concerned, it is potentially very costly to conclude on little evidence that something is not an extinction risk. We do need to prioritize, so I would not for instance propose treating bad zoning laws as an X-risk simply because we can’t demonstrate conclusively that they won’t lead to extinction. Luckily there are very few things that could kill very large numbers of people, and nuclear war is one of them.
I don’t think my argument says anything about how nuclear risk should be prioritized relative to other X-risks. I think the arguments for deprioritizing it relative to others are strong, and reasonable people can disagree; YMMV.
If you leave 1,000–10,000 humans alive, the longterm future is probably fine
This is a very common claim that I think needs to be defended somewhat more robustly instead of simply assumed. If we have one strength as a community, it’s in not simply assuming things.
My read is that the evidence here is quite limited, that the outside view suggests losing 99.9999% of a species (i.e., being left with a very small population) is a significant extinction risk, and that the uncertainty around the long-term viability of collapse scenarios is reason enough to want to avoid near-extinction events.
Has there been any formal probabilistic risk assessment on AI X-risk? e.g. fault tree analysis or event tree analysis — anything of that sort?
I disagree with the valence of the comment, but think it reflects legitimate concerns.
I am not worried that “HLI’s institutional agenda corrupts its ability to conduct fair-minded and even-handed assessment.” I agree that there are some ways that HLI’s pro-SWB-measurement stance can bleed into overly optimistic analytic choices, but we are not simply taking analyses by our research partners on faith and I hope no one else is either. Indeed, the very reason HLI’s mistakes are obvious is that they have been transparent and responsive to criticism.
We disagree with HLI about SM’s rating — we use HLI’s work as a starting point and arrive at an undiscounted rating of 5-6x; subjective discounts place it at 1-2x, which squares with GiveWell’s analysis. But our analysis was facilitated significantly by HLI’s work, which remains useful despite its flaws.
I guess I would very slightly adjust my sense of HLI, but I wouldn’t really think of this as an “error.” I don’t significantly adjust my view of GiveWell when they delist a charity based on new information.
I think if the RCT downgrades StrongMinds’ work by a big factor, that won’t really introduce new information about HLI’s methodology/expertise. If you think there are methodological weaknesses that would cause them to overstate StrongMinds’ impact, those weaknesses should be visible now, irrespective of the RCT results.
I can also vouch for HLI. Per John Salter’s comment, I may also have been a little sus on HLI early on (sorry, Michael), but their work has been extremely valuable for our own methodology improvements at Founders Pledge. The whole team is great, and I will second John’s comment to the effect that Joel’s expertise is really rare and that HLI seems to be the right home for it.
Just a note here as the author of that lobbying post you cite: the CEA including the 2.5% change in chance of success is intended to be illustrative — well, conservative, but it’s based on nothing more than a rough sense of effect magnitude from having read all those studies for the lit review. The specific figures included in the CEA are very rough. As Stephen Clare pointed out in the comments, it’s also probably not realistic to have modeled that as normal with a [0, 5] 95% CI.
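For concreteness, here is roughly what that parameterization implies. This is a quick illustrative sketch (my reconstruction, not the CEA’s actual code):

```python
# Sketch: a normal distribution whose central 95% interval is [0, 5]
# percentage points (illustrative reconstruction, not the actual CEA).
from scipy.stats import norm

lo, hi = 0.0, 5.0
mu = (lo + hi) / 2               # midpoint: 2.5 percentage points
sigma = (hi - lo) / (2 * 1.96)   # ~1.28, since a 95% CI spans +/- 1.96 sd

dist = norm(mu, sigma)
print(dist.ppf([0.025, 0.975]))  # recovers [0, 5]
print(dist.cdf(0))               # 2.5% of the mass implies a *negative* effect
```

Among other things, a normal with that CI puts non-trivial mass below zero and above five, which is part of why it may not be a realistic modeling choice.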
Hey Vasco, you make lots of good points here that are worth considering at length. These are topics we’ve discussed on and off in a fairly unstructured way on the research team at FP, and I’m afraid I’m not sure what’s next when it comes to tackling them. We don’t currently have a researcher dedicated to animal welfare, and our recommendations in that space have historically come from partner orgs.
Just as context, the reason for this is that FP has historically separated our recommendations into three “worldviews” (longtermism, current generations, and animal welfare). The idea is that it’s a lot easier to shift member grantmaking across causes within a worldview (from rare diseases to malaria, for instance) than across worldviews (e.g. to get people to care much more about chickens). The upshot of this, for better or for worse, is that we end up spending a lot of time prioritizing causes within worldviews, and avoiding the question of how to prioritize across worldviews.
This is also part of the reason we don’t have a dedicated animal welfare researcher — we haven’t historically moved as much money within that worldview as within our others. But it’s actually not clear which way the causality flows in that case, so your post is a good nudge to think more seriously about this, as well as about the ways we might be able to incorporate animal welfare considerations into our GHD calculations, worldview separations notwithstanding.
Hey Matthew, thanks for sharing this. Can you provide some more information (or link to your thoughts elsewhere) on why fervor around UV-C is misplaced? As you know, ASHRAE Standards 185.1 and 185.2 concern testing of UV devices for germicidal irradiation, so I’d be particularly interested to know if this was an area that ASHRAE itself had concluded was unpromising.
I thought of some other down-the-line feature requests:
Google Sheets integration (we currently already store our forecasts in a Google sheet)
Relatedly, ability to export to CSV (does this already exist and I just missed it?)
Ability to designate a particular resolver
Different formal resolution mechanisms, like a poll of users.
Ah, great! I think it would be nice to offer different aggregation options, though if you only offer one, I agree that geo mean of odds is the best default. But I can imagine people wanting to use medians or averages, or even specifying their own aggregation functions. Especially if you are trying to encourage uptake by less technical organizations, it seems important to offer at least one option that is more legible to less numerate people.
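To illustrate the kind of options I have in mind (a toy sketch, not a suggestion about implementation details):

```python
# Toy comparison of forecast-aggregation options (illustrative only).
import statistics

def geo_mean_of_odds(probs):
    """Convert to odds, take the geometric mean, convert back to a probability."""
    odds = [p / (1 - p) for p in probs]
    gm = statistics.geometric_mean(odds)
    return gm / (1 + gm)

forecasts = [0.1, 0.3, 0.8]
print(geo_mean_of_odds(forecasts))   # ~0.37
print(statistics.mean(forecasts))    # 0.4, simple average: easiest to explain
print(statistics.median(forecasts))  # 0.3, robust to extreme forecasts
```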
I have already installed this and started using it at Founders Pledge. Thanks for making this! I’ve been wanting something like this for a long time.
Some feature requests (rough sketches of the last two below):
Aggregation choices (e.g. geo mean of odds would be nice)
Brier scores for users
Calibration curves for users
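For the last two, here’s roughly the sort of thing I have in mind (an illustrative sketch, not a spec):

```python
# Rough sketches of per-user scoring features (illustrative only).
def brier_score(forecasts, outcomes):
    """Mean squared error between probabilities and 0/1 outcomes (lower is better)."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

def calibration_points(forecasts, outcomes, n_bins=10):
    """Bucket forecasts; compare each bucket's mean forecast to its realized base rate."""
    bins = [[] for _ in range(n_bins)]
    for f, o in zip(forecasts, outcomes):
        bins[min(int(f * n_bins), n_bins - 1)].append((f, o))
    return [
        (sum(f for f, _ in b) / len(b), sum(o for _, o in b) / len(b))
        for b in bins if b
    ]

fs, outs = [0.2, 0.7, 0.9, 0.6], [0, 1, 1, 0]
print(brier_score(fs, outs))         # 0.125
print(calibration_points(fs, outs))  # (mean forecast, base rate) per bucket
```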
Honestly, what surprises me most here is how similar all four organizations’ numbers are across most of the items involved.
This was also gratifying for us to see, but it’s probably important to note that our approach incorporates weights from both GiveWell and HLI at different points, so the estimates are not completely independent.
Thanks, bruce — this is a great point. I’m not sure if we would account for the costs in the exact way I think you have done here, but we will definitely include this consideration in our calculation.
I haven’t thought extensively about what kind of effect size I’d expect, but I think I’m roughly 65-70% confident that the RCT will return evidence of a detectable effect.
But my uncertainty is more in terms of rating upon re-evaluating the whole thing. Since I reviewed SM last year, we’ve started to be a lot more punctilious about incorporating various discounts and forecasts into CEAs. So on the one hand I’d naturally expect us to apply more of those discounts on reviewing this case, but on the other hand my original reason for not discounting HLI’s effect size estimates was my sense that their meta-analytic weightings appropriately accounted for a lot of the concerns that we’d discount for. This generates uncertainty that I expect we can resolve once we dig in.
As promised, I am returning here with some more detail. I will break this (very long) comment into sections for the sake of clarity.
My overview of this discussion
It seems clear to me that what is going on here is that there are conflicting interpretations of the evidence on StrongMinds’ effectiveness. In particular, the key question here is what our estimate of the effect size of SM’s programs should be. There are other uncertainties and disagreements, but in my view, this is the essential crux of the conversation. I will give my own (personal) interpretation below, but I cannot stress enough that the vast majority of the relevant evidence is public—compiled very nicely in HLI’s report—and that neither FP’s nor GWWC’s recommendation hinges on “secret” information. As I indicate below, there are some materials that can’t be made public, but they are simply not critical elements of the evaluation, just quotes from private communications and things of that nature.
We are all looking at more or less the same evidence and coming to different conclusions.
I also think there is an important subtext to this conversation, which is the idea that neither GWWC nor FP should recommend things for which we can’t achieve bednet-level confidence. We simply don’t agree, and accordingly this is not FP’s approach to charity evaluation. As I indicated in my original comment, we are risk-neutral and evaluate charities on the basis of expected cost-effectiveness. I think GiveWell is about as good as an organization can be at doing what GiveWell does, and for donors who prioritize their giving conditional on high levels of confidence, I will always recommend GiveWell top charities over others, irrespective of expected value calculations. It bears repeating that even with this orientation, we still think GiveWell charities are around twice as cost-effective as StrongMinds. I think Founders Pledge is in a substantially different position, and from the standpoint of doing the most possible good in the world, I am confident that risk-neutrality is the right position for us.
We will provide our recommendations, along with any shareable information we have to support them, to anyone who asks. I am not sure what the right way for GWWC to present them is.
How this conversation will and won’t affect FP’s position
What we won’t do is take immediate steps (like, this week) to modify our recommendation or our cost-effectiveness analysis of StrongMinds. My approach to managing FP’s research is to try to thoughtfully build processes that maximize the good we do over the long term. This is not a procedure fetish; this is a commonsensical way to ensure that we prioritize our time well and give important questions the resources and systematic thought they deserve.
What we will do is incorporate some important takeaways from this conversation during StrongMinds’ next re-evaluation, which will likely happen in the coming months. To my eye, the most important takeaway is that our rating of StrongMinds may not sufficiently account for uncertainty around effect size. Incorporating this uncertainty would deflate SM’s rating and may bring it much closer to our bar of 1x GiveDirectly.
More generally, I do agree with the meta-point that our evaluations should be public. We are slowly but surely moving in this direction over time, though resource constraints make it a slow process.
FP’s materials on StrongMinds
A copy of our CEA. I’m afraid this may not be very illuminating, as essentially all we did here was take HLI’s estimates and put them into a format that works better with our ratings system. One note is that we don’t apply any subjective discounts in this CEA—this is the kind of thing I expect might change in future.
Some exploration I did in R and Stan to try to test various components of the analysis. In particular, this contains several attempts to use SM’s pre-post data (corrected for a hypothesized counterfactual) to update on several different more general priors. Of particular interest are this review from which I took a prior on psychosocial interventions in LMICs and this one which offers a much more outside view-y prior.
Crucially, I really don’t think this type of explicit Bayesian update is the right way to estimate effects here; I much prefer HLI’s way of estimating effects (it leaves a lot less data on the table).
The main goal of this admittedly informal analysis was to test under what alternate analytic conditions our estimate of SM’s effectiveness would fall below our recommendation bar (a toy version of this kind of update is sketched just after this list).
We have an internal evaluation template that I have not shared, since it contains quotes from private communications with StrongMinds. There’s nothing mysterious or particularly informative here; we just don’t share details of private communications that weren’t conducted with the explicit expectation that they’d be shared. This is the type of template that in future we hope to post publicly with privileged communications excised.
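To give a flavor of the exploration mentioned above, here is a toy version of the kind of explicit Bayesian update I tried. The numbers are made up for illustration, and this is a Python sketch rather than the actual R/Stan code:

```python
# Toy normal-normal Bayesian update on an effect size, illustrating the
# approach in the exploration above. All numbers are made up.
prior_mu, prior_sd = 0.3, 0.2   # hypothetical prior on the effect (in SDs),
                                # e.g. from a review of psychosocial
                                # interventions in LMICs
obs_mu, obs_se = 1.0, 0.25      # hypothetical bias-corrected pre-post estimate

# Precision-weighted combination (conjugate normal-normal update)
prior_prec, obs_prec = prior_sd ** -2, obs_se ** -2
post_mu = (prior_prec * prior_mu + obs_prec * obs_mu) / (prior_prec + obs_prec)
post_sd = (prior_prec + obs_prec) ** -0.5

print(f"posterior effect: {post_mu:.2f} SDs (sd {post_sd:.2f})")  # ~0.57 SDs
# A skeptical prior pulls a large observed effect down substantially.
```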
How I view the evidence about StrongMinds
Our task as charity evaluators is, to the extent possible, to quantify the important considerations in estimating a charity’s impact. When I reviewed HLI’s work on StrongMinds, I was very satisfied that they had accounted for many different sources of uncertainty. I am still pretty satisfied, though I am now somewhat more uncertain myself.
A running theme in critiques of StrongMinds is that the effects they report are unbelievably large. I agree that they are very large. I don’t agree that the existence of large-seeming effects is itself a knockdown argument against recommending this charity. It is, rather, a piece of evidence that we should consider alongside many other pieces of evidence.
I want to oversimplify a bit by distinguishing between two different views of how SM could end up reporting very large effect sizes:
1. The reported effects are essentially made-up. The intervention has no effect at all, and the illusion of an effect is driven by fraud at worst and severe confirmation bias at best.
2. The reported effects are severely inflated by selection bias, social desirability bias, and other similar factors.
I am very satisfied that (1) is not the case here. There are two reasons for this. First, the intervention is well-supported by a fair amount of external evidence. This program is not “out of nowhere”; there are good reasons to believe it has some (possibly small) effect. Second, though StrongMinds’ recent data collection practices have been wanting, they have shown a willingness to be evaluated (the existence of the Ozler RCT is a key data point here). With FP, StrongMinds were extremely responsive to questions and forthcoming and transparent with their answers.
Now, I think (2) is very likely to be the case. At FP, we increasingly try to account for this uncertainty in our CEAs. As you’ll note in the link above, we didn’t do that in our last review of StrongMinds, yielding a rating of roughly 5-6x GiveDirectly (per our moral weights, we value a WELLBY at about $160). So the question here is: how much of the observed effect is due to bias? If it’s 80%, we should deflate our rating to 1.2x at StrongMinds’ next review. In this scenario it would still clear our bar (though only just).
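To make the deflation arithmetic explicit (a quick illustration, not our actual CEA logic):

```python
# How the ~6x undiscounted rating deflates as more of the observed
# effect is attributed to bias (illustrative arithmetic only).
undiscounted = 6.0  # rating relative to GiveDirectly, before discounts
for bias_share in (0.5, 0.8, 0.9):
    print(f"{bias_share:.0%} bias -> {undiscounted * (1 - bias_share):.1f}x")
# 50% -> 3.0x; 80% -> 1.2x (just clears our 1x bar); 90% -> 0.6x
```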
In the absence of prior evidence about IPT-g, I think we might well conclude that the observed effects are overwhelmingly due to bias. But I don’t think this is a Pascal’s Mugging-type scenario. We are not seeing a very large, possibly dubious effect that remains large in expectation even after deflating for dubiousness. We are seeing a large effect that is very broadly in line with the kind of effect we should expect on priors.
What I expect for the future
In my internal forecast attached to our last evaluation, I gave an 80% probability to us finding that SM would have an effectiveness of between 5.5x and 7x GD at its next evaluation. I would lower this significantly, to something like 40%, and overall I would say that I think there’s a 70-80% chance we’ll still be recommending SM after its next re-evaluation.
I’m also strongly interested in this research topic — note that although the problem is worst in the U.S., the availability and affordability of fentanyl (which appears to be driving OD deaths) means it could easily spread to LMICs in the medium term, suggesting that preventive measures such as vaccines could even be cost-effective by traditional metrics.