A quick update on this: Good Ventures is now open to supporting work that Open Phil recommends on digital minds/AI moral patienthood. We’re still figuring out where that work should slot in (including whether we’d open a public call for applications) and will update people working in the field when we do. Additionally, Good Ventures is now open to considering a wider range of recommendations in right-of-center AI policy and a couple of other smaller areas (e.g. in macrostrategy/futurism), though those will be evaluated on a case-by-case basis for now. We’ll hopefully develop clearer parameters for GV interest over time (and share more when we have those). In practice, given our increasing work with other donors, we don’t think any of this is a huge update; we’d like to continue to hear about, and expect to be able to direct funding to, the most promising opportunities whether or not they are a fit for Good Ventures.
Alexander_Berger
Thanks Nick.
On the housing piece: we have a long internal report on the valuation question that we didn’t think was particularly relevant to external folks, so we haven’t published it, but we’ll see about doing so later this year. Footnote 7 of this grant writeup, and the text around it, explains the basic math of a previous version of that valuation calculation, though our recent version is a lot more complex.
If you’re asking about the bar math, the general logic is explained here and the move to a 2,100x bar is mentioned here.
On R&D, the 70x number comes from Matt Clancy’s report (I think we may have made some modest internal revisions, but I don’t think they change the bottom line much). You’re right that that implies we need ~30x leverage to clear our bar. We think that is sometimes possible directly through strategic project selection (e.g., funding direct R&D on neglected and important global health problems) and sometimes, as in the case of this portfolio, through policy/advocacy. I agree 30x leverage presents a high bar, and I think it’s totally reasonable to be skeptical about whether we can clear it, but we think we sometimes can.
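The bar arithmetic here is simple enough to sketch in a few lines; the 70x and 2,100x figures below are just the rough numbers quoted in this thread, not precise official values:

```python
# Back-of-the-envelope version of the R&D bar math discussed above.
# Both inputs are the rough figures quoted in the thread.

rd_multiplier = 70     # estimated return multiple on average scientific R&D funding
funding_bar = 2_100    # current cost-effectiveness bar (return multiple)

# Leverage an R&D-focused grant needs in order to clear the bar:
required_leverage = funding_bar / rd_multiplier
print(required_leverage)  # 30.0
```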
Thanks Ozzie, you’re definitely allowed to ask questions like this! We won’t always be able to answer but we welcome questions and critiques of our work.
Our innovation policy work is generally based on the assumption that long-run health and income gains are ultimately attributable to R&D. For example, Matt Clancy estimated in this report that general funding for scientific research ranged from 50x to 330x in our framework, depending on the model and assumptions about downside risks from scientific research. In practice, we currently use an internal value of 70x for average scientific research funding when evaluating our innovation policy work. Of course, 70x is well below our bar (currently ~2,100x), and so the premise of the program is not to directly fund additional scientific research, but instead to make grants that we think are sufficiently likely to increase the effective size of R&D effort, by raising its efficiency, productivity, or level enough to clear the bar. Moreover, while most of our giving in this program flows to grantees in high-income countries operating on the research frontier, the ultimate case is based on global impact: we assume research like this eventually benefits everyone, though with multi-decade lags (which in practice lead us to discount the benefits substantially, as discussed in Matt’s paper above and this report by Tom Davidson).
Our innovation policy work so far has cleared our internal bar for impact, and one reason we are excited to expand into this space is because we’ve found more opportunities that we think are above the bar than Good Ventures’ previous budget covered.
We also think our housing policy work clears our internal bar for impact. Our current internal valuation on a marginal housing unit in a highly constrained metro area in the US is just over $400k (so a grant would be above the bar if we think it causes a new unit in expectation for $200). A relatively small part of the case here is again based on innovation—there is some research indicating that increasing the density of people in innovative cities increases the rate of innovation. But our internal valuation for new housing units also incorporates a few other paths to impact. For example, increasing the density of productive cities also raises the incomes of movers and other residents, and reduces the overall carbon footprint of the housing stock. Collectively, we think these benefits are large enough to make a lot of grants related to housing policy clear our bar, given the leverage that advocacy can sometimes bring.
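As a quick check on the housing figures (using only the numbers in this comment), the bar implies a maximum cost per expected new unit of roughly $190, consistent with the ~$200 figure mentioned:

```python
# Housing-policy bar math using the figures quoted above.
value_per_unit = 400_000  # internal valuation of a marginal unit in a constrained US metro ($)
funding_bar = 2_100       # current cost-effectiveness bar (return multiple)

# Maximum cost per expected new unit for a grant to clear the bar:
max_cost_per_unit = value_per_unit / funding_bar
print(round(max_cost_per_unit))  # ~190
```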
We’ll be evaluating new sub-areas as we go to make sure that they are also generally above our impact bar, but we suspect that the same logic of large potential importance will mean that policy changes that make even modest improvements to areas like clinical trial regulation, energy permitting, etc. could be highly impactful.
In terms of scale, while this is a significant expansion of Open Phil’s overall work in the space, it’s a modest expansion of Good Ventures’ (from ~$15M to ~$20M/year). The remaining funding is coming from other donors. As we wrote in our annual review last week:
One implication of our growing work with other donors is that it’s increasingly incorrect to think about Open Philanthropy as a single unified funder making top-down decisions. Increasingly, our resources come from different partners who are devoted to different causes and have different preferences and limitations for their giving. Their philanthropic dollars are not fungible, and we would be doing them a disservice if we treated them as if they were… it’s clearly less true than in the past (not that it was ever perfectly true) that the distribution of grants we advise across causes reflects our leadership’s unconstrained recommendations.
FWIW I certainly agree with “non-trivial”; “huge” is a judgment call IMO. We’ll see!
I think this is a complicated question—it’s always been the case that individual OP staff had to submit grants to an overall review process and were not totally unilateral decision makers. As I said in my post above, they (and I) will now face somewhat more constraints. I think staff would differ in terms of how costly they would assess the new constraints as being. But it’s true this was a GV rather than OP decision; it wasn’t a place where GV was deferring to OP to weigh the costs and benefits.
Just flagging that I think “OP [is] open to funding XYZ areas if a new funder appears who wants to partner with them to do so” accurately describes the status quo. In the post above we (twice!) invited outreach from other funders interested in some of these spaces, and we’re planning to do a lot more work to try to find other funders for some of this work in the coming months.
No, the farm animal welfare budget is not changing, and some of the substreams GV is exiting (or not entering) are on the AI side. So any funding from substrategies that GV is no longer funding within FAW would be reallocated to other strategies within FAW (and, as Dustin notes below, hopefully the strategies that GV will no longer fund can be taken forward by others).
FWIW I think I’m an example of Type 1 (literally, in Lorenzo’s data) and I also agree that abstractly more of Type 2 would be helpful (but I think there are various tradeoffs and difficulties that make it not straightforwardly clear what to do about it).
Vestergaard has a reply on their website FWIW; I can’t vouch for it, just passing it along: https://vestergaard.com/blogs/vestergaard-position-bloomberg-article-malaria-bed-nets-papua-new-guinea/
Exciting news! I worked closely with Zach at Open Phil before he left to be interim CEO of EV US, and was sad to lose him, but I was happy for EV at the time, and I’m excited now for what Zach will be able to do at the helm of CEA.
Great to hear about finding such a good fit, thanks for sharing!
Hi Dustin :)
FWIW I also don’t particularly understand the normative appeal of democratizing funding within the EA community. It seems to me like the common normative basis for democracy would tend to argue for democratizing control of resources in a much broader way, rather than within the self-selected EA community. I think epistemic/efficiency arguments for empowering more decision-makers within EA are generally more persuasive, but wouldn’t necessarily look like “democracy” per se and might look more like more regranting, forecasting tournaments, etc.
Just wanted to say that I thought this post was very interesting and I was grateful to read it.
Just wanted to comment to say I thought this was very well done, nice work! I agree with Charles that replication work like this seems valuable and under-supplied.
I enjoyed the book and recommend it to others!
In case it’s of interest to EA Forum folks, I wrote a long tweet thread with more substance on what I learned from it and remaining questions I have here: https://twitter.com/albrgr/status/1559570635390562305
Thanks MHR. I agree that one shouldn’t need to insist on statistical significance, but if GiveWell thinks that the actual expected effect is ~12% of the MK result, then if you’re updating on a trial powered similarly to MK, you’re almost to the point of updating on a coin flip, because of how underpowered you are to detect the expected effect.
I agree it would be useful to do this in a more formal bayesian framework which accurately characterizes the GW priors. It wouldn’t surprise me if one of the conclusions was that I’m misinterpreting GiveWell’s current views, or that it’s hard to articulate a formal prior that gets you from the MK results to GiveWell’s current views.
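The power point above can be made concrete with a toy normal-approximation calculation (my own illustration with made-up design numbers, not GiveWell’s analysis): if a trial is powered at 80% to detect the original MK-sized effect, and the true effect is only 12% of that, the chance of a statistically significant result is barely above the false-positive rate.

```python
import math

def normal_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

z_alpha = 1.96                 # two-sided 5% significance test
nc_original = z_alpha + 0.84   # noncentrality giving ~80% power for the MK-sized effect
nc_true = 0.12 * nc_original   # true effect assumed to be 12% of the MK result

# Probability of a significant result (either direction) under the smaller true effect:
power = normal_cdf(nc_true - z_alpha) + normal_cdf(-nc_true - z_alpha)
print(round(power, 3))         # ~0.06: barely better than the 5% you'd see from noise alone
```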
Thanks, appreciate it! FWIW, I sympathize with “I have an intuition that low VSLs are a problem and we shouldn’t respect them” for some definition of “low”, but I think it’s just a question of what the relevant “low” is.
Thanks Karthik. I think we might be talking past each other a bit, but replying in order on your first four replies:
My key issue with higher etas isn’t philosophical disagreement; it’s their use as guidance for practical decision-making. If I had taken your post at face value and used eta=1.5 to value UK GDP relative to other ways we could spend money, I think I would have predictably destroyed a lot of value for the global poor by failing to account for the full set of spillovers (because I think doing so is somewhere between very difficult and impossible). Even within low-income countries, there are still pervasive tax, pecuniary, and other externalities from high-income spending/consumption on lower-income co-nationals, and those are closer to linear than logarithmic in dollars. None of this is to deny the possibility or likelihood that for a totally abstract, pure notion of consumption, with no externalities at all and truly final personal consumption, it would be appropriate to use a log or steeper eta; it’s to say that that is a predictably bad approximation of our world, and accordingly a bad decision rule given the actual data that we have. I think the main reply here has to be a defense of the feasibility of explicitly accounting for all relevant spillovers; having made multiple (admittedly weak!) stabs in that direction, I’m personally pessimistic, but I’d certainly love to see others’ attempts.
In the blog post I linked in my #2 above we explicitly consider the set point implied by the IDInsight survey data, and we think it’s consistent with what we’re doing. We’re open to the argument for using a higher fixed constant on being alive, but instead of making you focus more on redistribution of income, the first order consequence of that decision would be to focus more on saving poor people’s lives (which is in fact what we predominantly do). It’s also worth noting that as your weight there gets high, it gets increasingly out of line with people’s revealed preferences and the VSL literature (and it’s not obvious to me why you’d take those revealed preferences less seriously than the revealed preferences around eta).
On “I think almost everyone would agree that a 10% income increase is worth much more to a poor person than a rich person”: I don’t think that’s right as a descriptive claim, but even if it were, the point I’m making in #1 above still holds. If your income measure is imperfect as a measure of purely private consumption without any externalities (and I think they all are), then any small positive externalities that are ~linear in dollars will dominate the effective utility calculation as eta gets to or above 1. I think there are many such externalities (taxes, philanthropy, aid, R&D, trade, etc.), such that very high etas will lead to predictably bad policy advice.
You can add a constant normalizing function and it doesn’t change my original point; maybe it’s worth checking the Weitzman paper I linked to get an intuition? There’s genuinely more “at stake” in higher incomes when you have a lower eta than a higher eta, so if you’re trying to make the correct utilitarian decision under true uncertainty, you don’t want to take an unweighted mean of eta and then run with it; you want to run your scenarios over different etas and weight by the stakes to get the best aggregate outcome. (I think how you specify the units might matter for the conclusion here, though, a la the two-envelope problem; I’m not sure.)
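Here’s a toy version of that stakes point (my own construction, not from the thread): plugging the mean eta into an isoelastic utility function gives a very different answer than averaging outcomes across eta scenarios, because far more is at stake under the low-eta scenario.

```python
import math

def u(c, eta):
    # Isoelastic utility, with the log limit at eta = 1.
    if abs(eta - 1) < 1e-9:
        return math.log(c)
    return (c ** (1 - eta) - 1) / (1 - eta)

def gain(eta, lo=100.0, hi=200.0):
    # Utility gain from doubling consumption at a high income level (arbitrary units).
    return u(hi, eta) - u(lo, eta)

etas = [1.0, 2.0]                                       # two equally likely scenarios
gain_at_mean_eta = gain(sum(etas) / len(etas))          # plug in the average eta (1.5)
expected_gain = sum(gain(e) for e in etas) / len(etas)  # average over scenarios

# The low-eta scenario dominates the expectation, so the two answers differ a lot.
print(gain_at_mean_eta, expected_gain)
```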
Hey Karthik, starting a separate thread for a different issue. I opened your main spreadsheet for the first time, and I’m not positive, but I think the 90% reduction claim is due to a spreadsheet error. The utility gain in B5 that flows through to your bottom-line takeaway is hardcoded in log terms, but if eta changes, then the conversion of the utility gain to dollars at the global average should change too (and, by the way, I think it would really matter whether you were denominating in units of the global average, the global median, or the global poverty level). In this copy I made a change to reimplement isoelastic utility in B7 and B8. In this version, when eta=1.00001, OP ROI is 169, and when eta=1.5, OP ROI is 130, a difference of ~25% rather than 90%. I didn’t really follow what was happening in the rest of the sheet, so it’s possible this is wrong or misguided or implemented incorrectly.
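For intuition on why the eta choice interacts with the denomination unit, here is a minimal isoelastic sketch with hypothetical incomes (it does not reproduce the spreadsheet’s actual inputs or structure): the dollar-equivalent weight on a poor recipient’s marginal dollar is (y_ref / y)^eta, so both eta and the choice of reference income y_ref matter.

```python
def weight(y, y_ref, eta):
    # Marginal utility of $1 at income y, denominated in dollars at reference income
    # y_ref, under isoelastic utility: u'(y) / u'(y_ref) = (y_ref / y) ** eta.
    return (y_ref / y) ** eta

y_poor = 1_000   # hypothetical recipient income ($/yr)
y_avg = 12_000   # hypothetical global-average reference income ($/yr)

for eta in (1.0, 1.5):
    print(eta, round(weight(y_poor, y_avg, eta), 1))  # the weight grows quickly with eta
```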
Just wanted to say that I enjoyed reading this and the section starting with “Online:” and your concluding question really resonated with me.