I probably would have had ALLFED and CE on a list like this had I written it (don’t know as much about most of the other selections). It seems to me that both organizations get, on a relative basis, a whole lot more public praise than they get funding. Does anyone have a good explanation for the praise-funding mismatch?
TL;DR: I think the main reason is the same reason we aren’t donating to them: we think there are even more promising projects in terms of the effectiveness of a marginal $, and we are extremely funding constrained. I strongly agree with Elizabeth that all these projects (and many others) deserve more money.
Keeping in mind that I haven’t researched any of the projects, and that I’m definitely not an expert in grantmaking: I personally think that “the theory of change seems valuable, and worse projects are regularly funded” is not the right bar for estimating the relative value of a marginal dollar, as it doesn’t take into account funding gaps, costs, and actual results achieved.
As a data point on the perspective of a mostly uninformed, effectiveness-oriented small donor, here’s why I personally haven’t donated to these projects in 2023, starting with the two you mention.
I’m not writing this because I think these are good reasons to fund other projects, but as a potentially interesting data point on the psychology of an uninformed giver.
ALLFED:
Their theory of change seems really cool, but research organizations seem very hard to evaluate as a non-expert. I think 3 things all need to go right for research to be impactful:
The research needs to find “surprising”/”new” impactful interventions (or show that existing top interventions are surprisingly less cost-effective)
The research needs to be reliable and generally high quality
The research needs to be influential and decision-relevant for the right actors.
It’s really hard to evaluate each of the three as a non-expert. I would also be surprised if this were particularly neglected, as ALLFED is very famous in EA, and Denkenberger seems to have a good network. I also don’t know what more funding would lead to, and their track record is not clear to me after >6 years (though that is very much my own ignorance, and a reflection of how hard evaluating research is).
Charity Entrepreneurship/Ambitious Impact:
They’re possibly my favourite EA org (which is saying a lot; the bar is very high). I recommended allocating $50k to CE when I won a donor lottery. But because they’re so obviously cost-effective, if they ever have a funding need, I imagine tons of us would be really eager to jump in and help fill it. Including e.g. the EAIF. So, I personally would consider a donation to CE as counterfactually ~similar to a donation to the EAIF.
Regarding CE-incubated projects, I do donate a bit to them, but I personally believe that some of the medium-large donors in the CE seed network are very thoughtful and experienced grantmakers. So, I don’t expect the unfunded projects to be the most promising CE projects. Some projects like Healthier Hens do scale down due to lack of funding after some time, but I think a main reason in that case was that some proposed interventions turned out to not work or cost more than they expected. See their impact estimates.
Faunalytics:
They are super well known and have been funded by OpenPhil and the EA Animal Welfare Fund for specific projects, so I defer to those funders. While they have been an ACE-recommended charity for 8 years, I don’t know whether a marginal dollar has more impact there than at the other extremely impressive animal orgs.
Exotic Tofu:
It seems really hard to evaluate. Elizabeth mentions some issues, but in general my very uninformed opinion is that if it wouldn’t work as a for-profit, it might also be less promising as a non-profit compared to other (exceptional) animal welfare orgs.
Impact Certificates:
I think the first results weren’t promising, and I fear it’s mostly about predicting the judges’ scores, since it’s rare to have good metrics and evaluations. That said, Manifund seems cool, and I made a $12 offer for Legal Impact for Chickens to try it out.[1] Since you donate to them and have relevant specific expertise, you might have alpha here, and it might be worth checking out.
QURI:
My understanding is that most of their focus in the past few years has been on building a new programming language. While technically very impressive, I don’t fully understand the value proposition, and after four years they don’t seem to have a lot of users. The previous QURI project, www.foretold.io, doesn’t seem to have worked out, which is a small negative update. I’m personally more optimistic about projects like carlo.app, and I like that it’s for-profit.
Edit: see the object-level response from Ozzie; the above is somewhat wrong and I expect other points about other orgs to be wrong in similar ways
Community Building:
I’m personally unsure about the value of non-impact-oriented community building. I see a lot of events like “EA Karaoke Night”, which I think are great but:
I’m not sure they’re the most cost-effective way to mitigate burnout
I think there are very big downsides in encouraging people to rely on “EA” for both social and economic support
I worry that “EA” is getting increasingly defined in terms of social ties instead of impact-focus, and that this makes us less impactful and leads us to optimize for the wrong things. (Hopefully I’ll write a post about this soon. Basically, I find it suboptimal that someone who doesn’t change their career, donate, or volunteer, but goes to EA social events, is sometimes considered closer to the quintessential “EA” than e.g. Bill Gates.)
Independent grant-funded researchers:
See ALLFED above for why it’s hard for me to evaluate research projects, though this obviously depends a lot on the researcher. In any case, I think the point here is about better funding methodology/infrastructure, not just more funding.
Lightcone:
I hear conflicting things about the dynamics there (the point about “the bay area community”). I’m very far from the Bay Area, and I think projects there are really expensive compared to other great projects. I also thought they had less of a funding need nowadays, but again I know very little.
Please don’t update much on the above in your decisions on which projects to fund. I know almost nothing about most of the projects above, and I’m probably wrong. I also trust that grantmakers and other donors have much more information, experience, and grantmaking skill, and that they have thought much more about each of the orgs mentioned. This is just meant to be an answer to “Does anyone have a good explanation for the praise-funding mismatch?” that is basically a bunch of guessed examples of “many things can be very praise-worthy without being a great funding opportunity for many donors”.
But I really don’t expect to have more information than the AWF on this, and I think they’ll be the judge, so rationally, I should probably just have donated the money to the AWF. I think I’m just not the target audience for this.
Quick notes on your QURI section:
“after four years they don’t seem to have a lot of users” → I think it’s more fair to say this has been about 2 years. If you look at the commit history you can see that there was very little development for the first two years of that time.
https://github.com/quantified-uncertainty/squiggle/graphs/contributors
We’ve spent a lot of time on blog posts/research and other projects, as well as Squiggle Hub. (Though in the last year especially, we’ve focused on Squiggle.)
Regarding users, I’d agree there aren’t as many as I would have liked, but I think we do have some. If you look through the Squiggle Tag, you’ll see several EA groups that have used Squiggle.
We’ve been working with a few EA organizations on Squiggle setups that are mostly private.
I think for-profits have their space, but I also think that nonprofits and open-source/open organizations have a lot of benefits.
Thank you for the context! It’s a useful example of why it’s not trivial to evaluate projects without looking into the details.
Of course! In general I’m happy for people to make quick best-guess evaluations openly—in part, that helps others here correct things when there might be some obvious mistakes. :)
My thoughts were:
For many CE-incubated charities, the obvious counterfactual donation would be to GiveWell top charities, and that’s a really high bar.
I consider the possibility that a lot of ALLFED’s potential value proposition comes from a low probability of saving hundreds of millions to billions of lives in scenarios that would counterfactually neither lead to extinction nor produce major continuing effects thousands of years down the road.
If that is so, it is plausible that this kind of value proposition may not be particularly well suited to many neartermist donors (for whom the chain of contingencies leading to impact may be too speculative for their comfort level) or to many strong longtermist donors (for whom the effects thousands to millions of years down the road may be weaker than for other options seen as mitigating extinction risk more).
If you had a moral parliament of 50 neartermists & 50 longtermists that could fund only one organization (and by a 2⁄3 majority vote), one with this kind of potential impact model might do very well!
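The parliament intuition can be made concrete with a toy vote. All camp scores and org names below are hypothetical, chosen only to illustrate the mechanism: each camp’s favourite fails the supermajority threshold because the other camp votes it down, while an org both camps rate as decent clears it.

```python
# Toy moral-parliament vote: 50 neartermists + 50 longtermists,
# and funding requires a 2/3 supermajority (67 of 100 votes).
# Scores are hypothetical illustrations, not real donor data.

# Each member votes "yes" on an org iff their camp scores it >= 6/10.
camp_scores = {
    # org:                 (neartermist score, longtermist score)
    "bednets":             (9, 3),  # neartermist favourite
    "ai_safety":           (2, 9),  # longtermist favourite
    "gcr_food_resilience": (7, 7),  # decent by both camps' lights
}

def passes(scores, camp_sizes=(50, 50), threshold=67, bar=6):
    """Count camp votes and check the supermajority threshold."""
    yes = sum(size for s, size in zip(scores, camp_sizes) if s >= bar)
    return yes >= threshold

for org, scores in camp_scores.items():
    print(org, "funded?", passes(scores))
# bednets funded? False
# ai_safety funded? False
# gcr_food_resilience funded? True
```

Only the org with a broad-but-moderate impact model reaches 100 yes-votes; each camp’s favourite stalls at 50.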
I think this is right and important. A possible additional layer: some donors are more comfortable with experimental or hits-based giving than others. Those people disproportionately go into x-risk. The donors remaining in global poverty/health are both more averse to uncertainty and have options to avoid it (both objectively and vibe-wise).
I really agree with the first point, and the really high bar is the main reason all of these projects have room for more funding.
I somewhat disagree with the second point: my impression is that many donors are interested in mitigating non-existential global catastrophic risks (e.g. natural pandemics, climate change), but I don’t have much data to support this.
I don’t think “many donors are interested in mitigating non-existential global catastrophic risks” is necessarily inconsistent with the potential explanation for why organizations like ALLFED may get substantially more public praise than funding. It’s plausible to me that an org in that position might be unusually good at rating highly on many donors’ charts without being unusually good at rating at the very top of donors’ lists:
There’s no real limit on how many orgs one can praise, and preventing non-existential GCRs may win enough points on donors’ scoresheets to receive praise from the two groups I described above (focused neartermists and focused longtermists) in addition to its actual donors.
However, many small/mid-size donors may fund only their very top donation opportunities (e.g., top two, top five, etc.)
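The scoresheet point above can be sketched in a toy model (all donor names, org names, and scores below are made up): if every donor publicly praises any org scoring at least 7/10 on their personal scoresheet but only funds their top two, an org that is everyone’s #3 collects praise from every donor and funding from none.

```python
# Toy model of the praise-funding mismatch. Every donor praises any org
# scoring >= 7 on their personal scoresheet, but funds only their top 2.
# All names and scores are made up for illustration.
donor_scores = {
    "donor_a": {"org_x": 9, "org_y": 8, "allfed_like": 7},
    "donor_b": {"org_z": 9, "org_w": 8, "allfed_like": 7},
    "donor_c": {"org_y": 9, "org_z": 8, "allfed_like": 7},
}

praise = {}   # org -> number of donors praising it
funding = {}  # org -> number of donors funding it

for scores in donor_scores.values():
    top2 = sorted(scores, key=scores.get, reverse=True)[:2]
    for org, score in scores.items():
        if score >= 7:
            praise[org] = praise.get(org, 0) + 1
        if org in top2:
            funding[org] = funding.get(org, 0) + 1

# The org everyone rates 7/10 is praised by all three donors, funded by none.
print(praise["allfed_like"], funding.get("allfed_like", 0))  # 3 0
```

The mechanism doesn’t require anyone to be insincere: praise is unbounded, funding slots are scarce.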
Hi Jason,
Here is why I do not recommend donating to ALLFED, for which I work as a contractor. If one wants to:
Minimise existential risk, one had better donate to the best AI safety interventions, namely the Long-Term Future Fund (LTFF).
Maximise nearterm welfare, one had better donate to the best animal welfare interventions.
I estimate corporate campaigns for chicken welfare, like the ones promoted by The Humane League, are 1.37 k times as cost-effective as GiveWell’s top charities.
Maximise nearterm human welfare in a robust way, one had better donate to GiveWell’s funds.
I guess the cost-effectiveness of ALLFED is of the same order of magnitude as that of GiveWell’s funds (relatedly), but it is way less robust (in the sense that my best guess will change more upon further investigation).
CEARCH estimated “the cost-effectiveness of conducting a pilot study of a resilient food source to be 10,000 DALYs per USD 100,000, which is around 14× as cost-effective as giving to a GiveWell top charity”. “The result is highly uncertain. Our probabilistic model suggests a 53% chance that the intervention is less cost-effective than giving to a GiveWell top charity, and an 18% chance that it is at least 10× more cost-effective. The estimated cost-effectiveness is likely to fall if the intervention is subjected to further research, due to optimizer’s curse”. I guess CEARCH is overestimating cost-effectiveness (see my comments).
Maximise nearterm human welfare supporting interventions related to nuclear risk, one had better donate to Longview’s Nuclear Weapons Policy Fund.
My impression is that efforts to decrease the number of nuclear detonations are more cost-effective than ones to decrease famine deaths caused by nuclear winter. This is partly informed by CEARCH estimating that lobbying for arsenal limitation is 5 k times as cost-effective as GiveWell’s top charities, although I guess the actual cost-effectiveness is more like 0.5 to 50 times that of GiveWell’s top charities.
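For intuition on how CEARCH’s resilient-food numbers above can fit together (a central estimate around 14× GiveWell alongside a 53% chance of being below 1× and an 18% chance of being above 10×), here is a minimal Monte Carlo sketch with a right-skewed lognormal. The parameters are assumed purely to roughly reproduce that shape; they are not taken from CEARCH’s actual model.

```python
import random

# A right-skewed (lognormal) distribution over "multiples of GiveWell
# top-charity cost-effectiveness" can have a mean far above 1x even when
# most of its probability mass sits below 1x.
random.seed(0)
mu, sigma = -0.2, 2.4  # lognormal parameters (assumed for illustration)
draws = [random.lognormvariate(mu, sigma) for _ in range(100_000)]

mean_multiple = sum(draws) / len(draws)
p_below_1x = sum(d < 1 for d in draws) / len(draws)
p_above_10x = sum(d >= 10 for d in draws) / len(draws)

print(f"mean ~ {mean_multiple:.1f}x; "
      f"P(<1x) ~ {p_below_1x:.2f}; P(>=10x) ~ {p_above_10x:.2f}")
```

With these assumed parameters, the mean lands around 14× while roughly half the draws fall below 1×: the headline multiple is driven by a thin right tail, which is also why the optimizer’s-curse caveat in the quote matters.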
As always (unless otherwise stated), the views expressed here are my own, not those of ALLFED.
Some hypotheses:
I’m wrong, and they are adequately funded
I’m wrong and they’re not outstanding orgs, but discovering that takes work the praisers haven’t done.
The praise is a way to virtue signal, but people don’t actually put their money behind it.
The praise is truly meant and people put their money behind it, but none of the praise is from the people with real money.
I believe CE has received OpenPhil money and ALLFED CEA and SFF money, just not as much as they wanted. Maybe the difference is not in # of grants approved, but in how much room for funding big funders believe they have or want to fill.
I’m not sure of CE’s funding situation; it was the incubated orgs that they pitched as high-need.
Maybe the OpenPhil AI and meta teams are more comfortable fully funding something than other teams.
ALLFED also gets academic grants; maybe funders fear their money would replace those rather than stack on top of them.
OpenPhil has a particular grant cycle, maybe it doesn’t work for some orgs (at least not as their sole support).