A friend asked me which projects in EA I thought deserved more money, especially ones that seemed to be held back by insufficient charisma of the founders. After a few names he encouraged me to write it up. This list is very off the cuff and tentative: in most cases I have pretty minimal information on the project, and they’re projects I incidentally encountered on EAF. If you have additions I encourage you to comment with them.
The main list
The bar here is “the theory of change seems valuable, and worse projects are regularly funded”.
Faunalytics
Faunalytics is a data analysis firm focused on metrics related to animal suffering. I searched high and low for health data on vegans that included ex-vegans, and they were the only place I found anything that had any information from ex-vegans. They shared their data freely and offered some help with formatting, although in the end it was too much work to do my own analysis.
I do think their description minimized the problems they found. But they shared enough information that I could figure that out rather than relying on their interpretation, and that’s good enough.
ALLFED
EA is trend-following to unfortunate degrees. ALLFED picked the important but unsexy target of food security during catastrophes, and has been steadfastly pursuing it for 7 years.
Charity Entrepreneurship pilot fund
CE runs a bootcamp/incubator that produces several new charities each run. I don’t think every project that comes out of this program is gold. I don’t even know of any projects that make me go “yes, definitely amazing”. But they are creating founder-talent where none existed before, and getting unsexy projects implemented, and building up their own skill in developing talent.
CE recently opened up the funding circle for their incubated projects.
Exotic Tofu Project, perhaps dependent on getting a more charismatic co-founder
I was excited by George Stiffman’s original announcement of a plan to bring unknown Chinese tofus into the west and marketing it to foodie omnivores as desirable and fun. This was a theory of change that could work, and that avoided the ideological sirens that crash so many animal suffering projects on the rocks.
Then he released his book. It was titled Broken Cuisine: Chinese tofu, Western cooking, and a hope to save our planet. The blurb opens with “Our meat-based diets are leading to antibiotic-resistant superbugs, runaway climate change, and widespread animal cruelty. Yet, our plant-based alternatives aren’t appealing enough. This is our Broken Cuisine. I believe we must fix it.” This is not a fun, high-status leisure activity: this is doom and drudgery, aimed at the already-converted. I think doom-and-drudgery strategies are oversupplied right now, but I was especially sad to see this from someone I thought understood the power of offering options that were attractive on people’s own terms.
This was supposed to be a list of projects that are underfunded due to lack of charisma; unfortunately, this project requires charisma. I would still love to see it succeed, but I think that will require a partner who is good at convincing the general public that something is high-status. My dream is a charming, high-status reducetarian or ameliatarian foodie influencer, but just someone who understood omnivores on their own terms would be an improvement.
Impact Certificates
I love impact certificates as a concept, but they’ve yet to take off. They suffer from both lemon and coordination problems. They’re less convenient than normal grants, so impact certificate markets only get projects that couldn’t get funding elsewhere. And it’s a 2- or 3-sided market (producers, initial purchasers, later purchasers).
There are a few people valiantly struggling to make impact certificates a thing. I think these are worth funding directly, but it’s also valuable to buy impact certificates. If you don’t like the uncertainty of the project applications, you can always be a secondary buyer; those are perhaps even rarer.
Projects I know doing impact certificates
Manifund. Manifund is the IC subproject of Manifold Markets, which manifestly does not suffer from lack of extroversion in its founder. But impact certs are just an uphill battle, and private conversations with founder Austin Chen indicated they had a lot of room for more funding.
ACX Grants runs an impact certificate program, managed by Manifund.
Oops, the primary project I was thinking of has gone offline.
Ozzie Gooen/QURI
Full disclosure: I know Ozzie socially, although not so well as to put him in the Conflicts of Interest section.
Similar to ALLFED: I don’t know that QURI’s estimation tools are the most important project, but I do know Ozzie has been banging the drums on forecasting for years, way before Austin Chen made it cool, and it’s good for the EA ecosystem to have that kind of persistent pursuit in the mix.
Community Building
Most work done under the name “community building” is recruiting. Recruiting can be a fine thing to do, but it makes me angry to see it mislabeled this way while actual community building starves. Community recruiting is extremely well funded, at least for people willing to frame their project in terms of accepted impact metrics. However, if you do the harder part of actually building and maintaining a community that nourishes members, and are uncomfortable pitching impact when that’s not your focus, money is very scarce. This is a problem because:
EA has an extractive streak that can burn people out. Having social support that isn’t dependent on validation from EA authorities is an important counterbalance.
The people who are best at this are the ones doing it for its own sake rather than optimizing for short-term proxies on long-term impact. Requiring fundees to aim at legible impact selects for liars and people worse at the job.
People driven to community build for its own sake are less likely to pursue impact in other ways. Even if you think impact-focus is good and social builders are not the best at impact, giving them building work frees up someone impact-focused to work on something else.
Unfortunately I don’t have anyone specific to donate to, because the best target I know already burnt out and quit. But I encourage you to be on the lookout in your local community. Or be the change I want to see in the world: being An Organizer is hard, but hosting occasional movie nights or proactively cleaning at someone else’s party is pretty easy, and can go a long way towards creating a healthy connected community.
Projects with Conflicts of Interest
The first section featured projects I know only a little about. This section includes projects I know way too much about, to the point I’m at risk of bias.
Independent grant-funded researchers
Full disclosure: I am an independent, sometimes grant-funded, researcher.
This really needs to be its own post, but in a nutshell: relying on grants for your whole income sucks, and often leaves you with gaps or at least a lot of uncertainty. I’m going to use myself as an example because I haven’t run surveys or anything, but I expect I’m on the easier end of things.
The core difficulty with grant financing: grantmakers don’t want to pay too far in advance. Grantmakers don’t want to approve new grants until you’ve shown results from the last grant. Results take time. Grant submissions, approval, and payout also take time. This means that, at best, you spend many months not knowing if you’ll have a funding gap, and many times the answer will be yes. I don’t know if this is the grantmakers’ fault, but many people feel pressure to ask for as little money as possible, which makes the gaps a bigger hardship.
I get around this by contracting and treating grants as one client out of several, but I’m lucky that’s an option. It also means I spend time on projects that EA would consider suboptimal. Other problems: I have to self-fund most of my early work because I don’t want to apply for a grant until I have a reasonable idea of what I could hope to accomplish. There are projects I’ve been meaning to do for years, but they are too big to self-fund and too illegible and inspiration-dependent to survive a grant process. I have to commit to a project at application time but then not start until the application is approved, which could be months later.
All-purpose funding with a gentle reapplication cycle would let independents take more risks at a lower psychological toll. Or test out Austin Chen’s idea of ~employment-as-a-service. Alas, neither would help me right this second: illness has put me behind on some existing grant-funded work, so I shouldn’t accept more money right now. But other independents could; if you know of any, please leave a pitch in the comments.
Lightcone
Full disclosure: I volunteer and am very occasionally paid for work at Lightcone, and have deep social ties with the team.
Lightcone’s issue isn’t so much charisma as that the CEO is allergic to accepting money with strings, and the EA-offered money comes with strings. I like Lightcone’s work, and some of my favorite parts of their work would have been much more difficult without that independence.
Very grateful for the kind words, Elizabeth! Manifund is facing a funding shortfall at the moment, and will be looking for donors soon (once we get the ACX Grants Impact Market out the door), so I really appreciate the endorsement here.
(Fun fact: Manifund has never actually raised donations for our own core operations/salary; we’ve been paid ~$75k in commission to run the regrantor program, and otherwise have been just moving money on behalf of others.)
what would fundraising mean here? is it for staffing, or donations to programs, or to your grantmakers to distribute as they see fit?
i’ve been working at manifund for the last couple months, figured i’d respond where austin hasn’t (yet)
here’s a grant application for the meta charity funders circle that we submitted a few weeks ago, which i think is broadly representative of who we are & what we’re raising for.
tldr of that application:
core ops
staff salaries
misc things (software, etc)
programs like regranting, impact certificates, etc, for us to run how we think is best[1]
additionally, if a funder was particularly interested in a specific funding program, we’re also happy to provide them with infrastructure. e.g. we’re currently facilitating the ACX grants, we’re probably (70%) going to run a prize round for dwarkesh patel, and we’d be excited about building/hosting the infrastructure for similar funding/prize/impact cert/etc programs. this wouldn’t really look like [funding manifund core ops, where the money goes to manifund], but rather [running a funding round on manifund, where the funding mostly[2] goes to object-level projects that aren’t manifund].
i’ll also add that we’re less funding-crunched than when austin first commented; we’ll be running another regranting round, for which we’ll be paid another $75k in commission. this was new info between his comment and this comment. (details of this are very rough/subject to change/not firm.)
i’m keeping this section intentionally vague. what we want is [sufficient funding to be able to run the programs we think are best, iterate & adjust quickly, etc] not [this specific particular program in this specific particular way that we’re tying ourselves down to]. we have experimentation built into our bones, and having strings attached breaks our ability to experiment fast.
we often charge a fee of 5% of the total funding; we’ve been paid $75k in commission to run the $1.5mm regranting round last year.
I probably would have had ALLFED and CE on a list like this had I written it (don’t know as much about most of the other selections). It seems to me that both organizations get, on a relative basis, a whole lot more public praise than they get funding. Does anyone have a good explanation for the praise-funding mismatch?
TL;DR: I think the main reason is the same reason we aren’t donating to them: we think there are even more promising projects in terms of the effectiveness of a marginal $, and we are extremely funding constrained. I strongly agree with Elizabeth that all these projects (and many others) deserve more money.
Keeping in mind that I haven’t researched any of the projects and I’m definitely not an expert in grantmaking, I personally think that “the theory of change seems valuable, and worse projects are regularly funded” is not the right bar to estimate the relative value of a marginal dollar, as it doesn’t take into account funding gaps, costs, and actual results achieved.
As a data point on the perspective of a mostly uninformed effectiveness-oriented small donor, here’s why I personally haven’t donated to these projects in 2023, starting from the 2 you mention.
I’m not writing this because I think they are good reasons to fund other projects, but as a potentially interesting data-point in the psychology of an uninformed giver.
ALLFED:
Their theory of change seems really cool, but research organizations seem very hard to evaluate as a non-expert. I think 3 things all need to go right for research to be impactful:
The research needs to find “surprising”/”new” impactful interventions (or show that existing top interventions are surprisingly less cost-effective)
The research needs to be reliable and generally high quality
The research needs to be influential and decision-relevant for the right actors.
It’s really hard to evaluate each of the three as a non-expert. I would also be surprised if this was particularly neglected, as ALLFED is very famous in EA, and Denkenberger seems to have a good network. I also don’t know what more funding would lead to, and their track record is not clear to me after >6 years (but that is very much my ignorance; and because evaluating research is hard)
Charity Entrepreneurship/Ambitious Impact:
They’re possibly my favourite EA org (which is saying a lot; the bar is very high). I recommended allocating $50k to CE when I won a donor lottery. But because they’re so obviously cost-effective, if they ever have a funding need, I imagine tons of us would be really eager to jump in and help fill it. Including e.g. the EAIF. So, I personally would consider a donation to CE as counterfactually ~similar to a donation to the EAIF.
Regarding CE-incubated projects, I do donate a bit to them, but I personally believe that some of the medium-large donors in the CE seed network are very thoughtful and experienced grantmakers. So, I don’t expect the unfunded projects to be the most promising CE projects. Some projects like Healthier Hens do scale down due to lack of funding after some time, but I think a main reason in that case was that some proposed interventions turned out to not work or cost more than they expected. See their impact estimates.
Faunalytics:
They are super well known and have been funded by OpenPhil and the EA Animal Welfare Fund for specific projects; I defer to them. While they have been an ACE-recommended charity for 8 years, I don’t know if the marginal dollar has more impact there compared to the other extremely impressive animal orgs.
Exotic Tofu:
It seems really hard to evaluate. Elizabeth mentions some issues, but in general my very uninformed opinion is that if it wouldn’t work as a for-profit, it might be less promising as a non-profit compared to other (exceptional) animal welfare orgs.
Impact Certificates:
I think the first results weren’t promising, and I fear it’s mostly about predicting the judges’ scores, since it’s rare to have good metrics and evaluations. That said, Manifund seems cool, and I made a $12 offer for Legal Impact for Chickens to try it out.[1] Since you donate to them and have relevant specific expertise, you might have alpha here and it might be worth checking out.
QURI:
My understanding is that most of their focus in the past few years has been building a new programming language. While technically very impressive, I don’t fully understand the value proposition and after four years they don’t seem to have a lot of users. The previous QURI project www.foretold.io didn’t seem to have worked out, which is a small negative update. I’m personally more optimistic about projects like carlo.app and I like that it’s for-profit.
Edit: see the object-level response from Ozzie; the above is somewhat wrong and I expect other points about other orgs to be wrong in similar ways
Community Building:
I’m personally unsure about the value of non-impact-oriented community building. I see a lot of events like “EA Karaoke Night”, which I think are great but:
I’m not sure they’re the most cost-effective way to mitigate burnout
I think there are very big downsides in encouraging people to rely on “EA” for both social and economic support
I worry that “EA” is getting increasingly defined in terms of social ties instead of impact-focus, and that makes us less impactful and optimize for the wrong things (hopefully, I’ll write a post soon about this. Basically, I find it suboptimal that someone who doesn’t change their career, donate, or volunteer, but goes to EA social events, is sometimes considered closer to the quintessential “EA” compared to e.g. Bill Gates)
Independent grant-funded researchers:
See ALLFED above for why it’s hard for me to evaluate research projects, but mostly I think this obviously depends a lot on the researcher. But I think the point is about better funding methodology/infrastructure and not just more funding.
Lightcone:
I hear conflicting things about the dynamics there (the point about “the bay area community”). I’m very far from the Bay Area, and I think projects there are really expensive compared to other great projects. I also thought they had less of a funding need nowadays, but again I know very little.
Please don’t update much on the above in your decisions on which projects to fund. I know almost nothing about most of the projects above and I’m probably wrong. I also trust grantmakers and other donors have much more information, experience, and grantmaking skills; and that they have thought much more about each of the orgs mentioned. This is just meant to be an answer to “Does anyone have a good explanation for the praise-funding mismatch?” that basically is a bunch of guessed examples for: “many things can be very praise-worthy without being a great funding opportunity for many donors”
But I really don’t expect to have more information than the AWF on this, and I think they’ll be the judge, so rationally, I should probably just have donated the money to the AWF. I think I’m just not the target audience for this.
Quick notes on your QURI section:
“after four years they don’t seem to have a lot of users” → I think it’s more fair to say this has been about 2 years. If you look at the commit history you can see that there was very little development for the first two years of that time.
https://github.com/quantified-uncertainty/squiggle/graphs/contributors
We’ve spent a lot of time on blog posts/research and other projects, as well as Squiggle Hub. (Though in the last year especially, we’ve focused on Squiggle.)
Regarding users, I’d agree it’s not as many as I would have liked, but I think we do have some. If you look through the Squiggle Tag, you’ll see several EA groups that have used Squiggle.
We’ve been working with a few EA organizations on Squiggle setups that are mostly private.
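For readers who haven’t seen Squiggle: it’s a small language for writing estimates over probability distributions rather than point values. The snippet below is a rough Python stand-in (not Squiggle, and not QURI code) for the kind of Monte Carlo cost-effectiveness estimate the language is designed to make concise; every number in it is invented for illustration.

```python
import math
import random

def lognormal_from_90ci(low, high):
    """Sampler for a lognormal whose 5th/95th percentiles are roughly (low, high)."""
    mu = (math.log(low) + math.log(high)) / 2
    sigma = (math.log(high) - math.log(low)) / (2 * 1.645)
    return lambda: random.lognormvariate(mu, sigma)

# Hypothetical inputs for a toy intervention (all numbers made up).
cost_per_person = lognormal_from_90ci(5, 50)          # dollars spent per person reached
dalys_per_person = lognormal_from_90ci(0.001, 0.02)   # DALYs averted per person reached

# Propagate the uncertainty by sampling, then summarize the resulting distribution.
samples = sorted(dalys_per_person() / cost_per_person() for _ in range(100_000))
median = samples[len(samples) // 2]
p5, p95 = samples[int(0.05 * len(samples))], samples[int(0.95 * len(samples))]
print(f"DALYs per dollar: median {median:.5f}, 90% interval ({p5:.5f}, {p95:.5f})")
```

In Squiggle, my understanding is that the same model collapses to a few declarations of the form `x = 5 to 50`; the point of the tool is making this kind of estimate cheap enough to write and share.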
I think for-profits have their space, but I also think that nonprofits and open-source/open organizations have a lot of benefits.
Thank you for the context! Useful example of why it’s not trivial to evaluate projects without looking into the details.
Of course! In general I’m happy for people to make quick best-guess evaluations openly—in part, that helps others here correct things when there might be some obvious mistakes. :)
My thoughts were:
For many CE-incubated charities, the obvious counterfactual donation would be to GiveWell top charities, and that’s a really high bar.
I consider the possibility that a lot of ALLFED’s potential value proposition comes from a low probability of saving hundreds of millions to billions of lives in scenarios that would counterfactually neither lead to extinction nor produce major continuing effects thousands of years down the road.
If that is so, it is plausible that this kind of value proposition may not be particularly well suited to many neartermist donors (for whom the chain of contingencies leading to impact may be too speculative for their comfort level) or to many strong longtermist donors (for whom the effects thousands to millions of years down the road may be weaker than for other options seen as mitigating extinction risk more).
If you had a moral parliament of 50 neartermists & 50 longtermists that could fund only one organization (and by a 2⁄3 majority vote), one with this kind of potential impact model might do very well!
I think this is right and important. Possible additional layer: some donors are more comfortable with experimental or hits-based giving than others. Those people disproportionately go into x-risk. The donors remaining in global poverty/health are both more averse to uncertainty and have options to avoid it (both objectively, and vibe-wise).
I really agree with the first point, and the really high bar is the main reason all of these projects have room for more funding.
I somewhat disagree with the second point: my impression is that many donors are interested in mitigating non-existential global catastrophic risks (e.g. natural pandemics, climate change), but I don’t have much data to support this.
I don’t think “many donors are interested in mitigating non-existential global catastrophic risks” is necessarily inconsistent with the potential explanation for why organizations like ALLFED may get substantially more public praise than funding. It’s plausible to me that an org in that position might be unusually good at rating highly on many donors’ charts, without being unusually good at rating at the very top of donors’ lists:
There’s no real limit on how many orgs one can praise, and preventing non-existential GCRs may win enough points on donors’ scoresheets to receive praise from the two groups I described above (focused neartermists and focused longtermists) in addition to its actual donors.
However, many small/mid-size donors may fund only their very top donation opportunities (e.g., top two, top five, etc.)
Hi Jason,
Here is why I do not recommend donating to ALLFED, for which I work as a contractor. If one wants to:
Minimise existential risk, one had better donate to the best AI safety interventions, namely the Long-Term Future Fund (LTFF).
Maximise nearterm welfare, one had better donate to the best animal welfare interventions.
I estimate corporate campaigns for chicken welfare, like the ones promoted by The Humane League, are 1.37 k times as cost-effective as GiveWell’s top charities.
Maximise nearterm human welfare in a robust way, one had better donate to GiveWell’s funds.
I guess the cost-effectiveness of ALLFED is of the same order of magnitude as that of GiveWell’s funds (relatedly), but it is way less robust (in the sense that my best guess will change more upon further investigation).
CEARCH estimated “the cost-effectiveness of conducting a pilot study of a resilient food source to be 10,000 DALYs per USD 100,000, which is around 14× as cost-effective as giving to a GiveWell top charity”. “The result is highly uncertain. Our probabilistic model suggests a 53% chance that the intervention is less cost-effective than giving to a GiveWell top charity, and an 18% chance that it is at least 10× more cost-effective. The estimated cost-effectiveness is likely to fall if the intervention is subjected to further research, due to optimizer’s curse”. I guess CEARCH is overestimating cost-effectiveness (see my comments; a rough check of the implied numbers is sketched after this list).
Maximise nearterm human welfare supporting interventions related to nuclear risk, one had better donate to Longview’s Nuclear Weapons Policy Fund.
My impression is that efforts to decrease the number of nuclear detonations are more cost-effective than ones to decrease famine deaths caused by nuclear winter. This is partly informed by CEARCH estimating that lobbying for arsenal limitation is 5 k times as cost-effective as GiveWell’s top charities, although I guess the actual cost-effectiveness is more like 0.5 to 50 times that of GiveWell’s top charities.
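As a quick sanity check on the CEARCH figures quoted above, here is the arithmetic they imply (a rough Python sketch; the only inputs are the two quoted numbers, so the GiveWell baseline below is derived from them rather than taken from GiveWell directly):

```python
# The two figures quoted from the CEARCH estimate above.
pilot_dalys_per_100k = 10_000   # DALYs averted per USD 100,000 for the pilot study
multiple_of_givewell = 14       # "around 14x as cost-effective as a GiveWell top charity"

# Implied GiveWell baseline (derived from the quoted figures, not from GiveWell itself).
givewell_dalys_per_100k = pilot_dalys_per_100k / multiple_of_givewell
usd_per_daly = 100_000 / givewell_dalys_per_100k
print(f"Implied baseline: ~{givewell_dalys_per_100k:.0f} DALYs per $100k, "
      f"i.e. roughly ${usd_per_daly:.0f} per DALY averted")
# ~714 DALYs per $100k, i.e. roughly $140 per DALY averted.
```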
As always (unless otherwise stated), the views expressed here are my own, not those of ALLFED.
Some hypotheses:
I’m wrong, and they are adequately funded
I’m wrong and they’re not outstanding orgs, but discovering that takes work the praisers haven’t done.
The praise is a way to virtue signal, but people don’t actually put their money behind it.
The praise is truly meant and people put their money behind it, but none of the praise is from the people with real money.
I believe CE has received OpenPhil money, and ALLFED has received CEA and SFF money, just not as much as they wanted. Maybe the difference is not in # of grants approved, but in how much room for funding big funders believe they have or want to fill.
I’m not sure of CE’s funding situation, it was the incubated orgs that they pitched as high-need.
Maybe the OpenPhil AI and meta teams are more comfortable fully funding something than other teams.
ALLFED also gets academic grants; maybe funders fear their money will replace those rather than stack on top of them.
OpenPhil has a particular grant cycle, maybe it doesn’t work for some orgs (at least not as their sole support).
I found this list very helpful, thank you!
On exotic tofu: I am not yet convinced that Stiffman doesn’t have the requisite charisma. Is your concern that he’s vegan (hence less relatable to non-vegans), his messaging in Broken Cuisine specifically, or something else? I am sympathetic to the first concern, but not as convinced by the second. In particular, from what little else I’ve read from Stiffman, his messaging is more like his original post on this Forum: positive and minimally doom-y. See, for example, his article in Asterisk, this podcast episode (on what appears to be a decently popular podcast?), and his newsletter.
Have you reached out to him directly about your concerns about his messaging? Your comments seem very plausible to me and reaching out seems to have a high upside.
I sent a message to George Stiffman through a mutual friend and never heard back, so I gave up after 2 pings (to the friend).
Thanks for mentioning places Stiffman comes across better. I’ve read the Asterisk article and found it irrelevant to his consumer-aimed work. Maybe the Bittman podcast is consumer-targeted and an improvement, I dunno. For now I can’t get over that book title and blurb.
Can you elaborate on what you mean by “the EA-offered money comes with strings?”
Not well. I only have snippets of information, and it’s private (Habryka did sign off on that description).
I don’t know if this specifically has come up with regard to Lightcone or Lighthaven, but I know Habryka has been steadfastly opposed to the kind of slow, cautious, legally-defensive actions coming out of EVF. I expect he would reject funding that demanded that approach (and if he accepted it, I’d be disappointed in him, given his public statements).
Thanks for putting the Exotic Tofu Project on my screen! I also like all the others.
We (me and my cofounder) run yet another “impact certificates” project. We started out with straightforward impact certificates, but the legal hurdles for us and for the certificate issuers turned out to be too high and possibly (for us) insurmountable, at least in the US.
We instead turned to the system that works for carbon credits. These are not so much traded at the level of the individual certificate or impact claim; instead, validators confirm that the impact has happened according to certain standards and then pay out the impact credits (or carbon credits) associated with that standard.
That system seems more promising to us: it has all the advantages of impact certificate markets, plus the advantage that one party (e.g., us) can fight the legal battle in the US once for this impact credit (and can even rely on the precedent of carbon credits), thereby paving the way for all the other market participants that come after, who no longer have to worry about the legalities. There are already a number of non-EA organizations working toward a similar vision.
Even outside such restrictive jurisdictions as the US, this system has the advantage that it allows for deeper liquidity on the impact credit markets (compared to the auctions for individual impact certificates). But the US is an important market for EA and AI safety, so we couldn’t just ignore it even if it hadn’t been for this added benefit.
We started bootstrapping this system with GiveWiki in January of last year. But over the course of the year we found it very hard to find anyone who wanted to use the system as a donor/grantmaker. Most of the grantmakers we were in touch with had lost their funding in Nov. 2022; others wanted to wait until the system is mature; and many smaller donors had no trouble finding great funding gaps without our help.
We will keep the platform running, but we’ll probably have to wait for the next phase of funding overhang, when there are more grantmakers and they actually have trouble finding funding gaps.
(H/t to Dony for linking this thread to me!)
GiveWiki just looks like a list of charities to me; what’s the additional thing you are doing?
Frankie made a nice explainer video for that!
What a market does, idealizing egregiously, is let people with special knowledge or insight invest in things early. Less informed people (some of whom have more capital) can then watch the valuations and invest in projects with high and increasing valuations, or some other valuation-based marker of quality: a process of price discovery.
AngelList, for example, facilitates that. They have a no-action letter from the SEC (and the startups on AngelList have at least a Regulation D filing, I imagine), so they didn’t have to register as a broker-dealer to be allowed to match startups to investors. I think they have some funds that are led by seasoned investors, and then the newbie investors can follow the seasoned ones by investing in their funds. Or some mechanism of that sort.
We’re probably not getting a no-action letter, and we don’t have the money yet to start the legal process to get our impact credits registered with the CFTC. So instead we recognized that in the above example investors are treating valuations basically like scores. So we’re just using scores for now. (Some rich people say money is just for keeping score. We’re not rich, so we use scores directly.)
The big advantage of actual scores (rather than using monetary valuations like scores) is that it’s legally easy. The disadvantage is that we can’t pitch GiveWiki to profit-oriented investors.
So unlike AngelList, we’re not giving profit-oriented investors the ability to follow more knowledgeable profit-oriented investors, but we’re allowing donors/grantmakers to follow more knowledgeable donors/grantmakers. (One day, with the blessing of the CFTC, we can hopefully lift that limitation.)
We usually frame this as a process of three phases (a toy sketch of phase 1 follows the list):
Implement the equivalent of price discovery with a score. (The current state of GiveWiki.)
Pay out a play money currency according to the score.
Turn the play money currency into a real impact credit that can be sold for dollars (with the blessing of the CFTC).
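To make phase 1 concrete, here is a toy sketch of what “price discovery with a score” could look like. This is an invented illustration, not GiveWiki’s actual scoring rule; the donor weights, names, and aggregation formula are all hypothetical.

```python
# Toy illustration of "price discovery with a score" (phase 1). NOT GiveWiki's
# actual algorithm; the weights and the aggregation rule are made up.

# Hypothetical donor track-record weights (more knowledgeable donors count for more).
donor_weight = {"alice": 3.0, "bob": 1.0, "carol": 2.0}

# Hypothetical support: which donors back which projects, and how strongly (0-1).
support = {
    "project_x": {"alice": 1.0, "bob": 0.5},
    "project_y": {"bob": 1.0, "carol": 0.5},
}

def project_score(backers: dict) -> float:
    """Weighted sum of donor support; a stand-in for a market valuation."""
    return sum(donor_weight[d] * strength for d, strength in backers.items())

for name, backers in support.items():
    print(name, project_score(backers))

# Less-informed donors could then "follow" high-scoring projects, analogous to
# following rising valuations on a market; phases 2 and 3 would turn the score
# into play money and eventually into a tradable impact credit.
```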