Hey there~ I’m Austin, currently building https://manifund.org. Always happy to meet people; reach out at akrolsmir@gmail.com, or find a time on https://calendly.com/austinchen/manifold !
I’m not aware of any projects that aim to advise what we might call “Small Major Donors”: people giving away perhaps $20k-$100k annually.
We don’t advertise very much, but my org (Manifund) does try to fill this gap:
Our main site, https://manifund.org/, allows individuals and orgs to publish charitable projects and raise funding in public, usually for projects in the range of $10k-$200k
We generally focus on: good website UX, transparency (our grants, reasoning, website code and meeting notes are all public), moving money fast (~1 week rather than months)
We are more self-serve than advisory: we mostly expect our donors to find projects they like themselves, which they can do because the grant proposals include large amounts of detail, and they can chat directly with the project creators in our comments section
Though, we have experimented with promoting good projects via things like impact certs & quadratic funding rounds, or just posting recommendations on our blog
In the EA space, we’re particularly open to weird arrangements; beyond providing lightweight fiscal sponsorship to hundreds of individuals and experimenting with funding mechanisms, we have eg loaned money to aligned orgs and invested in for-profit enterprises
If you’re interested in donating medium-sized amounts in unusual ways, reach out to me at austin@manifund.org!
I encourage Sentinel to add a paid tier on their Substack, just as an easy mechanism for folks like you & Saul to give money, without paywalling anything. While eg $10/mo subscriptions are unlikely to meaningfully affect Sentinel’s finances at its current stage, I think getting dollars in the bank can be a meaningful proof of value, both to yourselves and to other donors.
@Habryka has stated that Lightcone has been cut off from OpenPhil/GV funding; my understanding is that OP/GV/Dustin do not like the rationalism brand because it attracts right-coded folks. Many kinds of AI safety work also seem cut off from this funding; reposting a comment from Oli:
Epistemic status: Speculating about adversarial and somewhat deceptive PR optimization, which is inherently very hard and somewhat paranoia inducing. I am quite confident of the broad trends here, but it’s definitely more likely that I am getting things wrong here than in other domains where evidence is more straightforward to interpret, and people are less likely to shape their behavior in ways that include plausible deniability and defensibility.
...
As a concrete example, as far as I can piece together from various things I have heard, Open Phil does not want to fund anything that is even slightly right of center in any policy work. I don’t think this is because of any COIs, it’s because Dustin is very active in the Democratic Party and doesn’t want to be affiliated with anything that is right-coded. Of course, this has huge effects by incentivizing polarization of AI policy work with billions of dollars, since any AI Open Phil funded policy organization that wants to engage with people on the right might just lose all of their funding because of that, and so you can be confident they will steer away from that.
Open Phil is also very limited in what they can say about what they can or cannot fund, because that itself is something that they are worried will make people annoyed with Dustin, which creates a terrible fog around how OP is thinking about stuff.[1]
Honestly, I think there might no longer be a single organization that I have historically been excited about that OpenPhil wants to fund. MIRI could not get OP funding, FHI could not get OP funding, Lightcone cannot get OP funding, my best guess is Redwood could not get OP funding if they tried today (though I am quite uncertain of this), most policy work I am excited about cannot get OP funding, the LTFF cannot get OP funding, any kind of intelligence enhancement work cannot get OP funding, CFAR cannot get OP funding, SPARC cannot get OP funding, FABRIC (ESPR etc.) and Epistea (FixedPoint and other Prague-based projects) cannot get OP funding, not even ARC is being funded by OP these days (in that case because of COIs between Paul and Ajeya).[2] I would be very surprised if Wentworth’s work, or Wei Dai’s work, or Daniel Kokotajlo’s work, or Brian Tomasik’s work could get funding from them these days. I might be missing some good ones, but the funding landscape is really quite thoroughly fucked in that respect. My best guess is Scott Alexander could not get funding, but I am not totally sure.[3]
I cannot think of anyone who I would credit with the creation or shaping of the field of AI Safety or Rationality who could still get OP funding. Bostrom, Eliezer, Hanson, Gwern, Tomasik, Kokotajlo, Sandberg, Armstrong, Jessicata, Garrabrant, Demski, Critch, Carlsmith, would all be unable to get funding[4] as far as I can tell. In as much as OP is the most powerful actor in the space, the original geeks are being thoroughly ousted.[5]
In-general my sense is if you want to be an OP longtermist grantee these days, you have to be the kind of person that OP thinks is not and will not be a PR risk, and who OP thinks has “good judgement” on public comms, and who isn’t the kind of person who might say weird or controversial stuff, and is not at risk of becoming politically opposed to OP. This includes not annoying any potential allies that OP might have, or associating with anything that Dustin doesn’t like, or that might strain Dustin’s relationships with others in any non-trivial way.
Of course OP will never ask you to fit these constraints directly, since that itself could explode reputationally (and also because OP staff themselves seem miscalibrated on this and do not seem in-sync with their leadership). Instead you will just get less and less funding, or just be defunded fully, if you aren’t the kind of person who gets the hint that this is how the game is played now.
And to provide some pushback on things you say, I think now that OP’s bridges with OpenAI are thoroughly burned after the Sam firing drama, OP is pretty OK with people criticizing OpenAI (since what social capital is there left to protect here?). My sense is criticizing Anthropic is slightly risky, especially if you do it in a way that doesn’t signal what OP considers good judgement on maintaining and spending your social capital appropriately (i.e. telling them that they are harmful for the world, or should really stop, is bad, but doing a mixture of praise and criticism without taking any controversial top-level stance is fine), but mostly also isn’t the kind of thing that OP will totally freak out about. I think OP used to be really crazy about this, but now is a bit more reasonable, and it’s not the domain where OP’s relationship to reputation-management is causing the worst failures.
I think all of this is worse in the longtermist space, though I am not confident. At present, it wouldn’t surprise me very much if OP would defund a global health grantee because their CEO endorsed Trump for president, so I do think there is also a lot of distortion and skew there, but my sense is that it’s less, mostly because the field is much more professionalized and less political (though I don’t know how they think, for example, about funding on corporate campaign stuff which feels like it would be more political and invite more of these kinds of skewed considerations).
Also, to balance things, sometimes OP does things that seem genuinely good to me. The lead reduction fund stuff seems good, genuinely neglected, and I don’t see that many of these dynamics at play there (I do also genuinely care about it vastly less than OP’s effect on AI Safety and Rationality things).
Also, Manifold, Manifund, and Manifest have never received OP funding—I think in the beginning we were too illegible for OP, and by the time we were more established and OP had hired a full-time forecasting grantmaker, I would speculate that we were seen as too much of a reputational risk given eg our speaker choices at Manifest.
5 homegrown EA projects, seeking small donors
This looks awesome! $1k struck me as a pretty modest prize pool given the importance of the questions; I’d love to donate $1k towards increasing this prize, if you all would accept it (or possibly more, if you think it would be useful).
I’d suggest structuring this as five more $200 prizes (or ten $100 honorable mentions) rather than doubling the existing prizes to $400 -- but really it’s up to you; I’d trust your allocations here. Let me know if you’d be interested!
The act of raising funding from the “EA general public” is quite rare at the moment—most orgs I’m familiar with get the vast majority of their funding from a handful of institutions (OP, EA Funds, SFF, some donor circles).
I do think fundraising from the public can be a good forcing function, and I wish more EA nonprofits tried to do so. Especially meta/EA-internal orgs like 80k or EA Forum or EAG (or Lightcone), since there, “how much is a user willing to donate” could be a very good metric for how much value users are receiving from the org’s work.
One of the best things that happened to Manifold early on was when our FTX Future Fund regrantor offered to cover up to half of our $2m seed round—contingent on us raising the other half from other sources. We then had to build the muscle of fundraising from regular Silicon Valley angels/VCs, which especially served us well when Future Fund went kaput.
Manifund tries to make public fundraising for EA projects much easier, and there have been a few success cases such as MATS and Act I—though in the end, most of the dollars we move come from our regrantors.
If you are a mechanical engineer digging around for new challenges and you’re not put off by everyone else’s failure to turn a profit, I’d be enthusiastic about your building a lamp and would do my best to help you get in touch with people you could learn from.
If this describes you, I’d also love to help (eg with funding) -- reach out to me at austin@manifund.org!
If far-UV is so great, why isn’t it everywhere?
Thanks for posting this! I appreciate the transparency from the CEA team around organizing this event and posting about the results; putting together this kind of stuff is always effortful for me, so I want to celebrate when others do it.
I do wish this retro had a bit more in the form of concrete reporting about what was discussed, or specific anecdotes from attendees, or takeaways for the broader EA community; eg last year’s MCF reports went into substantial depth on these, which I really enjoyed. But again, these things can be hard to write up, perfect shouldn’t be the enemy of good enough, and I’m grateful for the steps that y’all have already taken towards showing your work in public.
Thanks for the questions! Most of our due diligence happens in the step where the Manifund team decides whether to approve a particular grant; this generally happens after a grant has met its minimum funding bar and the grantee has signed our standard grant agreement (example). At that point, our due diligence usually consists of reviewing the proposal as written for charitable eligibility, as well as a brief online search, looking through eg the grant recipient’s LinkedIn and other web presences to get a sense of who they are. For larger grants on our platform (eg $10k+), the donors or regrantors themselves usually give us additional confidence that the grant is legitimate.
In your specific example, it’s very possible that I personally could have missed cross-verifying your claim of attending Yale (with that likelihood decreasing for larger grants). Part of what’s different about our operations is that we open up the screening process so that anyone on the internet can chime in if they see something amiss; to date we’ve paused two grants (out of ~160) based on concerns raised by others.
I believe we’re classified as a public charity and take on expenditure responsibility for our grants, via the terms of our grant agreement and the status updates we ask for from grantees.
And yes, our general philosophy is that Manifund as a platform is responsible for ensuring that a grant is legitimate under US 501c3 law, while being agnostic about the impact of specific grants—that’s the role of donors and regrantors on our platform.
I’d really appreciate you leaving thoughts on the projects, even if you decided not to fund them. I expect that most project organizers would also appreciate your feedback, to help them understand where their proposals as written are falling short. Copy & paste of your personal notes would be great!
Hey! It is not too late; in fact, people can continue signing up to claim and direct funds anytime before phase 3.
(I’m still working on publishing the form; if it’s not up today, I’ll let y’all know and would expect it to be up soon after)
It’s hard to say much about the source of funding without leaking too much information; I think I can say that they’re a committed EA who has been around the community a while, whom I deeply respect, and who is generally excited to give the community a voice.
FWIW, I think the connection between Manifest and “receiving funding from Manifund or EA Community Choice” is pretty tenuous. Peter Wildeford, who you quoted, has both raised $10k for IAPS on Manifund and donated $5k personally towards an EA community project. This, of course, does not indicate that Peter supports Manifest to any degree whatsoever; rather, it shows that sharing a funding platform is a very low bar for association.
Appreciate the questions! In general, I’m not super concerned about adversarial action this time around, since:
I generally trust people in the community to do the right thing
The money can’t be withdrawn to your own pocket, so the worst case is that some people get to direct more funding than they properly deserve
The total funding at stake is relatively small
We reserve the right to modify this, if we see people trying to exploit things
Specifically:
I plan to mostly rely on self-reports, plus maybe quick sanity checks that a particular person actually exists.
Though, if we’re scaling this up for future rounds, a neat solution I just thought of would be to require people to buy in a little bit, eg they have to donate $10 of their own money to unlock the funds. This would act as a stake towards telling the truth—if we determine that someone is misrepresenting their qualifications then they lose their stake too.
Haha, I love that post (and left some comments from our past experience running QF). We don’t have clever tricks planned to address those shortcomings; I do think collusion and especially usability are problems with QF in general (though Vitalik has a proposal for pairwise-bounded QF that might address collusion?)
We’re going with QF because it’s a Schelling point/rallying flag for getting people interested in weird funding mechanisms. It’s not perfect, but it’s been tested enough in the wild for us to have some literature behind it, while not having much actual exposure within EA. If we run this again, I’d be open to mechanism changes!

We don’t permit people to create a bunch of accounts to claim the bonus multiple times; we’d look to prevent this by tracking signup behavior on Manifund. Also, all donation activity is done in public, so I think there will be other scrutiny of weird funding patterns.
Meanwhile I think sharing this on X and encouraging their followers to participate is pretty reasonable—while we’re targeting EA Community Choice at medium-to-highly engaged EAs, I do also hope that this would draw some new folks into our scene!
Yes, we’re happy to allocate funds to the org that ran that initiative for them to spend unrestricted towards other future initiatives!
Yes, community members can donate in any proportion to the projects in this round. The math of quadratic funding roughly means that your first $1 to a project receives the largest match, then the next $3 buys about as much match again, then the next $5, $7, etc. Or: your match to a project is roughly proportional to the square root of how much you’ve donated.
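If you want to see the mechanism spelled out, here’s a minimal sketch of the textbook quadratic funding formula in Python. To be clear, this is illustrative rather than our exact implementation: the function, the proportional scale-down when the matching pool runs out, and the example numbers are all assumptions made for the sake of the example.

```python
import math

def qf_matches(contributions_by_project, matching_pool):
    """Textbook quadratic funding (illustrative, not Manifund's exact code):
    a project's ideal funding is (sum of sqrt(each donation))^2, and its raw
    match is that minus what donors gave directly. If raw matches exceed the
    pool, scale them all down proportionally (one common convention)."""
    raw_match = {}
    for project, donations in contributions_by_project.items():
        ideal = sum(math.sqrt(d) for d in donations) ** 2
        raw_match[project] = ideal - sum(donations)
    total = sum(raw_match.values())
    scale = min(1.0, matching_pool / total) if total > 0 else 0.0
    return {p: m * scale for p, m in raw_match.items()}

# 100 donors giving $1 each draw far more match than one donor giving $100:
print(qf_matches({"A": [1] * 100, "B": [100]}, matching_pool=10_000))
# -> {'A': 9900.0, 'B': 0.0}
```

The squaring is what makes many small donations go further than one large one, and why each additional dollar you give to the same project buys a bit less match than the last.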
You can get some intuition by playing with the linked simulator; we’ll also show calculations about current match rates directly on our website. But you also don’t have to worry very much about the quadratic funding equation if you don’t want to, and you can just send money to whatever projects you like!
Glad you like it! As you might guess, the community response to this first round will inform what we do with this in the future. If a lot of people and projects participate, then we’ll be a lot more excited to run further iterations and raise more funding for this kind of event; I think success with this round would encourage larger institutional donors to want to participate.
It currently seems unlikely that we could raise a sizable matching pool (or initial funding pool) from small donations; I think something like $100k is the minimum to make this kind of thing worth running. If people want to send small donations, I’d encourage those to go directly to the projects we host!
re: ongoing process, the quadratic funding mechanism typically plays out across different rounds—though I have speculated about an ongoing version before.

We don’t have specific, formal plans to use the microregrantor decisions in this round for other purposes, but of course, if we notice people leaving thoughtful comments and giving excellent donations in this round, we’ll take notice and consider them for future regranting and other opportunities!
Also, all the granting decisions here will be done in public, so I highly encourage other EA orgs to use the data generated for their own purposes (eg evaluating potential new grantmakers).
That’s good to know—I assume Oli was being somewhat hyperbolic here. Do you (or anyone else) have examples of right-of-center policy work that OpenPhil has funded?