Manifund is launching a new regranting program! We will allocate ~$2 million over the next six months based on the recommendations of our regrantors. Grantees can apply for funding through our site; we’re also looking for additional regrantors and donors to join.
What is regranting?
Regranting is a funding model where a donor delegates grantmaking budgets to different individuals known as “regrantors”. Regrantors are then empowered to make grant decisions based on the objectives of the original donor.
This model was pioneered by the FTX Future Fund; in their 2022 retrospective, they described regranting as very promising for finding new projects and people to fund. More recently, Will MacAskill cited regranting as one way to diversify EA funding.
What is Manifund?
Manifund is the charitable arm of Manifold Markets. Some of our past work:
Impact certificates, with Astral Codex Ten and the OpenPhil AI Worldviews Contest
Forecasting tournaments, with Charity Entrepreneurship and Clearer Thinking
Donating prediction market winnings to charity, funded by the Future Fund
How does regranting on Manifund work?
Our website makes the process simple, transparent, and fast:
A donor contributes money to Manifold for Charity, our registered 501c3 nonprofit
The donor then allocates the money between regrantors of their choice. They can increase budgets for regrantors doing a good job, or pick out new regrantors who share the donor’s values.
Regrantors choose which opportunities (eg existing charities, new projects, or individuals) to spend their budgets on, writing up an explanation for each grant made.
We expect most regrants to start with a conversation between the recipient and the regrantor, and after that, for the process to take less than two weeks.
Alternatively, people looking for funding can post their project on the Manifund site. Donors and regrantors can then decide whether to fund it, similar to Kickstarter.
The Manifund team screens the grant to make sure it is legitimate, legal, and aligned with our mission. If so, we approve the grant, which sends money to the recipient’s Manifund account.
The recipient withdraws money from their Manifund account to be used for their project.
Differences from the Future Fund’s regranting program
Anyone can donate to regrantors. Part of what inspired us to start this program is how hard it is to figure out where to give as a longtermist donor—there’s no GiveWell, no ACE, just a mass of opaque, hard-to-evaluate research orgs. Manifund’s regranting infrastructure lets individual donors outsource their giving decisions to people they trust, who may be more specialized and more qualified at grantmaking.
All grant information is public. This includes the identity of the regrantor and grant recipient, the project description, the grant size, and the regrantor’s writeup. We strongly believe in transparency as it allows for meaningful public feedback, accountability of decisions, and establishment of regrantor track records.
Almost everything is done through our website. This lets us move faster, act transparently, set good defaults, and encourage discourse about the projects in comment sections.
We recognize that not all grants are suited for publishing; for now, we recommend sensitive grants apply to other donors (such as LTFF, SFF, OpenPhil).
We’re starting with less money. The Future Fund ended up distributing ~$100m over the 6 months of their program; we currently have ~$2m to distribute and are fundraising for more.
Round 1: Longtermist Regrants
We’re launching with a cohort of 14 regrantors, each given a budget of $50k-$400k to direct to projects they believe will be the most impactful. We chose regrantors who are aligned with our values and prioritize mitigating global catastrophic risks, though ultimately regrantors can choose to give to projects under any cause area.
This round is backed by an anonymous donor’s contribution of $1.5 million, plus smaller grants from EA funders. Round 1 will end after this initial pool is spent, or after 6 months have passed.
Get involved with Manifund Regrants
For grantees: list your project on our site
If you are working on a longtermist project and looking for funding, you can post the details on our site here. Examples of projects we’ve funded:
$13k to Rachel Freedman, for medical expenses/PhD salary supplement
$25k to Joseph Bloom, for independent mech interp research
$2.5k to Vipul Naik, for the Donations List Website (retroactive grant)
We’re interested in proposals for AI safety, AI governance, forecasting, biorisk, and EA meta; we expect to best fund individuals and orgs looking for $1k-$200k.
For regrantors: apply for your own regrant budget
We’re accepting applications from people who want to join as regrantors! In some cases, we’ll offer to sponsor regrantors and provide budgets; in others, we’ll just offer to list regrantors so they can receive donations from other users, which they can then regrant.
For large donors: designate your own regrantors
We’re interested in anyone who would like to direct $100k+ this year through a regranting program. If that is you, reach out to austin@manifund.org or book a call!
Why might you choose to donate via a regranting program?
You care about longtermism, but don’t know which projects need money. Longtermist projects can be speculative, opaque, and nascent, making it harder for you to know where to direct your money. Regranting allows you to outsource these decisions to people who better understand the field.
You have specific regrantors whose judgement you trust. Regranting surfaces opportunities that established EA grantmakers might otherwise miss, as regrantors can tap into their personal networks and fields of expertise. Regrantors can also initiate projects, by reaching out to grantees, launching prizes, and starting orgs.
You want to see your money move quickly. Our regranting model requires less overhead than traditional grantmaking, as one person is responsible for the budget rather than a committee. This also allows for faster grant turnaround times, solving a key pain point for grantees. We think the world would be a better place if impactful projects could start a few weeks to months earlier.
You want to donate through a 501c3. Manifund regrantors can give to other nonprofits, individuals, and for-profit orgs. If you operate a donor-advised fund or want the tax advantages of giving through a 501c3, we can facilitate that, so long as we vet that your regrantors make grants compatible with our charitable mission.
For everyone: talk to us!
We welcome feedback of all kinds. Whether you’re a potential grantee, regrantor, or donor, we’d love to hear about your pain points with existing funding systems, and what kinds of projects you find exciting. Hop in our Discord and come chat with us, or comment on specific projects through our site!
Just want to flag that I’m really happy to see this. I think that the funding space could really use more labor/diversity now.
Some quick/obvious thoughts:
- Website is pretty great, nice work there. I’m jealous of the speed/performance, kudos.
- I imagine some of this information should eventually be private to donors. Like, the medical expenses one.
- I’d want to eventually see Slack/Discord channels for each regrantor and their donors, or some similar setup. I think that communication between some regranters and their donors could be really good.
- I imagine some regranters would eventually work in teams. From both being on the LTFF and seeing the FTX regrantor program, I did kind of like the LTFF policy of vote averaging. Personally, I think I do grantmaking best when working on a team. I think that the “regrantor” could be a “team leader”, in the sense that they could oversee people under them.
- As money amounts increase, I’d like to see regranters getting paid. It’s tough work. I think we could really use more part-time / full-time work here.
- I think if I were in charge of something like this, I’d have a back-office of coordinated investigations for everyone. Like, one full-time person who just gathers information about teams/people, and relays that to regranters.
- As I wrote about here, I’m generally a lot more enthusiastic about supporting sizeable organizations than tiny ones. I’d hope that this could be a good vehicle for funding projects within sizeable organizations.
- I want to see more attention on reforming/improving the core aspects/community/bureaucracy of EA. These grantmakers seem very AI safety focused.
- Ideally it could be possible to have ratings/reviews of how the regranters are to work with. Some grantmakers can be far more successful than others at delivering value to grantees and not being a pain to work with.
- I probably said this before, but I’m not very excited by Impact Certificates. More “traditional” grantmaking seems much better.
- One obvious failure mode is that regranters might not actually spend much of their money. It might be difficult to get good groups to apply. This is not easy work.
Good luck!
Thanks, Ozzie! I expect we’ll be chatting more about working together. Some point-by-point replies:
Yeah, idk, this is one of the areas where I think I’m idiosyncratic; I think in an ideal world almost no information would be kept private (maybe excepting infohazards like bomb construction techniques). I think this attitude has been incredibly good for Manifold’s growth and development, and am patterning my approach to Manifund on it.
In any case, right now we’re one small player amidst a variety of EA funders, so I’m not very worried about our radical transparency stance. I do also think there’s a good chance we change course (maybe 30% in the next 12 months?) based on grantee/regrantor/donor feedback.
Interesting idea! My best guess is that our specific donor for Round 1 doesn’t have the time/interest to do this, but if other donors are looking for a more hands-on role, we’d be open to that as well!
This sounds promising. Right now we’re in a pretty proto-mvp stage, trying to catch up to what other funders have figured out. I do think collaboration across grantmakers is generally good; though caveat that teamwork does impose coordination costs, and for small (<$20k?) grants I’d lean towards moving faster.
We went back and forth on this. Right now all our regrantors are volunteers, while FF paid theirs. I think getting the specific compensation scheme right is tricky (pay by grant? by hour? by % commission?) and I suspect FF’s may have had bad incentives + optics, so we opted to launch first and revisit this later.
(to be continued)
Even assuming radical transparency within the relevant communities is the best policy, there are ways to mitigate the Googleability of the information for someone searching on the grantee’s name.
As a practical matter, not taking those steps makes the publicness of the sensitive information significantly depend on the distinctiveness of the person’s name. John Smith has significantly less of a downside to applying than someone with a near-unique moniker, and that will result in suboptimal distributions.
Thanks for the replies! Quickly,
> but if other donors are looking for a more hands-on role, we’d be open to that as well!
My guess is that some donors don’t exactly want to be hands-on for specific grants, but do want to get updates and ask specific questions to the grantmaker. This could be a bit of a pain to the grantmaker, but would be a better experience for the funder. In some cases, this seems worth it (mainly if we want to get more funders).
> trying to catch up to what other funders have figured out
For what it’s worth, at the LTFF (and I think other EA Funds), there’s a voting system where people vote on scores, and proposals that achieve a certain average or higher get funded. I agree this is overkill for many small grants.
> and I suspect FF’s may have had bad incentives + optics, so we opted to launch first and revisit this later.
I like that strategy. Later on though, I’d imagine that maybe regranters could request different perks. If there were a marketplace—funders choose regranters—then some regranter could request ~5%, and funders would take that into consideration. I suspect some granters don’t need the money, but others might only be able to do it if there were pay.
Ah, I see, makes sense. Perhaps a strategy of “we send out weekly updates to the donor about where their money has been going” is a better fit than “live chat”. Will think about this!
Def makes sense for grants above some dollar threshold (eg $100k?). I would love to be a fly on the wall (or even participate in LTFF grantmaking?) to learn what best practices have been, and see if they make sense for us.
Haha, love the idea of regrantors picking their own compensation models in a competitive marketplace!
Part 2:
Open to this as well, if there are specific individuals you think would do this well as a fulltime job! (Though, in this model I’m not quite sure what the role of the regrantor is—just to scout out opportunities?)
Yes, we’d love for individual projects to apply through Manifund regranting, because we think we could pay for those in a pretty lightweight manner. I like RP’s model of having individual projects apply for funding themselves.
On the larger point of large vs small orgs, I’m personally someone who thrived moving from Google ⇒ Streamlit (series B startup) ⇒ founding Manifold. I agree EA is pretty strange, and think we could benefit from overhauling the ecosystem to be more like the tech scene, which seems to be much better than EA at executing on its objectives. This could mean applying the management practices of high-growth startups like Stripe (see High Growth Handbook) to the ecosystem as a whole, or instituting a more efficient venture ecosystem like YC for seed-stage orgs.
I agree with you on the importance of organizational reform, though it’s extra unclear if that kind of thing is addressable by a regranting program (vs someone enacting change from within OpenPhil). Perhaps we’ll ourselves address this if/when Manifund itself represents a significant chunk of EA funding, but that seems like a version 2 problem while we’re currently at v0.
Yeah! One of my pet ideas is “Yelp for EA orgs”; useful for individual regrantors, but also for EA grantmakers as a whole? Could integrate into our platform if we end up becoming a directory for all EA projects and funders… (also a v1/v2 problem, though regrantor reviews might be v0)
Agree to disagree for now! I have written up what I think are outstanding issues with impact certs, but I’m still bullish on the idea (esp with a large eg AI Safety yearly prize set up). Would love to do a podcast/debate on this sometime!
Yes, this is one of my top concerns with our regranting program so far; we issued budgets to regrantors a few weeks ago, and so far grantmaking pace is slower than expected. I do think this public launch will help though, as it solicits more applications for them to look over.
Thank you again for the extensive feedback!
I get the impression there’s a lot of human work that could be done to improve the process.
- Communicate with potential+historic recipients. Get information from them (to relay to grantmakers), and also inform them about the grantmaker’s preferences.
- Follow up with grantees to see how progress went / do simple evaluations.
- Do due diligence into several of the options.
- Maintain a list of expert advisors that could be leaned on in different situations.
- Do ongoing investigations and coordination with funded efforts. I think a lot of value comes from funding relationships that last 3-10+ years.
> I agree EA is pretty strange, and think we could benefit from overhauling the ecosystem to be more like the tech scene, which seems to be much better than EA at executing on its objectives.
This is a long discussion. To me, a lot of the reason why startups work is that the tail outcomes are really important—important enough to justify lots of effective payment for talent early on. But this only makes sense under the expectation of lots of growth, which is only doable with what eventually need to be large organizations.
> I agree with you on the importance of organizational reform, though it’s extra unclear if that kind of thing is addressable by a regranting program (vs someone enacting change from within OpenPhil). Perhaps we’ll ourselves address this if/when Manifund itself represents a significant chunk of EA funding, but that seems like a version 2 problem while we’re currently at v0.
My guess is that this depends a lot on finding the right grantmaker, and it’s very possible that’s just not realistic soon. I agree this can wait, there’s a lot of stuff to do, just giving ideas to be thinking about for later.
> Would love to do a podcast/debate on this sometime!
That sounds fun! Maybe we could just try to record a conversation next time we might see each other in person.
It looks a little suspicious and I think the optics are bad that the girlfriend of Manifold’s founder is getting $50,000 to give away. It might just be a situation in which power grants special privilege, and being close to power also grants some special privilege. I wouldn’t object to this too much if she had an impressive track record or if she had some special merit that made me think she had earned the privilege, but as far as I am aware she doesn’t have a history of being a well-calibrated or successful grant giver. In fact, she is young and inexperienced, hasn’t graduated from college, and as far as a few minutes of Google searches reveals, has never had a “real” job (although she has had internships and has run a student organization at her college). In fact, her only qualification seems to be that she is dating Manifold’s founder. Are you just choosing regrantors from people you know and like? Giving sinecures and special privilege to friends seems… kind of corrupt.
I think this comment might be interpreted as harsh and mean, and that isn’t my intention. I have actually met her in real life and she seems like a nice person. I’m really worried that posting this comment will make the founder and her upset, and I’m purposely excluding the names to make it ever-so-slightly less transparent, and ever-so-slightly harder for other people to Google around and figure out details. The main problem I perceive is simply that she is young and hasn’t had much experience yet; the ability and status she is given here seems out of proportion to her level of skill/experience. I don’t have any inside information to suspect that there is anything malicious going on here. It is also possible that she is a brilliant thinker with excellent judgement, and that I just haven’t seen any evidence of it. But the situation just seems… is “corrupt” too strong of a word?
Haha, thanks for bringing this up. One correction, Rachel and I are married (as of last month).
A quick background on this is that around February, Scott Alexander of Astral Codex Ten asked Manifold to set up an impact market to be able to run the ACX Forecasting Minigrants round (which is the site you see now at https://manifund.org). At the time, our existing team on Manifold were already occupied, and I had seen Rachel’s work on various programming projects such as openbook.fyi. After careful consideration, and checking in with both Scott and Manifold for Charity’s board of advisors, I decided to bring her on for a 6-week consulting engagement, which we’ve since renewed and turned into a fulltime offer.
Obviously, we recognize the potential conflicts of interest and didn’t make this decision lightly. My best judgement is that Rachel has done fantastically in this position so far, comparable to eg what I would expect of a new grad at Google. (If you’re technical, I invite you to judge her commits on our open source repository.)
The $50k regrantor budgets that both she and I have are primarily to allow us to dogfood our own site. For the two of us to build a useful product for regrantors to use, it’s important that we have on-the-ground experience of making regrants ourselves. You’re also welcome to evaluate specifically the two grants she’s recommended so far (to Rachel Freedman and the Donations List Website)!
Congrats on your marriage!
Thanks for explaining this context. I still think it is a little weird, but considering the framing that you are dogfooding your system makes it a bit more palatable. I also appreciate that your board of advisors approved and it seems like you have generally been cognizant of the conflict of interest.
I don’t think someone being young should be weighted highly in the assessment of their capacity to give good grants. I also think it’s important to remember that the majority of philanthropists come to have the power to give out grants due to success in the for-profit world and/or through good fortune, neither of which are necessarily correlated with being well positioned to give good grants. As a result, I don’t think the bar Rachel needs to meet is so high that we should assume her selection as a regranter wasn’t based on merit.
That being said, the optics aren’t great so I understand where the original commenter is coming from.
Could you not dogfood just as easily with $50 (or fake money in a dev account)?
People are not going to get the experience of making consequential decisions with $50, particularly if they’re funding individuals and small projects (as opposed to established charities with fairly smooth marginal utility curves like AMF).
That said, I’m sympathetic to the same argument for $5k or 10k.
Yeah, I think Rachel also herself feels a bit imposter-syndrome-y about her budget allocation and might end up delegating part of her remainder to another regrantor.
I just disagree with everyone here (Anon/Tyler/Linch/Rachel). $10k pays for like 1-2 months of salary post-tax, which is like… a single regrant. I’d claim “feedback loops from intense dogfooding is why the Manifold Markets user experience is notably better than similar EA efforts” coupled with “the user experience of EA grantmaking has been awful to date, and we think we can do better” (excepting the parts that involved Linch funding us, we love you Linch). Not just software UX but the end-to-end feeling of what being a grantee is like, speed of response, quantity of feedback, etc.
I’m also pretty inclined to dismiss “optics are bad” arguments. I again invite anyone to judge, on the object level, 1) how do Rachel’s grants look? 2) how does the Manifund site UX feel? 3) how does her code look?. And as always, if you think you can make better regrants than us, audition for the role!
Fair. I don’t have a good sense of what grant size your applicants ask for, particularly on the lower end. In my own experience as a grantmaker, my own grants have had maybe 2.5 orders of magnitude of variation.
I definitely agree with the second part. I feel like many grantmakers in EA seem to treat grantmaking as roughly their third or fourth most important priority, which I think does not compromise judgement quality too much but does not bode well for other important desiderata like “are grantees happy with the process” and “are donors happy with the level of communication.”
Too soon to tell with the first part; I feel like there are many projects both in and out of EA that seemed to do really well in the initial rush of hype and manic energy, and then kind of splutter out afterwards. Hopefully you guys will improve upon future iterations however!
Oh don’t worry, I suck too. :)
It would seem almost if not equally effective to dogfood the UX as well with a $10K allotment, making each grant 20 percent of the amount one would have allocated at 50K. (Incidentally, I like this idea as a sort of work trial for candidate grantmakers more generally.)
If a 20/80 real vs play money split wouldn’t elicit realistic user behavior, what are the implications of that for your other project (Manifold)?
I don’t have an opinion on anyone’s suitability as a grantmaker; I’m just not convinced dogfooding is a rationale for handing out 50K (or that code/UX quality are relevant to assessing handing out that sum).
IMHO seems possible to be rigorous with imaginary money, as some are with prediction markets or fantasy football. Particularly so if the exercise feels critical to the success of the platform.
I think the site looks great btw, just pushing back on this :)
I agree in the context of what I call deciding between different “established charities with fairly smooth marginal utility curves,” which I think is more analogous to prediction markets or fantasy football or (for that matter) picking fake stocks.
But as someone who in the past has applied for funding for projects (though not on Manifund), if someone said, “hey we have 50k (or 500k) to allocate and we want to ask the following questions about your project,” I’d be pretty willing to either reply to their emails or go on a call.
If on the other hand they said “we have $50 (or $50k virtual dollars) to allocate between 5 projects and I want you to ask the following questions about your project as a way to test our product” maybe I’d still be willing to talk to them. However, in this scenario, a) it’s pretty unambiguous that I’m doing them a favor[1] and b) while I’ll try to keep my presentation the same, in practice this is likely to bias my decisions somewhat.[2]
Now I know some startup advisors recommend doing mockup testing without telling users that your product is incomplete, but a) I think this is kind of scummy in the context of applying for funding, and b) would-be EA grantmakers in the past have justifiably gotten flak for this exact behavior.
Or eg, making a calculated impact-maximizing decision for the greater good, if consequentialist ethics are a better model than contractual ethics.
In some cases for the better! Eg one advantage I’ve found as a “researcher” role at Rethink Priorities is that people seem more likely to give me honest assessments and criticisms than when I put on my “funder” hat. But regardless of whether being a mock grantmaker is better epistemically than being a real grantmaker, you are still not going through a real use case if you’re planning to build out your product for real grantmakers.
FWIW: I want to offer a strong dissenting voice that I do not like how this has been handled in this comment section. Saying something isn’t intended to be harsh and mean doesn’t make it not harsh and mean. You can point out things that concern you without singling out individual people and I think the average person would have found this incredibly hurtful and off-putting.
Apologies, I usually try to respond to claims on the object level or occasionally try to enforce epistemic norms, and don’t usually bother enforcing politeness/niceness norms (in part because I think this is not my comparative advantage). I do frequently try to reach out privately if I notice people say hurtful things or people might be hurt due to the relevant situation. I agree that Anonymous EA Forum user’s comment may come across as unnecessarily aggressive to many readers[1] and perhaps it was wrong for me to reply without noting that. I thought Elizabeth’s comment was quite good in that context.
If I were in Rachel’s situation, these comments might easily have led me to be quite insecure.
Oh uh I assume KMF intended to address Anonymous EA Forum user, but clicked “reply” in the wrong place; “harsh and mean” are quotes from Anon’s post. (I have a hard time seeing how her comment applies to anything you said).
Oh I don’t interpret her as saying that my comments are mean by themselves, but that maybe the whole discussion was mean or at least insensitive. Eg my comments helped “platform” AEAFu.
Sorry, Linch- Austin is totally right. I am just useless at the Forum :)
First want to say that I was also pretty uncomfortable about this, and initially told Austin I didn’t want to do it—you’re right that I am not qualified to be a grant maker, at least in the sense that I would not be hired as one in another context. I don’t deserve whatever status that role happens to bestow upon me, and I don’t particularly want the power.
That said, me being a regrantor has made the UX much better, since I’m basically the sole person pushing code and dogfooding is so powerful: it’s changed the prompting questions on write-ups, the way projects in need of more funding are displayed, the way funding targets are specified, and lots of other tiny things that were a bit uglier or higher friction or simply broken before. Less concretely, my model of what it feels like to write a publicly visible grant writeup, to search for giving opportunities, to select grant amounts, etc. has become more vivid, which I expect to be strategically useful going forward. And it’s only been a few weeks so far.
So, I agree it’s bad optically (I agree-voted your comment), but ultimately think this was a good call. Especially because the counterfactual of having not given me the $50k is not that it would be going to some better-evaluated grants than the ones I’ve made, but that it would be sitting in a bank account, and (obviously) I think the grants I’ve made are better than that.
I think this is an important point and it’s good you brought it up. But I think you were unnecessarily harsh, and you could have done the same good with less harshness.
A version I would have preferred:
This leaves open the possibility that you’re missing something, and that it could be remedied. You can always become harsh later if you’re not given a satisfactory answer (it looks like you feel you were).
FWIW I don’t feel fully satisfied by Austin’s answer but I also think the person with standing to object is the anonymous donor. One reason I’m excited about this infrastructure is that it allows people to accumulate track records and for funders to choose who to delegate to.
Thanks, Elizabeth, I appreciate the reworded message!
In particular, one of the goals of our regranting program is to let people without grantmaking experience build that experience/track record in a small-stakes context. We don’t require that regrantors have experience, and may actually favor regrantors who do not already have significant grantmaking expertise (on the theory that senior grantmakers are likely to be able to locate money for those opportunities)
Depends who chose the regrantors? If the two humans most behind Manifund chose them, I agree it’s odd/bad that they chose themselves (independent of their relationship). If donors chose them that seems surprising but fine.
Edit: oh they say “We chose regrantors.” Hmm. I think choosing themselves is a mistake.
The $400k regrantors were nominated by our anonymous donor; the $50k ones were chosen by the Manifund team. We chose ourselves mainly to be able to dogfood the regranting process. (I would also note that the default at every EA grantmaking org is for the team themselves to make the grants)
Cool, thanks. It wasn’t obvious to me that you saw yourselves as grantmakers in addition to people who build infrastructure/tools for grantmakers. If your donors also saw you as grantmakers then there’s no issue I think; if they would be surprised that you chose yourselves that seems bad.
Tbh I see myself as a person who builds infrastructure, but, as I said in my other comment, being a regrantor helps me do that much better. And yes, we told our primary donor we were doing this in advance.
Looks like a great program. Thanks for posting!
In response to an emailed question about “are the regranting pots backed by FTX money?”:
Manifold for Charity (501c3) has received 3 main donations so far:
The aforementioned $1.5m from an anonymous individual donor, for regrants
~$400k from SFF (1/3rd of their last grant to us), unrestricted
$500k from the FTX Future Fund, for Charity Prediction Markets.
We intend to finance regrants out of the first 2 pots; the status of the last pot is in a bit of limbo; we’re still running the charity prediction market program in the meantime, but have only spent ~$120k of it so far, and haven’t committed to never using it for other purposes. (As you might imagine, the ethical questions here are somewhat thorny, and we’re mostly hoping to fundraise from other sources to avoid them; but also don’t want to unnecessarily tie our hands)
We’ve separately received a $1m regrant from Future Fund, structured as an investment in Manifold Markets (a C Corporation), for which Alameda received equity.
Um I’m sorry but is that QUALY THE LIGHTBULB as one of the regrantors? I saw a screenshot at first and thought it was a meme 😅
Indeed, Qualy has graciously agreed to serve as a regrantor for Manifund. Excited to see what they recommend!
Doesn’t having an anonymous regrantor conflict with your principle of transparency?
So exciting, thank you!! And what a team!
Quick question: Do you know if you can provide funding for studies e.g. PhDs?[1]
The website sounds promising: It says you’ve already provided funding for a “PhD salary supplement” and also “We support regrants to registered charities and individuals. For-profit organizations may also be eligible, pending due diligence. As a US-registered 501c3, we do not permit donations to political campaigns.” But I think that funding tuition fees can sometimes be a bit trickier...
Yes, we should be able to do this! Let us know if you have a specific PhD or study in mind :)
Asking for a friend—will email now :)
Very glad to see that happening, regranting solves a bunch of unsolved problems with centralized grantmaking.
Thanks, we hope so too! (To be clear, we also have a lot of respect for centralized grantmaking orgs and the work they do; and have received funding through some in EA such as LTFF and SFF.)
I think this project may be leaving significant amounts of impact on the table if the diversity of the regranting team is not increased. The key advantage one might suggest of regranting over traditional grantmaking is that it is better at the ‘explore’ side of the ‘explore-exploit’ spectrum. However, this is significantly reduced without more diversity amongst regranters, as the people they know and the networks they are in may be similar.
I don’t know and haven’t researched the history of each regranter (they all seem impressive and capable from a cursory glance!) However, my impression is there is no one from Africa or Latin America (two burgeoning areas of xrisk activity), nor from much of Asia (home to the majority of the world’s population, as well as at least two great powers in China and India). Failing to have such geographic diversity may make it harder to seek out the highest impact opportunities, and seems to miss some of the key advantages of regranting.
There are other axes as well along which improved diversity amongst regranters may be valuable. The inclusion of more women may be another important one, as would increased diversity of backgrounds, including more people from the humanities/social sciences. It also may be interesting to have either a ‘devil’s advocate’ regranter who is skeptical of longtermism to fund critical work on longtermism and xrisk, or a regranter who approaches xrisk from a less ‘EA’ paradigm, to increase the diversity of approaches to xrisk mitigation that may be funded.
How to increase these and other kinds of diversity is a difficult question. The point isn’t just a box-ticking exercise, but rather to have a portfolio of regranters diverse enough that we can be confident that as wide a range of relevant projects as possible can be touched by the programme.
I think one of the difficulties here is that Austin and his colleagues are doing this through their personal and professional networks, just like anyone starting a project. Unless the Manifold team were to make some sort of blinded application process (which has its own costs and burdens), I don’t really see a way around the obstacle of recruiting mainly from people you know. An organization can change this over time. But despite how much I dislike how much harder it is to get funded if you’re not well-connected in the Bay Area community, not relying on connections at the start strikes me as very challenging.
I’ll also share my opinion that Austin has the right response here: he is open to suggestions and willing to consider them.
Hi Gideon, I’m a regranter and I live in Latin America. This has basically no effect on what I choose to fund.
I also care a lot more about animals than most (maybe all) the regranters.
The thing you want to select for in a group of regranters is diversity of knowledge of opportunities and some intellectual diversity of approaches to longtermism. It seems like Manifund did a decent job here? I think I was selected for my forecasting on Manifold, but there are others who work in pandemic preparedness, and a lot of people work in different ways on AI safety (which is sort of what the donor wanted).
We’re happy to consider more diverse regrantors—if you have specific candidates in mind, please send them this launch post, or make an intro to us (austin@manifund.org)!
Thanks for the feedback! Geographic diversity in particular does seem pretty important for getting the most out of regranting—much dissatisfaction about the current funding situation comes from how much harder it is to get funded if you’re not well-connected in the Bay Area community.
I’m disappointed that we currently don’t even have much UK representation, since that’s the other EA hub. This is largely because we are based in the SF Bay Area, so we are better connected here. As Austin said, happy to hear suggestions for people connected in other areas who could better surface new opportunities!
Without claiming that the other dimensions of diversity aren’t important, I see geographic and cause area diversity as the most important dimensions.
Regranters are much more likely to grant to people they either know personally or through their networks. They are also much more likely to grant to projects they understand.
Yes, one issue with our initial cohort is that it’s extremely US/Bay Area-centric. We’re especially excited for people outside this hub to apply for a regrant budget!
If I had to, I would guess that there is maybe one Republican-leaning regrantor in there. Do you think we should actively recruit more Trump-supporting regrantors, as this would also increase diversity? It also looks like almost all of the regrantors have university degrees. Do you think there should be more regrantors with no formal education? Approximately 14% of the world’s adult population is illiterate, after all.
Which of the humanities should people be recruited from?
Haha, I think you meant this sarcastically but I would actually love to find Republican, or non-college-educated, or otherwise non-”traditional EA” regrantors. (If this describes you or someone you know, encourage them to apply!)
So when do we get a futarchy funder?
Haha, Marcus has already been using prediction markets to predict grant outcomes! No futarchy yet though...