Hey there~ I’m Austin, currently building https://manifold.markets. Always happy to meet people; reach out at akrolsmir@gmail.com, or find a time on https://calendly.com/austinchen/manifold !
So, as a self-professed mechanism geek, I feel like the Shapley Value stuff should be my cup of tea, but I must confess I’ve never wrapped my head around it. I’ve read Nuno’s post and played with the calculator, but still have little intuitive sense of how these things work even with toy examples, and definitely no idea on how they can be applied in real-world settings.
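For what it's worth, the toy-example intuition can be made concrete in a few lines. Here's a sketch (not Nuño's calculator — just a hypothetical two-player game I made up) that computes exact Shapley values by averaging each player's marginal contribution over all orderings:

```python
from itertools import permutations

def shapley_values(players, v):
    """Exact Shapley values: average each player's marginal
    contribution over every possible ordering of the players."""
    totals = {p: 0.0 for p in players}
    perms = list(permutations(players))
    for order in perms:
        coalition = frozenset()
        for p in order:
            with_p = coalition | {p}
            # Marginal contribution of p, joining in this order
            totals[p] += v(with_p) - v(coalition)
            coalition = with_p
    return {p: totals[p] / len(perms) for p in players}

# Toy game: A alone earns 100, B alone earns 0,
# but together they earn 300 (B is a force multiplier).
def v(coalition):
    if coalition == {"A", "B"}:
        return 300
    if coalition == {"A"}:
        return 100
    return 0

print(shapley_values(["A", "B"], v))  # {'A': 200.0, 'B': 100.0}
```

The appeal of the mechanism is visible even here: B produces nothing alone, yet gets credited 100 for enabling the extra 200 of joint surplus, and the two values sum exactly to the grand coalition's 300.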
I think delineating impact assignment for shared projects is important, though I generally look to the business world for inspiration on the most battle-tested versions of impact assignment (equity, commissions, advertising fees, etc). Startup/tech company equity & compensation, for example, at least provides a clear answer to “how much does the employer value your work”. The answer is suboptimal in many ways (eg my guess is startups by default assign too much equity to the founders), but at least it provides a simple starting point; better to make up numbers and all that.
Thanks for updating your post and for the endorsement! (FWIW, I think the LTFF remains an excellent giving opportunity, especially if you’re in less of a position to evaluate specific regrantors or projects.)
Manifund is pretty small in comparison to these other grantmakers (we’ve moved ~$3m to date), but we do try to encourage transparency for all of our grant decisions; see for example here and here.
A lot of our transparency just comes from the fact that we have our applicants post their application in public; the applications have like 70% of the context that the grantmaker has. This is a pretty cheap win; I think many other grantmakers could do the same if they just got permission from the grantees. (Obviously, not all applications are suited for public posting, but I would guess ~80%+ of EA apps would be.)
This is awesome! I’ve been a fan of Timothy’s since his Full Stack Economics days, and it’s great to see more collaborations between the forecasting world and journalism. AI journalism is an especially pivotal area, and so I’m glad for the additional rigor in the form of Metaculus question operationalizations.
Hey Ben! I’m guessing you’re asking because the Collinses don’t seem particularly on-topic for the conference? For Manifest, we’ll typically invite a range of speakers & guests, some of whom don’t have strong pre-existing connections to forecasting; perhaps they have interesting things to share from outside the realm of forecasting, or are otherwise thinkers we respect who are curious to learn more about prediction markets.
(Though in this specific case, Simone and Malcolm have published a great book covering different forms of governance, which is topical to our interest in futarchy; and I believe their education nonprofit makes use of internal prediction markets for predicting student outcomes!)
In principle we’d be happy to forward donations to RP, CLTR, or other charities (really any 501c3; it doesn’t have to be EA); in practice, the operational costs of tracking these things mean that we don’t really want to be doing this except for larger donation sizes.
Though since EA Philippines has set its minimum project threshold at a fairly low $500, I’d put ~95% on them succeeding, so this likely wouldn’t come up.
Thanks for the feedback!
(2) hm, we could pay $10/mo for the professional tier to change the supabase URL address, the Scrooge in me didn’t think it was worth it but perhaps...
(3) interesting—I don’t think we’ve considered an option to let people pledge without funds added yet; will see if that makes sense.
Hey Dawn! At Manifund we support crypto-based donations for adding to your donation balance; USDC over Eth or Solana is preferred but we could potentially process other crypto depending on the size you have in mind. We generally prefer to do this for larger donation sizes (eg $5k+) because of the operational overhead, but I’d be willing to make an exception in this case to help support the EA Philippines folks. More details here.
Hi there, Austin from Manifund here! I can’t speak for the EA Philippines team, but here are some reasons we think our platform is a good way to raise donations:
For users like you, registering should be pretty fast, <2min (you can sign up with any email or Google account). And you can easily add money via credit card; we also support bank transfers, DAF, and crypto for larger donation sizes.
As we’re set up as a 501c3, US-based donors can get a tax deduction for donating to projects that we host.
On Manifund, we have a network of donors who already have their budgets loaded on our site, and sometimes regrantors as well—last year, our regrantors had $2m to recommend to projects.
One comparable project to EA Philippines might be AI Safety Camp, who launched with an EA Forum post like this one, and went on to raise $56k on Manifund!
We’ve established a track record of reliable payouts & operational support; we’ve paid out ~$3m to 128 projects over the last year or so.
I’d love to hear if you have any troubles using the Manifund platform; or if you have experiences with other platforms that you think serve this function better, as we’re always trying to improve.
Very grateful for the kind words, Elizabeth! Manifund is facing a funding shortfall at the moment, and will be looking for donors soon (once we get the ACX Grants Impact Market out the door), so I really appreciate the endorsement here.
(Fun fact: Manifund has never actually raised donations for our own core operations/salary; we’ve been paid ~$75k in commission to run the regrantor program, and otherwise have been just moving money on behalf of others.)
Yeah, I agree neglectedness is less important but it does capture something important; I think eg climate change is both important and tractable but not neglected. In my head, “importance” is about “how much would a perfectly rational world direct at this?” while “neglected” is “how far are we from that world?”.
Also agreed that the lack of external funding is an update that forecasting (as currently conceived) has more hype than real utility. I tend to think this is because of the narrowness of how forecasting is currently framed, though (see my comments on tractability above).
That’s a great resource I wasn’t aware of, thanks (did you make it?). I do think that OpenPhil has spent a commendable amount of money on forecasting to date (though nowhere near half of what Animal Welfare gets; more like a tenth). But I think this has been done very unsystematically, with no dedicated grantmaker. My understanding is that it was a side project of Luke Muehlhauser’s for a long time; when I reached out in Jan ’23, he said they were not making new forecasting grants until they filled this role. Even if it took a year, I’m glad this program is now launched!
Yes, it’s a meta topic; I’m commenting less on the importance of forecasting in an ITN framework and more on its neglectedness. This stuff basically doesn’t get funding outside of EA, and even inside EA it had no institutional commitment; outside of scattered one-off grants, the largest forecasting funding program I’m aware of over the last 2 years was $30k in “minigrants” funded by Scott Alexander out of pocket.
But on the importance of it: insofar as you think future people matter and that we have the ability and responsibility to help them, forecasting the future is paramount. Steering today’s world without understanding the future would be like trying to help people in Africa, but without overseas reporting to guide you—you’ll obviously do worse if you can’t see outcomes of your actions.
You can make a reasonable argument (as some other commenters do!) that the tractability of forecasting to date hasn’t been great; I agree that the most common approaches of “tournament setting forecasting” or “superforecaster consulting” haven’t produced much of decision-relevance. But there are many other possible approaches (eg FutureSearch.ai is doing interesting things using an LLM to forecast), and I’m again excited to see what Ben and Javier do here.
Awesome to hear! I’m happy that OpenPhil has promoted forecasting to its own dedicated cause area with its own team; I’m hoping this provides more predictable funding for EA forecasting work, which otherwise has felt a bit like a neglected stepchild compared to GCR/GHD/AW. I’ve spoken with both Ben and Javier, who are both very dedicated to the cause of forecasting, and am excited to see what their team does this year!
It really was a time-suck, and I really have experienced the related point in the past! But I loved putting time into Manifund instead of reading yet another decision-irrelevant post.
Happy to hear you enjoyed your time regranting! I’d love to get a quick estimate on how much time you spent as a regrantor, just for the purposes of our calibration. My napkin math: (8 grants made * 6h) + (16 grants investigated * 1h) = 64h?
I expect more quickly diminishing returns within the grantmaking of a given regrantor than I would for a more centralized operation. This is principally because independent regrantors have more limited deal flow, making their early grants look unusually strong.
I think this could become true eventually; but imo currently, most of our small ($50k) budget regrantors could effectively allocate $200-$500k/year budgets. Eg you mentioned earlier that many opportunities of the form “start this great org” require >$50k; also, many regrants on Manifund include a statement like “I would give more here if I could but my budget is limited”.
I also want to note that the overall regranting model can easily scale by adding additional regrantors; we’ve received a lot of inbound interest in becoming regrantors despite little outreach, and many highly-trusted EA folks (even some grantmakers!) appreciate the greater flexibility offered by the regranting model.
At best, low-responsibility, low-social-downside giving now feels not as effective as it could be. At worst, this giving behavior makes me feel like a self-inhibited, intentionless, incomplete person.
Concretely, I think I will halt recurring donations. I want to give in bulk, less frequently, more thoughtfully, and perhaps not to recognisable charities. If this feels like it goes against the spirit of the Giving What We Can Pledge, then I will exit the pledge.
Thanks for writing this bit; it mirrors my own thinking on my personal donation allocation as I’ve spent more time in the core EA ecosystem. While I was working at Google, sending a yearly donation to GiveWell’s top charities seemed reasonable; now I have a much better handle on what opportunities may be more effective.
In fact, your regranting process seems reminiscent of early EA. Pre-GiveWell, Holden & Elie spent a bunch of time investigating orgs themselves and made judgement calls about where to send their money. In contrast, EA donations today are characterized by a lot of deference to other experts and evaluators (GiveWell, OpenPhil, ACE, etc); I like that regranting captures some of the original spirit of the movement.
Ah, sorry you got that impression from my question! I mostly meant Harvard in the sense of “desirability among applicants” as opposed to “established bureaucracy”. My outside impression is that many people I deeply respect (like you!) chose to work at OP over their many other options. And I’ve heard informal complaints from leaders of other EA orgs, roughly: “it’s hard to find and keep good people, because our best candidates keep joining OP instead”. So I was curious to learn more about OP’s internal thinking about this effect.
Did you ever consider starting your own company (software or otherwise) for earning to give?
I have this impression of OpenPhil as being the Harvard of EA orgs—that is, it’s the premier choice of workplace for many highly-engaged EAs, drawing in lots of talent, with distortionary effects on other orgs trying to hire 😅
When should someone who cares a lot about GCRs decide not to work at OP?
For sure, I think a slightly more comprehensive comparison of grantmakers would include the stats for the number of grants, median check size, and amount of public info for each grant made.
Also, perhaps # of employees, or ratio of grants per employee? Like, OpenPhil is ~120 FTE, Manifund/EA Funds are ~2, this naturally leads to differences in writeup-producing capabilities.