Hey there~ I’m Austin, currently building https://manifold.markets. Always happy to meet people; reach out at firstname.lastname@example.org, or find a time on https://calendly.com/austinchen/manifold !
Yeah, I agree neglectedness is less important but it does capture something important; I think eg climate change is both important and tractable but not neglected. In my head, “importance” is about “how much would a perfectly rational world direct at this?” while “neglected” is “how far are we from that world?”.
Also agreed that the lack of external funding is an update that forecasting (as currently conceived) has more hype than real utility. I tend to think this is because of the narrowness of how forecasting is currently framed, though (see my comments on tractability above).
That’s a great resource I wasn’t aware of, thanks (did you make it?). I do think that OpenPhil has spent a commendable amount of money on forecasting to date (though: nowhere near half of Animal Welfare’s, more like a tenth). But I think this has been done very unsystematically, with no dedicated grantmaker. My understanding is that it was, for a long time, a side project of Luke Muehlhauser’s; when I reached out in Jan ’23 he said they were not making new forecasting grants until they filled this role. Even if it took a year, I’m glad this program is now launched!
Yes, it’s a meta topic; I’m commenting less on the importance of forecasting in an ITN framework and more on its neglectedness. This stuff basically doesn’t get funding outside of EA, and even inside EA it had no institutional commitment; outside of scattered one-off grants, the largest forecasting funding program I’m aware of over the last 2 years was the $30k in “minigrants” funded by Scott Alexander out of pocket.
But on the importance of it: insofar as you think future people matter and that we have the ability and responsibility to help them, forecasting the future is paramount. Steering today’s world without understanding the future would be like trying to help people in Africa, but without overseas reporting to guide you—you’ll obviously do worse if you can’t see outcomes of your actions.
You can make a reasonable argument (as some other commenters do!) that the tractability of forecasting to date hasn’t been great; I agree that the most common approaches of “tournament setting forecasting” or “superforecaster consulting” haven’t produced much of decision-relevance. But there are many other possible approaches (eg FutureSearch.ai is doing interesting things using an LLM to forecast), and I’m again excited to see what Ben and Javier do here.
Awesome to hear! I’m happy that OpenPhil has promoted forecasting to its own dedicated cause area with its own team; I’m hoping this provides more predictable funding for EA forecasting work, which otherwise has felt a bit like a neglected stepchild compared to GCR/GHD/AW. I’ve spoken with both Ben and Javier, who are both very dedicated to the cause of forecasting, and am excited to see what their team does this year!
It really was a time-suck, and I really have experienced the related point in the past! But I loved putting time into Manifund instead of reading yet another decision-irrelevant post.
Happy to hear you enjoyed your time regranting! I’d love to get a quick estimate on how much time you spent as a regrantor, just for the purposes of our calibration. My napkin math: (8 grants made * 6h) + (16 grants investigated * 1h) = 64h?
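Spelling out that napkin math (the per-grant hour figures are just my rough assumptions):

```python
# Rough time estimate for a regrantor, using the assumed figures above.
grants_made = 8            # grants actually made, ~6h of work each (assumption)
hours_per_grant = 6
grants_investigated = 16   # grants investigated but not made, ~1h each (assumption)
hours_per_investigation = 1

total_hours = grants_made * hours_per_grant + grants_investigated * hours_per_investigation
print(total_hours)  # 64
```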
I expect more quickly diminishing returns within the grantmaking of a given regrantor than I would for a more centralized operation. This is principally because independent regrantors have more limited deal flow, making their early grants look unusually strong.
I think this could become true eventually; but imo currently, most of our small ($50k) budget regrantors could effectively allocate $200k-$500k/year budgets. Eg you mentioned earlier that many opportunities of the form “start this great org” require >$50k; also, many regrants on Manifund include a statement like “I would give more here if I could but my budget is limited”.
I also want to note that the overall regranting model can easily scale by adding additional regrantors; we’ve received a lot of inbound interest in becoming regrantors despite little outreach, and many highly-trusted EA folks (even some grantmakers!) appreciate the greater flexibility offered by the regranting model.
At best, low-responsibility, low-social-downside giving now feels not as effective as it could be. At worst, this giving behavior makes me feel like a self-inhibited, intentionless, incomplete person.
Concretely, I think I will halt recurring donations. I want to give in bulk, less frequently, more thoughtfully, and perhaps not to recognisable charities. If this feels like it goes against the spirit of the Giving What We Can Pledge, then I will exit the pledge.
Thanks for writing this bit; it mirrors my own thinking on my personal donation allocation as I’ve spent more time in the core EA ecosystem. While I was working at Google, sending a yearly donation to Givewell’s top charities seemed reasonable; now I have a much better handle on what opportunities may be more effective.
In fact, your regranting process seems reminiscent of early EA. Pre-Givewell, Holden & Elie spent a bunch of time investigating orgs themselves and made judgement calls about where to send their money. In contrast, EA donations today are characterized by a lot of deference to other experts and evaluators (Givewell, OpenPhil, ACE etc); I like that regranting captures some of the original spirit of the movement.
Ah, sorry you got that impression from my question! I mostly meant Harvard in terms of “desirability among applicants” as opposed to “established bureaucracy”. My outside impression is that many people I greatly respect (like you!) made the decision to go work at OP instead of one of their many other options. And I’ve heard informal complaints from leaders of other EA orgs, roughly “it’s hard to find and keep good people, because our best candidates keep joining OP instead”. So I was curious to learn more about OP’s internal thinking about this effect.
Did you ever consider starting your own company (software or otherwise) for earning to give?
I have this impression of OpenPhil as being the Harvard of EA orgs—that is, it’s the premier choice of workplace for many highly-engaged EAs, drawing in lots of talent, with distortionary effects on other orgs trying to hire 😅
When should someone who cares a lot about GCRs decide not to work at OP?
Thanks, really appreciated this post.
In case anyone is looking for a bank recommendation, I would recommend Mercury, for their excellent UX and good pricing model. We use them for both Manifold the for-profit, and Manifold for Charity. They do provide ~5% yield to for-profits through Mercury Treasury (we use a different interest provider, but if we could do it over again, we would definitely choose Mercury Treasury instead). Unfortunately, they don’t provide Treasury to nonprofits. Mercury can also do payments to intl accounts with a 1% FX fee (worse than Wise, but Wise is kind of a PITA and kicked us off their platform :P). Referral link if interested: https://mercury.com/r/manifund
We do also have Stripe Opal for banking and other kinds of money movement, though that fits Manifold & Manifund because we do a significant amount of programmatic money movements—most EA orgs won’t need that.
I’m grateful for the CEA Community Health team—interpersonal issues can be tricky to navigate, but the Health team is consistently nice, responsive, helpful and has many useful resources compiled for making good decisions, whether it be about running an event or managing grant dynamics.
How is the search going for the new LTFF chair? What kind of background and qualities would the ideal candidate have?
I’ve heard this argument a lot (eg in the context of impact markets) and I agree that this consideration is real, but I’m not sure that it should be weighted heavily. I think it depends a lot on what the distribution of impact looks like: the size of the best positive outcomes vs the worst negative ones, their relative frequency, and how different interventions (eg adding screening steps) reduce the number of negative projects but also discourage positive ones.
For example, if in 100 projects, you have [1x +1000, 4x −100, 95x ~0], then I think black swan farming still does a lot better than some process where you try to select the top 10 or something. Meanwhile, if your outcomes look more like [2x +1000, 3x −1000, 95x ~0], then careful filtering starts to matter a lot.
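To make the arithmetic concrete, here’s a quick expected-value sketch of those two hypothetical distributions (the numbers are just the illustrative ones above, not real grant data):

```python
# Toy expected-value comparison for two outcome distributions over 100 projects.
# Each distribution is a list of (count, impact) pairs.
heavy_tailed = [(1, 1000), (4, -100), (95, 0)]   # one huge win dominates the losses
symmetric    = [(2, 1000), (3, -1000), (95, 0)]  # downside rivals the upside

def expected_value(dist):
    """Average impact per project across the distribution."""
    total_impact = sum(count * impact for count, impact in dist)
    total_projects = sum(count for count, _ in dist)
    return total_impact / total_projects

print(expected_value(heavy_tailed))  # 6.0  -> fund broadly; the tail carries you
print(expected_value(symmetric))     # -10.0 -> naive broad funding is net negative
```

In the first world, funding everything nets +6 per project even with zero filtering; in the second, it nets −10, so screening out the bad grants becomes the whole game.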
My intuition is that the best projects are much better than the worst projects are bad, and also that the best projects don’t necessarily look that good at the outset. (To use the example I’m most familiar with, Manifold looked pretty sketchy when we applied for ACX Grants, and got turned down by YC and EA Bahamas; I’m still pretty impressed that Scott figured we were worth funding :P)
“Focused Research Org”
I really appreciated this list of examples and it’s updated me a bit towards checking in with LTFF & others a bit more. That said, I’m not sure adverse selection is a problem that Manifund would want to dedicate significant resources towards solving.
One frame: is longtermist funding more like “admitting a Harvard class/YC batch” or more like “pre-seed/seed-stage funding”? In the former case, it’s more important for funders to avoid bad grants; the prestige of the program and its peer effects are based on high average quality in each cohort. In the latter case, you are “black swan farming”; the important thing is to not miss out on the one Facebook that 1000xs, and you’re happy to fund 99 duds in the meantime.
I currently think the latter is a better representation of longtermist impact, but 1) impact is much harder to measure than startup financial results, and 2) having high average quality/few bad grants might be better for fundraising...