Just confirming that informing our own decisions was part of the motivation for past grants, and I expect it to play an important role for our forecasting grants in the future.
[The forecasting money] seems to have overwhelmingly gone to community forecasting sites like Manifold and Metaculus. I don’t see anything like “paying 3 teams of 3 forecasters to compete against each other on some AI timelines questions”.
That’s directionally true, but I think “overwhelmingly” isn’t right.
We did not fund Manifold.
One of our largest forecasting grants went to FRI, which is not a platform.
While it’s fair to say that Metaculus is mostly a platform, it also runs externally-funded tournaments, and has a pro forecaster service.
There were a few grants to more narrowly defined projects.
Most of these are currently not assigned to forecasting as a cause area, but you can find them here (searching for “forecast” in our grants database); see especially those before August 2021. [Update: we have updated the labels, and these grants are now listed here.]
I expect that we’ll make more of these types of grants now that forecasting is a designated area with more capacity.
I’m glad to see the debate on decision relevance in the comments! I think that if we end up considering forecasting a successful focus area in 5-10 years, thinking hard about the value-add to decision-making will likely have played a crucial role in this success.
As for my own view, I do agree that judgmental / subjective probability forecasting hasn’t been as much of a success story as one might have expected about 10 years ago. I also agree that many of the stories people tell about the impact of forecasting naturally raise questions like “so why isn’t this a huge industry now? Why is this project a non-profit?”. We are likely to ask prospective grantees questions of this kind far more often than grantmakers in other focus areas would.
However, I (unsurprisingly) also disagree with the stronger claim that the lack of a large judgmental forecasting industry is conclusive evidence that forecasting doesn’t provide value, and is just an EA hobby horse. While I don’t have capacity to engage in this debate deeply, a few points of rebuttal:
I do think there have been some successes. For instance, the XPT mentioned in this comment certainly affected the personal beliefs of some people in the EA community, and thereby had an influence on resource allocation and career decisions.
Forecasting, as such, is a large industry. I’d assign considerable weight to the idea that making judgmental forecasting the kind of success that model-driven forecasting has been in areas like finance, marketing, or sports is a harder but solvable task. There might simply be a free-riding problem around investing the resources needed to figure out how to make it work.
As a related indirect argument, forecasting has a pretty straightforward a priori case (more accurate information leads to better decision-making), and there are plenty of candidate explanations for why widespread adoption would have been difficult even if forecasting has the potential to be widely useful (e.g. I’m sympathetic to the points made by MaxRa here). Thus, even after updating on the observation that judgmental forecasting hasn’t conquered the world yet, I don’t think we should assign high confidence that it will forever stay a niche industry.
As others have pointed out, only a fairly small fraction of Open Phil’s spending has gone into forecasting so far (about 1%), and this is unlikely to change dramatically in the future. The forecasting community doesn’t need to become a multi-billion-dollar industry to justify that level of spending.