I think forecasting is attractive to many people in EA, myself included, because EA skews towards curious people from STEM backgrounds who like games. However, I have yet to see a robust case for it being an effective use of charitable funds (if there is one, please point me to it). I'm worried we are not being objective enough, and are looking for facts that support a pre-drawn conclusion rather than the other way round.
I think the fact that forecasting is a popular hobby is probably pretty distorting of priorities.
There are now thousands of EAs whose experience of forecasting is participating in fun competitions which have been optimised for their enjoyment. This mass of opinion and consequent discourse has very little connection to what should be the ultimate end goal of forecasting: providing useful information to decision makers.
For example, I’d love to know how INFER is going. Are the forecasts relevant to decision makers? Who reads their reports? How well do people figuring out what to forecast understand the range of policy options available and prioritise forecasts to inform them? Is there regular contact and a trusting relationship at senior executive level? Would it help more if the forecasting were faster, or broader in scope?
These are all very important questions, but they are invisible to forecaster participants, so they end up not being talked about much.
Yeah, it seems similar to other areas where the discussion around the cause area and the cause area itself may be quite different (see also the disparity between resources and discussion for global health vs. AI).
Interest in forecasting within the EA community long predates the existence of any gamified forecasting platforms, so it seems pretty unlikely that, at a high level, the EA community is interested primarily because it's a fun game. This doesn't prove more recent interest isn't driven by the gamified platforms, though my sense is that the current level of relative interest is similar to where it was a decade ago, so it doesn't feel like they caused a huge shift.
Also, AI timelines forecasting work has been highly decision-relevant to a large number of people within the EA community. My guess is it’s the single research intervention that has caused the largest shift in altruistic capital allocation in the last few years. There also exists a large number of pretty simple arguments in favor of forecasting work being valuable, which have been made in many places (some links here, also a bunch of Robin Hanson’s work on prediction markets).
At a higher level, there are also many instances of new types of derivatives markets increasing efficiency of some market, which would probably also apply to prediction markets.
I feel like the prediction-markets themselves are best modeled as derivative markets. And then you are talking about second-order derivative markets here. But IDK, mostly sounds like semantics.
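Since Hanson's market designs come up above, a toy version of his logarithmic market scoring rule (LMSR) may make the "prediction markets as belief aggregators" framing concrete. This is a minimal sketch; the liquidity parameter `b` and the trade size are arbitrary illustrative choices, not values from any real platform:

```python
import math

def lmsr_cost(q, b=100.0):
    """LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_price(q, i, b=100.0):
    """Instantaneous price of outcome i, interpretable as its implied probability."""
    total = sum(math.exp(qj / b) for qj in q)
    return math.exp(q[i] / b) / total

def trade_cost(q, i, shares, b=100.0):
    """Amount a trader pays the market maker to buy `shares` of outcome i."""
    q_new = list(q)
    q_new[i] += shares
    return lmsr_cost(q_new, b) - lmsr_cost(q, b)

# Two-outcome market with no trades yet: both outcomes priced at 0.5.
q = [0.0, 0.0]
print(lmsr_price(q, 0))  # 0.5

# A trader who believes "yes" is underpriced buys 50 yes-shares,
# which pushes the implied probability of "yes" above 0.5.
cost = trade_cost(q, 0, 50)
q[0] += 50
print(lmsr_price(q, 0))  # ≈ 0.62
```

A nice property of this design is that the market maker's worst-case subsidy is bounded (by b·ln(n) for n outcomes), which is what makes it workable for thin markets where a continuous double auction would have no liquidity.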
I'm considering elaborating on this in a full post, but I'll sketch it quickly here as well: it appears to me that there's a misunderstanding here, leading to unnecessary disagreement.
I think the nature of forecasting in the context of decision-making within governments and other large institutions is very different from what is typically seen on platforms like Manifold, Polymarket, or even Metaculus. I agree that these platforms often treat forecasting more as a game or hobby, which is fine, but it is very different from the kind of questions policymakers want answered.
I (and I hope this aligns with OP's vision) would want to see a greater emphasis on advancing forecasting specifically tailored for decision-makers. This focus diverges significantly from the casual or hobbyist approach observed on these platforms. The questions you ask should probably not be public, and they are usually far more boring. In practice, it looks more like an advanced Delphi method than it looks like Manifold Markets. I'm somewhat surprised to see interpretations of this post suggesting a need for more funding for the more recreational type of forecasting, which, in my view, is not and should not be a priority.
Edit: One obvious exception to the dichotomy I describe above is that the more fun forecasting platforms can be a good way of identifying good forecasters.
Would you count Holden’s take here as a robust case for funding forecasting as an effective use of charitable funds?
It’s not controversial to say a highly general AI system, such as PASTA, would be momentous. The question is, when (if ever) will such a thing exist?
Over the last few years, a team at Open Philanthropy has investigated this question from multiple angles.
One forecasting method observes that:
No AI model to date has been even 1% as “big” (in terms of computations performed) as a human brain, and until recently this wouldn’t have been affordable—but that will change relatively soon.
And by the end of this century, it will be affordable to train enormous AI models many times over; to train human-brain-sized models on enormously difficult, expensive tasks; and even perhaps to perform as many computations as have been done “by evolution” (by all animal brains in history to date).
This method’s predictions are in line with the latest survey of AI researchers: something like PASTA is more likely than not this century.
A number of other angles have been examined as well.
One challenge for these forecasts: there’s no “field of AI forecasting” and no expert consensus comparable to the one around climate change.
It’s hard to be confident when the discussions around these topics are small and limited. But I think we should take the “most important century” hypothesis seriously based on what we know now, until and unless a “field of AI forecasting” develops.
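To make the shape of that compute comparison concrete, here is a toy back-of-envelope in the spirit of the approach the excerpt describes. Every constant below is a placeholder chosen for illustration, not a figure from Open Phil's actual reports:

```python
# All constants are illustrative placeholders, not Open Phil's estimates.
BRAIN_FLOP_PER_SECOND = 1e15   # one common order-of-magnitude guess for the human brain
SECONDS_PER_YEAR = 3.15e7

# Compute performed by one "brain-year" of thought:
brain_year_flop = BRAIN_FLOP_PER_SECOND * SECONDS_PER_YEAR   # ≈ 3e22 FLOP

# A hypothetical large training run, and what it costs at an assumed price:
training_run_flop = 1e25       # placeholder training budget
dollars_per_flop = 1e-17       # placeholder hardware price; falls over time

print(f"One brain-year ≈ {brain_year_flop:.1e} FLOP")
print(f"Training run ≈ {training_run_flop / brain_year_flop:.0f} brain-years of compute")
print(f"At the assumed price, the run costs ≈ ${training_run_flop * dollars_per_flop:,.0f}")
```

The argument's force comes from the trend in the price line: hold the brain anchor fixed, let $/FLOP fall, and brain-scale training runs move from unaffordable to routine.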
Actually, maybe it’s also useful to just look at the biggest grants from that list:
- $7,993,780 over two years to the Applied Research Laboratory for Intelligence and Security at the University of Maryland, to support the development of two forecasting platforms, in a project led by Dr. Adam Russell. The forecasting platforms will be provided as a resource to help answer questions for policymakers (writeup)
- Two grants totaling $6,305,675 over three years to support the Forecasting Research Institute (FRI)'s work on projects to advance the science of forecasting as a tool to improve public policy and reduce existential risk. This includes developing a new modular forecasting platform and conducting research to test different forecasting techniques. This follows our October 2021 support ($275,000) for planning work by FRI Chief Scientist Philip Tetlock, and falls within our work on global catastrophic risks (writeup)
- $3,000,000 to Metaculus to support work to improve its online forecasting platform, which allows forecasters to make predictions about world events. We believe that this work will help to provide more accurate and calibrated forecasts in domains relevant to Open Philanthropy's work, such as artificial intelligence and biosecurity and pandemic preparedness, and enable organizations and individuals working in those areas to make better decisions. This follows our May 2022 support ($5,500,000) and falls within our work on global catastrophic risks (writeup)
Thanks for sharing. It’s a start, but it’s certainly not a proven Theory of Change. For example, Tetlock himself said that nebulous long-term forecasts are hard to do because there’s no feedback loop. Hence, a prediction market on an existential risk will be inherently flawed.
I don't think that really works: you can get feedback on 5-year forecasts in 5 years. Metaculus already has some suggestions as to which people are good 5-year forecasters.
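On the feedback-loop point: once a question resolves, forecasters can be scored with a proper scoring rule, which is the mechanism platforms use to build the track records mentioned here. A minimal sketch using the Brier score (the forecasts and outcomes below are made-up examples):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes.
    Lower is better; always answering 0.5 scores exactly 0.25."""
    assert len(forecasts) == len(outcomes)
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical resolved questions (1 = happened, 0 = didn't):
outcomes = [1, 0, 1, 1, 0]

# Forecaster A is sharp and well-calibrated; B hedges near 50%.
forecaster_a = [0.9, 0.2, 0.8, 0.7, 0.1]
forecaster_b = [0.6, 0.5, 0.5, 0.6, 0.4]

print(brier_score(forecaster_a, outcomes))  # ≈ 0.038
print(brier_score(forecaster_b, outcomes))  # ≈ 0.196
```

The Brier score is "proper": a forecaster minimizes their expected score by reporting their true probability. A multi-year record of such scores is exactly the feedback loop in question, just on a slow clock.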
Personally, I think forecasting specifically for drug development could be very impactful, both in the general sense of aligning fields around the probability of success of different approaches (at a range of scales, which is very relevant both for scientists and funders) and in the more specific regulatory use case (public predictions of the safety/efficacy of medications as part of approvals by the FDA, EMA, etc.).
More broadly, predicting the future is hugely valuable. Insofar as effective altruism aims to achieve consequentialist goals, the greatest weakness of consequentialism is uncertainty about the effects of our actions, and forecasting targets that problem directly. The financial system creates a robust set of incentives to predict future financial outcomes; trying to use forecasting to build a tool with a broader purpose than finance seems like it could be extremely valuable.
I don't really do forecasting myself, so I can't speak to the field's practical ability to achieve its goals (though as an outsider I feel optimistic); perhaps there are practical reasons it might not be a good investment. But overall, to me it definitely feels like the right thing to be aiming at.
Whether or not forecasting is a good use of funds, good decision-making is probably correlated with impact.
So I'm open to the idea that forecasting hasn't been a good use of funds, but a priori it seems like it should be. Forecasting, in one sense, is predicting how decisions will go. How could that not be a good idea in theory?
More robust cases in practice:
- Forecasters have good track records and are provably good thinkers.
- They can red-team institutional decisions ("what will be the impacts of this?"); in some sense this is similar to research.
- Forecasting is becoming a larger part of the discourse, and this is probably good. It is much more common to see the Economist, the FT, Matt Yglesias, or Twitter discourse referencing specific testable predictions.
- In making AI policy specifically, it seems very valuable to estimate progress and the impact of changes. To me it looks like Epoch and Metaculus do useful work here that people find valuable.
FYI, just wrote a small piece on “Higher-order forecasts”, which I see as the equivalent to derivatives. https://forum.effectivealtruism.org/posts/PB57prp5kEMDgwJsm/higher-order-forecasts
I agree they can help with efficiency.
Yea, that’s a reasonable way of looking at it. Agreed it is just semantics.
As semantics though, my guess is that “nth-order forecasts” will be more intuitive to most people than something like “n-1th order derivatives”.
This is my own (possibly very naive) interpretation of one motivation behind some of Open Phil’s forecasting-related grants.
None of the above are prediction markets.
Thanks for the comment, Grayden. For context, readers may want to check the question post "Why is EA so enthusiastic about forecasting?".
Thanks for sharing, but nobody on that thread seems to be able to explain it! Most people there, like here, seem very sceptical.
COI: I work in forecasting.