I remember people being excited about ‘OWID for forecasting’, especially for the far-future, last year.
Have you explored the idea of developing forecasting / estimation expertise internally, so that you are able to report on more speculative questions? (I might be wrong, but my impression is that you don’t do much reporting on forecasts or more speculative estimates, except maybe for quite legible / lower-CI stuff like this.)
I think of Epoch AI as doing something like an ‘OWID for AI forecasting’ model, but would be excited to see more folks do this kind of data reporting in other domains!
Within the OWID team, there’s a mix of enthusiasm and skepticism about forecasting. Many of us see it as a promising tool for a more evidence-based understanding of the world, while others express reservations. Much of this skepticism stems from the fact that, often, forecasts lack clear justifications. While the raw forecast is presented, many sites and projects fail to thoroughly explain the reasoning behind these projections. To make forecasting more valuable and accessible, we believe this aspect needs significant improvement.
For now, we’re not planning to start publishing forecasts ourselves. It’s quite a significant and potentially risky step, not to mention it being quite outside our core expertise. It might even be considered off-brand: people primarily come to OWID for our ability to synthesize the state of knowledge around many issues, not necessarily for us to put forth our own speculative hypotheses about future events.
That said, we’ve recently collaborated with Metaculus and Good Judgment on projects aimed at forecasting OWID charts. These have been really fascinating projects and served as good first experiments for us in forecasting. We’re open-minded about further incorporating forecasting in the future without straying too far from our mission and core competencies.
Super interesting, thanks for explaining your reasoning, Ed! (Strong upvoted for your explanation)
+1, I’d be excited for more rigor and norms around reasoning transparency in forecasting as well.
Wow, thanks for linking to the Metaculus and Good Judgment collaborations. Super cool!