Around discussions of AI & Forecasting, there seems to be some assumption like:
1. Right now, humans are better than AIs at judgemental forecasting.
2. When humans are better than AIs at forecasting, AIs are useless.
3. At some point, AIs will be better than humans at forecasting.
4. At that point, when it comes to forecasting, humans will be useless.
This comes from a lot of discussion and some research comparing “humans” to “AIs” in forecasting tournaments.
As you might expect, I think this model is incredibly naive. To me, it’s asking questions like,
“Are AIs better than humans at writing code?”
“Are AIs better than humans at trading stocks?”
“Are AIs better than humans at doing operations work?”
I think it should be very clear that there’s a huge period, in each cluster, where it makes sense for humans and AIs to overlap. “Forecasting” is not one homogeneous and singular activity, and neither is programming, stock trading, or doing ops. There’s no clear line for automating “forecasting”—there is instead a very long list of different skills one could automate, with a long tail of tasks that would get increasingly expensive to automate.
Autonomous driving is another similar example. There’s a very long road between “helping drivers with driver-assist features” and “complete level-5 automation, to the point that almost no humans drive for work purposes anymore.”
A much better model is a more nuanced one. Break things down into smaller chunks, and figure out where and how AIs could best augment or replace humans at each of those. Or just spend a lot of time working with human forecasting teams to augment parts of their workflows.
I’m not sure I’ve seen the assumption you describe up front, and would agree with you that anyone making such an assumption is being naive. Not least because humans on average (and even superforecasters under many conditions) are objectively inaccurate at forecasting—even if relatively good given we don’t have anything better yet.
I think the more interesting and important question, when it comes to AI forecasting and claims that it is “good”, is what reasoning process it undertakes to get there. How is it forming reference classes? How is it integrating specific information? How is it updating its posterior to form an accurate inference and likelihood of the event occurring? Right now, AIs can sort of do the first (forming reference classes), but in my experience they don’t do well at all at integration, updating, and making a probabilistic judgment. In fairness, humans often don’t either. But we do it more consistently than current AI.
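For concreteness, the workflow described above (pick a base rate from a reference class, then integrate case-specific evidence by updating) can be sketched as a simple odds-form Bayesian update. This is a minimal illustration, not a claim about how any particular AI or forecaster operates, and all the numbers are made up:

```python
def update(prior: float, likelihood_ratio: float) -> float:
    """Odds-form Bayesian update: posterior odds = prior odds * LR."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Step 1: base rate from a reference class — suppose 20% of
# comparable past events occurred (hypothetical figure).
p = 0.20

# Step 2: integrate two pieces of case-specific evidence, each
# expressed as a likelihood ratio (also hypothetical):
#   LR > 1 makes the event more likely, LR < 1 less likely.
for lr in [3.0, 0.5]:
    p = update(p, lr)

print(round(p, 3))  # -> 0.273
```

The failure mode described in the comment maps onto the steps: current models can often produce something like the 20% base rate, but choosing sensible likelihood ratios for new evidence, and applying them consistently, is where both humans and AIs tend to go wrong.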
For your post, this suggests to me that AI could be used to help with base rate/reference class creation, and maybe to loosely support integration.