Executive summary: This exploratory post argues that conditional forecasting—eliciting structured probabilistic beliefs about related questions—can make expert models more transparent and comparable, offering a promising approach to reasoning about complex, uncertain domains like emerging technologies where traditional forecasting struggles.
Key points:
Conditional forecasting can surface latent expert models: Asking experts to provide conditional probabilities (e.g., P(U|O)) helps clarify and structure their beliefs, turning intuitive, fuzzy mental models into more legible causal graphs.
Comparing models reveals deep disagreements: Instead of just comparing forecast outcomes, eliciting and comparing the structure of experts’ conditional beliefs helps identify where disagreements stem from—different assumptions, primitives, or parameter weightings.
Mutual information helps prioritize questions: The authors propose using mutual information, I(U;C), to quantify how informative a crux question C is about a main forecast question U, helping rank and choose valuable intermediate questions (see the sketch after this list).
Forecasting with different primitives highlights mental model differences: Disagreement often arises because people conceptualize problems using different foundational building blocks (“primitives”); surfacing these differences can lead to better communication and model integration.
Practical applications show promise but need more testing: A small experiment at Manifest 2023 showed that a fine-tuned GPT-3.5 could generate crux questions rated more informative than those from humans, but larger trials are needed.
Invitation to collaborate: The authors are exploring these ideas further at Metaculus and invite others interested in applying or refining such techniques to reach out.
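As a rough illustration of the I(U;C) idea, here is a minimal Python sketch (not from the post; the probabilities and function name are illustrative assumptions) that takes a forecaster's P(C) and conditional forecasts P(U|C) and P(U|¬C) for binary questions, and computes the mutual information between a crux C and the ultimate question U:

```python
import math

def mutual_information(p_c, p_u_given_c, p_u_given_not_c):
    """I(U; C) in bits for binary U and C, from P(C), P(U|C), P(U|not C).
    Illustrative helper, not code from the post."""
    # Joint distribution over (U, C) implied by the forecaster's beliefs
    joint = {
        (1, 1): p_c * p_u_given_c,
        (0, 1): p_c * (1 - p_u_given_c),
        (1, 0): (1 - p_c) * p_u_given_not_c,
        (0, 0): (1 - p_c) * (1 - p_u_given_not_c),
    }
    p_u = {1: joint[(1, 1)] + joint[(1, 0)], 0: joint[(0, 1)] + joint[(0, 0)]}
    p_c_marg = {1: p_c, 0: 1 - p_c}
    mi = 0.0
    for (u, c), p in joint.items():
        if p > 0:
            mi += p * math.log2(p / (p_u[u] * p_c_marg[c]))
    return mi

# A crux whose resolution strongly moves the forecast for U scores high...
print(mutual_information(p_c=0.5, p_u_given_c=0.9, p_u_given_not_c=0.2))
# ...while one that barely moves it scores near zero.
print(mutual_information(p_c=0.5, p_u_given_c=0.55, p_u_given_not_c=0.5))
```

Under this kind of scoring, candidate crux questions can be ranked by how much their resolution would be expected to shift the main forecast, which is the prioritization role the summary attributes to I(U;C).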
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.