Thanks for the work, this is great!
I especially appreciate the rationale summaries, and generally I’d encourage you to lean more into identifying underlying cruxes as opposed to quantitative estimates. (E.g. I’m skeptical that experts are well calibrated enough to give particularly informative timeline forecasts.)
I’m looking forward to the risk-related surveys. It would be interesting to hear their thoughts on the likelihood of concrete risks. One idea that comes to mind would be forecasts conditional on specific risk-reduction interventions.
Also, I wonder whether the website could feature some more “snack-sized” insights next to the long-form, report-focused presentation. E.g. the Chicago Booth surveys of economic experts focus on ~2–3 questions per article, with a handful of expert rationales quoted in full. That format keeps me coming back because it’s informative and takes less than 5 minutes of my time.
https://kentclarkcenter.org/us-economic-experts-panel/
PS: Just in case something went wrong:
The wave 1 report’s questions imply there should be a section “Cognitive Limitations I”, but it is not included in the wave 1 report.
Same for “Adoption Barriers I”, which is not in the wave 2 report.
Thanks for digging in! We’ve gotten similar feedback on “snack-sized” insights and have it on our list.
Could you say more on “generally I’d encourage you to lean more into identifying underlying cruxes as opposed to quantitative estimates”? I’m not sure I understand what this means in practice, because I think of the two as intimately related. This is likely a product of my view that cruxy questions have a high value of information (some FRI work on this here).
In case it’s of interest, our risk-focused work tends to be in self-contained projects (example), so we can pull in respondents with intimate knowledge of the risk model. Nevertheless, we’ll include some risk questions in future waves.
The two questions you mention were free-text. We asked respondents to list, for example, cognitive limitations of AI. We then compiled the most common responses into resolvable forecasting questions for the subsequent wave.
Hi Connacher! Thanks for the responses, makes sense.
On your question, one thing I often miss in expert surveys is an open-ended question like: “Do you have any other considerations that would help with understanding this topic?”
I generally agree that quantitative questions are intimately connected with identifying cruxes. Asking for quantitative estimates about concrete events is a neat way of forcing experts to get more concrete and of keeping them from getting lost in a vague story. But I suspect that experts’ individual insights often might not seem like cruxes to them, as they’re not used to thinking in those terms. So I think giving experts some prompts to just pour out their thoughts is often neglected. Furthermore, quantitative questions sometimes don’t fully capture all the important angles of an issue, so it’s useful to give respondents many chances to add comments.
Do you view this as separate from the rationale data we also collect? One low-burden way to do this is to just include something like your text in the rationale prompt.
It’s great that you already have a rationale prompt for each question. I’d probably recommend having one prompt like this at the end, marked “(Optional)”, so experts can share any further thoughts they think might be useful.