Thanks for this post—I really enjoyed reading and contemplating it over the last few days. I’m a big supporter of making forecasting more decision-relevant and finding ways to integrate it into the decision-making process.
In summary, I’m unsure how I feel about the potential benefits of this framework versus other methods, and particularly how likely it is to fit the conditions under which policy (especially in central Governments) is formed. I’ve laid out a few more detailed thoughts below as reflections, since I note you are still looking to refine this, but upfront I’ve put some questions which, even if you don’t want to respond to them here, may be helpful when thinking about where to take this.
What’s the specific error in policy-making that you see this process as overcoming most effectively (e.g. using prediction as a basis for better understanding the interactions of policy decisions; aggregating reasoning and judgements to improve decision accuracy; improving transparency and consistency of reasoning to reduce misinterpretations and improve prediction accuracy; etc.)?
This framework is quite lengthy and has several interdependent steps. The challenge with other methods that have been found to have significant accuracy benefits (e.g. aggregating individual Bayesian networks/models[1], a toy sketch of which I’ve put just after these questions) is that they are complex and time-consuming. How do you see the case for QCI over just recommending organisations invest in Bayesian modelling, given the time/accuracy trade-off seems very close?
A lot of policy-making is negotiation and stakeholder management. How do you see this process fitting with that? Do you see it as trying to rise above it, being a tool which is “pure”, and then hoping that politicians have a strong enough will to push through the solution? Or would you suggest integrating such stakeholders’ views into the “collective expertise” process, potentially risking accuracy due to misaligned incentives?
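To be concrete about what I mean in the second question by “aggregating individual Bayesian networks/models”, here is a toy sketch of the general idea rather than the BARD protocol itself: each participant specifies a small driver-to-outcome model, each model is marginalised to an outcome probability via the law of total probability, and the results are pooled. All numbers below are invented.

```python
# Toy illustration only (not the BARD elicitation protocol): each
# participant gives a tiny two-node model (driver -> outcome), we
# marginalise each to P(outcome), then pool with a simple mean.

def implied_outcome_prob(p_driver, p_o_given_d, p_o_given_not_d):
    """Law of total probability: P(o) = P(o|d)P(d) + P(o|~d)P(~d)."""
    return p_o_given_d * p_driver + p_o_given_not_d * (1 - p_driver)

# Hypothetical elicited numbers from three participants.
participants = [
    {"p_driver": 0.6, "p_o_given_d": 0.7, "p_o_given_not_d": 0.20},
    {"p_driver": 0.4, "p_o_given_d": 0.8, "p_o_given_not_d": 0.30},
    {"p_driver": 0.5, "p_o_given_d": 0.6, "p_o_given_not_d": 0.25},
]

individual = [implied_outcome_prob(**p) for p in participants]
pooled = sum(individual) / len(individual)

print(individual)          # each participant's implied P(outcome)
print(round(pooled, 3))    # simple linear pool: 0.475
```

Even at this toy scale the elicitation is the expensive part, which is why the time/accuracy trade-off feels so close to me.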
My more specific points
To reduce the risk of my misinterpreting it, my understanding of the QCI process is:
1) Identify the purpose of the decision (i.e. exploratory, achieving an objective, risk mitigation) --> 2) Identify the predictors in the causal chain --> 3) Forecast the future of these predictors --> 4) Predict how policy interventions change those forecasts.
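For my own clarity, here is how I would sketch that chain as a minimal data structure. This is purely my reading of the post; the field names and example numbers are my own shorthand, not terminology from the framework.

```python
from dataclasses import dataclass, field

# Purely my reading of the QCI chain summarised above; field names and
# example numbers are my own shorthand, not terminology from the post.

@dataclass
class QCIDecision:
    purpose: str                                     # step 1: exploratory / objective / risk mitigation
    predictors: list = field(default_factory=list)   # step 2: predictors in the causal chain
    baseline_forecasts: dict = field(default_factory=dict)      # step 3: P(predictor) with no intervention
    intervention_forecasts: dict = field(default_factory=dict)  # step 4: P(predictor) under each policy option

decision = QCIDecision(
    purpose="risk mitigation",
    predictors=["hospital capacity exceeded next winter"],
    baseline_forecasts={"hospital capacity exceeded next winter": 0.35},
    intervention_forecasts={
        "fund surge beds": {"hospital capacity exceeded next winter": 0.20},
    },
)
print(decision.purpose, decision.intervention_forecasts)
```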
Assuming I’ve not missed a key component, my main reflection is that every stage involves highly complex and uncertain reasoning, yet the system doesn’t seek to build that reasoning capability. To be fair, you may have omitted this for space, but it feels like a big gap given that people’s ability to reason under uncertainty is fundamental to this process working.

Almost every policy decision is a prediction task (e.g. “will x action lead to y outcome?”), and forecasting the future is already intrinsic to most policy-making processes. The issue typically is: (1) people don’t realise that’s what they are doing, so they don’t structure their decision-making accordingly; and (2) people don’t like uncertainty and fail to employ good reasoning to counter the numerous errors their judgements involve. I accept that good forecasters avoid some of these errors by virtue of being good forecasters, but I’m sceptical about how far that can be assumed to translate into the other, non-forecast elements of this process.

I acknowledge there are some easy gains to be had just from (1)/the process itself, namely around quantification of predictions, as that can help reduce misinterpretation errors. However, even there I’d suggest significant benefits could be provided to the process by simple actions such as clearly standardising that quantification (e.g. verbal-to-numeric probability tables) and capturing second-order probabilities (at least for how the most important predictors are expressed). The creation of the concept diagram is really good for this and could be an easy step towards helping land better reasoning practices.
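As an example of the kind of simple standardisation I mean, a verbal-to-numeric mapping plus a basic second-order capture needs very little machinery. The bands below are illustrative only (not any particular official yardstick), and the judgement shown is invented:

```python
# Illustrative verbal-to-numeric probability bands plus a simple
# second-order capture (a point estimate and a credible range).
# Bands and numbers are examples, not a specific official yardstick.

VERBAL_BANDS = {
    "remote chance":         (0.00, 0.05),
    "highly unlikely":       (0.05, 0.20),
    "unlikely":              (0.20, 0.35),
    "realistic possibility": (0.35, 0.55),
    "likely":                (0.55, 0.80),
    "highly likely":         (0.80, 0.95),
    "almost certain":        (0.95, 1.00),
}

def to_numeric(term):
    """Map an agreed verbal probability term to its numeric band."""
    return VERBAL_BANDS[term.lower()]

# Second-order capture for an important predictor: a point estimate plus
# a range expressing how uncertain the forecaster is about that estimate.
judgement = {
    "predictor": "policy X is implemented by 2026",
    "point_estimate": 0.60,
    "credible_range": (0.45, 0.75),   # "my 0.60 could reasonably sit anywhere in here"
    "verbal_equivalent": "likely",
}

low, high = to_numeric(judgement["verbal_equivalent"])
assert low <= judgement["point_estimate"] <= high   # check verbal and numeric agree
```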
I’m also curious why SMEs have been leveraged so heavily throughout, most notably in the forecasting process. I see the need for SMEs to help structure the forecasting questions, but I’m less clear on why their involvement in the forecasts themselves would be beneficial. Unless the SMEs have been trained or are already accurate forecasters, the evidence I’m aware of leans towards the argument that expertise and experience are no real predictors of forecasting accuracy, so aggregating SMEs’ forecasts with Supers’ would seem to risk decreasing accuracy. The rationale may be to sacrifice accuracy for buy-in, which I can appreciate. Ultimately, it increases the importance of how methods such as Delphi are implemented, as there is evidence that an important element of Delphi’s effectiveness is how accurate the majority and/or most vocal members of the group are, since they have an outsized impact on how people integrate and weigh information and arguments[2] (better reasoning methods would help mitigate this).
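To illustrate the aggregation worry numerically: with an unweighted pool the larger (or more vocal) group dominates, whereas weighting by forecasting track record limits that. A minimal sketch, with all probabilities and weights invented for illustration:

```python
# Sketch of the aggregation concern: an unweighted pool lets the larger
# group (here, three SMEs) dominate, while weighting by past forecasting
# accuracy reins that in. All probabilities and weights are invented.

forecasts = {
    # name: (forecast probability, weight from past forecasting accuracy)
    "sme_1":   (0.80, 0.5),
    "sme_2":   (0.75, 0.5),
    "sme_3":   (0.85, 0.5),
    "super_1": (0.55, 1.0),
    "super_2": (0.50, 1.0),
}

unweighted = sum(p for p, _ in forecasts.values()) / len(forecasts)
weighted = (sum(p * w for p, w in forecasts.values())
            / sum(w for _, w in forecasts.values()))

print(round(unweighted, 3))   # 0.69  -- pulled towards the three SMEs
print(round(weighted, 3))     # 0.643 -- closer to the (assumed) better-calibrated pair
```

The same logic is why the dynamics inside a Delphi round matter so much: whoever shapes the group’s view effectively sets the weights.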
[1] BARD: A structured technique for group elicitation of Bayesian networks to support analytic reasoning
[2] Network Structures of Collective Intelligence: The Contingent Benefits of Group Discussion