“While useful, even models that produced a perfect probability density function for precisely selected outcomes would not prove sufficient to answer such questions. Nor are they necessary.”
I recommend reading the DMDU literature, since it goes into far more detail than I can do justice to here.
That said, I believe you are focusing too heavily on whether the distribution exists; the claim should be restated.
Deep uncertainty implies that the range of reasonable distributions admits so many reasonable decisions that attempting to “agree on assumptions, then act” is a poor frame. Instead, you want to explore all reasonable distributions and then “agree on decisions”.
If you are in a state where reasonable people, given the distribution and weighting terms, reach meaningfully different decisions (i.e., different in sign, per your convention above), then it becomes more useful to focus on the timeline and tradeoffs than on the current understanding of the distribution:
- Explore the widest range of scenarios (in the 1/n weighting case, each additional plausible scenario changes the weights of all scenarios)
- Understand the sequence in which actions must be taken and information becomes available
- Identify actions that won’t change with new information
- Identify information that would meaningfully change your decision
- Identify actions that should follow from that new information
- Quantify the tradeoffs each decision forces
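The last few steps can be sketched in code. This is a minimal, hypothetical illustration (the action names, scenario names, and payoff numbers are invented, not from the discussion above): instead of agreeing on scenario weights, evaluate every action under every plausible scenario and compare actions by worst-case regret, which needs no weighting at all.

```python
# Hypothetical payoff table: payoffs[action][scenario].
# All names and numbers are illustrative assumptions.
payoffs = {
    "act_now": {"mild": 4, "moderate": 6, "severe": 9},
    "wait":    {"mild": 8, "moderate": 5, "severe": 1},
    "hedge":   {"mild": 6, "moderate": 6, "severe": 6},
}
scenarios = ["mild", "moderate", "severe"]

# Regret of an action in a scenario = best achievable payoff in that
# scenario minus the action's payoff there. Minimax regret picks the
# action whose worst-case regret is smallest, with no scenario weights.
best = {s: max(p[s] for p in payoffs.values()) for s in scenarios}
regret = {
    a: max(best[s] - p[s] for s in scenarios)
    for a, p in payoffs.items()
}
robust_action = min(regret, key=regret.get)

print(regret)         # worst-case regret per action
print(robust_action)  # "hedge": lowest worst-case regret here
```

Note that adding one more plausible scenario to the table can change which action is robust, which is exactly why expanding the scenario suite comes first.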
This results in building an adaptive policy pathway, rather than making a single decision or even choosing a model framework.
Value is derived from expanding the suite of policies, scenarios, and objectives, or from illustrating the tradeoffs between objectives and how to minimize those tradeoffs via sequencing.
This is in contrast to emphasizing the optimal distribution (or worse, a point estimate) conditional on all available data, since that distribution is still subject to change over time and is evaluated under different weights by different stakeholders.
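To make the stakeholder-weighting point concrete, here is a small hypothetical sketch (again, the payoffs and weights are invented): even when everyone agrees on the payoff table, the expected-value-optimal action flips depending on which scenario weights a stakeholder accepts.

```python
# Hypothetical payoffs[action][scenario]; numbers are illustrative.
payoffs = {
    "act_now": {"mild": 4, "moderate": 6, "severe": 9},
    "wait":    {"mild": 8, "moderate": 5, "severe": 1},
}

# Two stakeholders who accept different scenario weights.
weightings = {
    "stakeholder_A": {"mild": 0.6, "moderate": 0.3, "severe": 0.1},
    "stakeholder_B": {"mild": 0.1, "moderate": 0.3, "severe": 0.6},
}

# Each stakeholder's expected-value-optimal action under their weights.
picks = {}
for who, w in weightings.items():
    ev = {a: sum(w[s] * ps[s] for s in w) for a, ps in payoffs.items()}
    picks[who] = max(ev, key=ev.get)

print(picks)  # same payoff table, different "optimal" action each
```

The disagreement here is not about the data; it is entirely in the weights, which is why the adaptive-pathway framing above focuses on sequencing and tradeoffs instead.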