Hi Froolow, thanks for taking the time to write up this piece. I found your explanations clear and concise, and the worked examples really helped to demonstrate your point. I really appreciate the level of assumed knowledge and abstraction—nothing too deep assumed. I wish there were more posts like this on the forum!
Here are some questions this made me think about:
Do you have any recommended further reading? Three examples of things I’d like to hear about:
1)a) Really well-done applications of uncertainty analysis that changed long-standing decisions
1)b) Theoretical work, or textbook demonstrations, that give a foundational understanding
1)c) The most speculative work you know of that uses uncertainty analysis
I think (1c) would be particularly useful for porting this analysis to longtermist pursuits. There is little evidence in these fields, and little ability to get evidence, so I would want to consider similar case studies, though perhaps on a larger scale than everyday health economics.
Are there levels above PSA that capture uncertainty in model formation or parameter covariance?
Many of these levels seem liable to suffer from any underlying structural flaws in the model. PSA seems to probe this via Monte Carlo. But if, for example, there were covarying parameters, are there methods for assigning model or ‘hyperparameter’[^hyp] uncertainty?
Somewhat relatedly:
I’m concerned that in thresholding a single parameter, what’s actually happening is that a separate, more pivotal parameter’s effects are over-weighting this parameter. This would be more of a problem in scenario analysis, since nothing else is varying. But under PSA, perhaps this could arise through non-representative sampling distributions?
I think something funky might be happening under this form of risk adjustment. The variance of the outcome has been adjusted by pulling out the tails, but I don’t think this mimics the decision-making of a risk-averse individual. Instead I think you would want to form the expected return and compare it to the expected return from a risk-averse motivation function.
Meta: I hope this doesn’t come across as suggesting any of these questions should reduce the use of uncertainty analysis! I’m just wondering how this is dealt with in normal health economics practice :)
[^hyp]: I don’t think ‘hyperparameter’ is the correct term here; I mean some sort of adjustment to the sampling distribution.
Thank you for the kind words—and it is always nice to get follow-up questions!
Further reading
In terms of recommended further reading, almost all UK-based Health Economists swear by ‘the Briggs book’. This contains step-by-step instructions for doing almost everything I describe above, as well as more detail around motivation and assumptions.
If you don’t want to shell out for a textbook, an excellent exploration of uncertainty is Claxton et al 2015, where the authors demonstrate that the value of additional information on the uncertainty around streptokinase following heart attack was so small as to be negligible, which implies that a major shift in health policy could have been undertaken five years earlier and without several massive, expensive trials. Claxton is one of the co-authors of the Briggs book, so knows his stuff inside out.
In terms of EA specific follow-ups, I have always really loved Kwakkel & Pruyt 2013 for their use of uncertainty analysis in a framework that EAs would recognise as longtermist. Their first example is on mineral scarcity in the medium-term future, and they go through a process very similar to that which is done for x-risk type calculations, but with what I regard as a significantly higher degree of rigour and transparency. If someone asked me to model out AI alignment scenarios I would follow this paper almost to the letter, although I would warn anyone casually clicking through that this is pretty hardcore stuff that you can’t just knock together in Excel (see their Fig 1, for example).
I note you also ask for the most speculative use of uncertainty analysis, for which I have a rather interesting answer. I remember once reading a paper on the use of Monte Carlo modelling of parameter uncertainty to resolve the Fermi Paradox (that is, why has no alien intelligence contacted us if the universe is so vast). The paper really entertained me, but I completely forgot the reference until I tracked the paper down to link it for you now—it is Sandberg, Drexler & Ord 2018, and the ‘Ord’ in the third author position is Toby Ord, who I suspect is better known to forum members as one of the founders of EA—what a lovely coincidence!
Model covariance
You are right to raise covariance in Monte Carlo simulations as a clear issue with the way I have presented the topic, but you’ll be pleased to know that this is basically a solved problem in Health Economics which I just skimmed over in the interests of time. The ‘textbook’ method is to take a Cholesky decomposition of the covariance matrix and use it to transform independent draws into correlated ones. In recent years I’ve also started experimenting with microsimulating the underlying process which generates the correlated results, with some mixed success (but it is cool when it works!).
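If it helps to see the mechanics, here is a minimal sketch of the Cholesky approach in Python (the parameter means, standard deviations and correlation are invented purely for illustration, not taken from any real model):

```python
import numpy as np

# Illustrative only: two correlated model parameters (e.g. a relative risk and a cost),
# with means, standard deviations and a correlation chosen purely for demonstration.
means = np.array([0.80, 1500.0])
sds = np.array([0.10, 300.0])
corr = np.array([[1.0, 0.6],
                 [0.6, 1.0]])

# Build the covariance matrix and take its Cholesky factor
# (lower-triangular L such that L @ L.T equals the covariance matrix).
cov = np.outer(sds, sds) * corr
L = np.linalg.cholesky(cov)

# Each PSA draw: sample independent standard normals, then transform them
# so the resulting parameter set has the desired covariance structure.
rng = np.random.default_rng(42)
n_draws = 10_000
z = rng.standard_normal((n_draws, 2))
samples = means + z @ L.T

print(np.corrcoef(samples, rowvar=False))  # empirical correlation close to 0.6
```

Each draw pushes independent standard normals through the Cholesky factor, so the sampled parameter sets come out with the covariance structure you specified.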
Risk adjustment
Your comments on risk adjustment are completely correct—amongst the many problems my approach causes, it takes unlikely outcomes (i.e. those many standard deviations from the average) and implicitly turns them into outcomes which are proportionally even more unlikely, sometimes to the point of requiring completely impossible inputs to generate those outputs. I hope I caveated the weakness of the method appropriately, because it isn’t a good model of how humans approach risk (it is more of a proof of concept).
There is a fairly novel method just breaking into the Health Economics literature called a CERAC, which uses the process you outline: treating a model as a portfolio with an expected return, and penalising the downside risk of those returns accordingly. I suspect something like this is the best way to handle risk adjustment in a model without an explicit model of risk-preference specified across all possible outcomes. Unfortunately, to use the technique as described you need a cost-effectiveness threshold, which doesn’t exist in EA (and will never exist in EA as a matter of first principles). As I mentioned, I work in an exclusively expected utility context so I’m not familiar enough with the technique to be confident of adapting it to EA, although if someone with a better maths background than me wanted to give it a shot I suspect that would be a pretty valuable extension of the general principle I outline.
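To give a flavour of the portfolio-style idea (and emphatically not as an implementation of CERAC itself, for which I would defer to the literature), here is a toy sketch in which the per-draw net benefit from a PSA is simulated and then penalised by its downside risk; the numbers and the risk-aversion weight are entirely arbitrary:

```python
import numpy as np

# Toy sketch of the portfolio-style idea: penalise the expected net benefit
# of an intervention by its downside risk. This is NOT the CERAC method as
# published, just an illustration of the general principle. The per-draw net
# benefits below are simulated purely for demonstration; in practice they
# would come from your PSA output.
rng = np.random.default_rng(0)
psa_net_benefit = rng.normal(loc=1000.0, scale=2500.0, size=10_000)

expected_return = psa_net_benefit.mean()

# Downside risk: mean shortfall below a reference point (here, zero net benefit).
shortfall = np.clip(-psa_net_benefit, 0.0, None)
downside_risk = shortfall.mean()

# Risk-adjusted score with an arbitrary risk-aversion weight.
risk_aversion = 0.5
risk_adjusted = expected_return - risk_aversion * downside_risk
print(expected_return, downside_risk, risk_adjusted)
```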
RE 2: could you please clarify your question? Perhaps provide an example of what you’d like to do? There’s nothing about Monte Carlo methods that stops you correlating parameters (although you do need to quantify the correlations), but it’s normally easier and more interpretable to instead build your model on uncorrelated input parameters and then use functions of those parameters to induce correlation in the output.
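As a concrete illustration of that second approach, here is a toy sketch where the sampled inputs stay independent but the outputs end up correlated because they share a parameter; all names and values are invented for illustration:

```python
import numpy as np

# Minimal sketch: keep the sampled inputs independent, and let correlation in
# the outputs arise from how the model combines them.
rng = np.random.default_rng(1)
n = 10_000

baseline_risk = rng.beta(20, 80, size=n)                  # independent draw
relative_risk = rng.lognormal(np.log(0.8), 0.1, size=n)   # independent draw

# Both outputs are functions of baseline_risk, so they come out correlated
# even though the sampled inputs were not.
risk_control = baseline_risk
risk_treatment = baseline_risk * relative_risk

print(np.corrcoef(risk_control, risk_treatment)[0, 1])  # strongly positive
```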