we see how the output depends on a particular input even in the face of variations in all the other inputs—we don’t hold everything else constant. In other words, this is a global sensitivity analysis.
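For concreteness, here is a minimal sketch of how such a scatter is produced, assuming a toy two-input model; the distributions and the model itself are invented for illustration and are not GiveWell’s actual numbers:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n = 5_000

# Sample every input at once; nothing is held constant.
discount_rate = rng.uniform(0.02, 0.06, n)
value_of_consumption = rng.normal(1.0, 0.2, n)

# Toy stand-in for a cost-effectiveness model.
output = value_of_consumption / (1 + discount_rate) ** 10

# Plot the output against one input while the other keeps varying: the
# vertical spread at each x value comes from the remaining input, which
# is what makes this a global rather than local sensitivity picture.
plt.scatter(value_of_consumption, output, s=12, facecolors="none", edgecolors="C0")
plt.xlabel("value of increasing consumption")
plt.ylabel("cost-effectiveness (toy units)")
plt.show()
```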
Some quick points:

- The scatterplots would look nicer with hollow circles.
- I’m a bit confused. In the GiveDirectly case for ‘value of increasing consumption’, you’re still holding the discount rate constant, right?
- To address the recurring caveat, I wonder if we could plot the posterior mode/stdev against the input confidence interval length. Basically, taking GiveWell’s point estimate as the prior mean, how do the cost-effectiveness estimates (and their uncertainty) change as we vary our uncertainty over the input parameters? (A toy sketch of what I have in mind follows below.)
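As a rough illustration of the suggested sweep, a minimal sketch assuming a toy one-input model; the point estimate, spread grid, and discount factor are all made up:

```python
import numpy as np

rng = np.random.default_rng(0)
point_estimate = 1.0                    # stand-in for a GiveWell point estimate
spreads = np.linspace(0.05, 0.50, 10)   # candidate input uncertainties (±5% .. ±50%)

for s in spreads:
    # Center the input distribution on the point estimate, vary its width.
    x = rng.normal(point_estimate, s * point_estimate, 10_000)
    out = x / 1.04 ** 10                # toy cost-effectiveness model
    # Median as a cheap stand-in for the posterior mode.
    print(f"spread ±{s:.0%}: center {np.median(out):.3f}, stdev {out.std():.3f}")
```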
More to come!
> I’m a bit confused. In the GiveDirectly case for ‘value of increasing consumption’, you’re still holding the discount rate constant, right?

Nope, it varies. One way to check this intuitively: if the discount rate and all the other parameters were held constant, we’d have a proper function, and the scatter plot would show at most one output value for each input.
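That check is easy to demonstrate on a toy model (the distributions and numbers are again illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.full(1000, 1.0)  # the same input value, 1000 times

# Discount rate held constant: a proper function, so exactly one output value.
outputs_held = x / (1 + 0.04) ** 10
print(np.unique(outputs_held).size)      # -> 1

# Discount rate varying: many output values for the same input value,
# which is the vertical spread visible in the scatter plots.
discount_rate = rng.uniform(0.02, 0.06, 1000)
outputs_varying = x / (1 + discount_rate) ** 10
print(np.unique(outputs_varying).size)   # -> ~1000
```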
> taking GiveWell’s point estimate as the prior mean, how do the cost-effectiveness estimates (and their uncertainty) change as we vary our uncertainty over the input parameters?

There are (at least) two versions I can think of:

1. Adjust all the input uncertainties in concert. That is, spread all the point estimates by ±20%, or all by ±30%, etc. This would be computationally tractable, but I’m not sure it would get us much extra: the key problem with the current approach, that we’re radically more uncertain about some of the inputs than others, would remain.
2. Adjust all the input uncertainties individually. That is, spread point estimate 1 by ±20%, point estimate 2 by ±10%, etc. Then spread point estimate 1 by ±10%, point estimate 2 by ±20%, etc. Repeat for all combinations of spreads and inputs. This would actually give us somewhat useful information, but it would be computationally intractable given the number of input parameters (see the back-of-envelope sketch below).
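For a sense of scale on that intractability claim, a back-of-envelope sketch; the number of inputs, the spread grid, and the samples per run are invented for illustration:

```python
# s candidate spreads per input and k inputs give s**k grid cells, and
# each cell needs its own Monte Carlo run.
spreads_per_input = 5        # e.g. ±10%, ±20%, ..., ±50%
n_inputs = 20                # rough order of magnitude for a full model
samples_per_cell = 10_000

cells = spreads_per_input ** n_inputs
print(f"{cells:.2e} grid cells")                      # ~9.5e+13
print(f"{cells * samples_per_cell:.2e} model draws")  # ~9.5e+17

# Version 1 (all spreads moved in concert) collapses this to a 1-D sweep:
print(spreads_per_input, "runs for the in-concert version")
```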