Point estimates are fine for multiplication, lossy for division
I think one caveat here is that, if we want to obtain an expected value as output, the input point estimates should refer to the mean instead of the median. They are the same or similar for non-heavy-tailed distributions (like uniform or normal), but could differ a lot for heavy-tailed ones (like exponential or lognormal). When setting a lognormal to a point estimate, I think people often use the geometric mean between 2 percentiles (e.g. 5th and 95th percentiles), which corresponds to the median, not mean. Using the median in this case will underestimate the expected value, because it equals (see here):
E(X) = Median(X)*e^(sigma^2/2), where sigma^2 is the variance of log(X).
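As a quick sanity check on this relation, here is a sketch that fits a lognormal to a 90% confidence interval and compares the resulting median (the geometric mean of the two percentiles) with the mean; the percentile values 1 and 100 are made up for illustration.

```python
import math

# 90% confidence interval for an illustrative lognormal quantity
# (the percentile values are invented for demonstration).
p5, p95 = 1.0, 100.0
z95 = 1.6448536269514722  # 95th percentile of the standard normal

# Parameters of the underlying normal: log(X) ~ N(mu, sigma^2)
mu = (math.log(p5) + math.log(p95)) / 2
sigma = (math.log(p95) - math.log(p5)) / (2 * z95)

median = math.exp(mu)               # equals the geometric mean of p5 and p95
mean = math.exp(mu + sigma**2 / 2)  # equals median * e^(sigma^2 / 2)

print(median)  # ~10: sqrt(1 * 100)
print(mean)    # ~26.6: more than double the median
```

With a 90% interval spanning two orders of magnitude, using the geometric mean as a point estimate understates the expected value by a factor of about 2.7.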
Here you mention that this "lognormal mean" can lead to extreme results, but I think that is a feature as long as we think the lognormal is modelling the right tail correctly. If we do not think so, we can still use the mean of:
Truncated lognormal distribution.
Minimum between a lognormal distribution and a maximum value (after which we think the lognormal no longer models the right tail well).
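The second option can be sketched with a quick Monte Carlo: sample the lognormal, cap each draw at the maximum value, and compare the capped mean with the analytic uncapped mean. The median, log-scale standard deviation and cap below are all illustrative assumptions.

```python
import math
import random

random.seed(0)

mu, sigma = math.log(10.0), 1.4  # illustrative: median 10, fairly heavy tail
cap = 100.0                      # value beyond which we distrust the tail

n = 200_000
raw = [random.lognormvariate(mu, sigma) for _ in range(n)]
capped = [min(x, cap) for x in raw]  # min(lognormal, maximum value)

analytic_mean = math.exp(mu + sigma**2 / 2)  # uncapped lognormal mean
mc_capped_mean = sum(capped) / n

print(analytic_mean)   # ~26.6
print(mc_capped_mean)  # noticeably smaller: the cap trims the right tail
```

The capped mean still sits well above the median of 10, so the adjustment tempers the tail without throwing away the asymmetry.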
Interval estimates are prone to personal bias. It's easy to create an interval estimate intuitively. When objectivity is important and the evidence base is sparse, point estimates are easier to form and are more transparent.
In my mind:
Being objective is about faithfully representing the information we have about reality, even if that means being more uncertain.
The evidence base being sparse suggests we are uncertain about what reality actually looks like, which means a faithful representation of it will more easily be achieved by intervals, not point estimates. For example, I think using interval estimates is much more important in the Drake equation than in the cost-effectiveness analyses of GiveWell's top charities.
One compromise to achieve transparency while maintaining the benefits of interval estimates is using pessimistic, realistic and optimistic point estimates. On the one hand, this may result in wider intervals, because the product of two 5th percentiles is rarer than a 5th percentile, so the pessimistic final estimate will be more pessimistic than its inputs. On the other hand, we can think of the wider intervals as accounting for structural uncertainty of the model.
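The claim that the product of two 5th percentiles is rarer than a 5th percentile can be checked analytically on the log scale. A small sketch, assuming two independent unit lognormal inputs (an illustrative choice):

```python
from math import sqrt
from statistics import NormalDist

# Two independent lognormal inputs; on the log scale each is N(0, 1)
# (the unit parameters are an illustrative assumption).
z05 = NormalDist().inv_cdf(0.05)  # ~ -1.645

# The product of the inputs' 5th percentiles has log-value 2 * z05.
# The product of the inputs is lognormal with log-scale sd sqrt(2).
p = NormalDist(0.0, sqrt(2)).cdf(2 * z05)

print(p)  # ~0.01: the "pessimistic product" is roughly a 1st percentile
```

So multiplying two marginal 5th percentiles lands near the 1st percentile of the product, which is why the final pessimistic estimate ends up more pessimistic than its inputs.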
Thanks for your feedback, Vasco. It's led me to make extensive changes to the post:
More analysis on the pros/cons of modelling with distributions. I argue that sometimes it's good that the crudeness of point-estimate work reflects the crudeness of the evidence available. Interval-estimate work is more honest about uncertainty, but runs the risk of encouraging overconfidence in the final distribution.
I include the lognormal mean in my analysis of means. You have convinced me that the sensitivity of lognormal means to heavy right tails is a strength, not a weakness! But the lognormal mean appears to be sensitive to the size of the confidence interval you use to calculate it, which means subjective methods are required to pick the size, introducing bias.
Overall I agree that interval estimation is better suited to the Drake equation than to GiveWell CEAs. But I'd summarise my reasons as follows:
The Drake equation really seeks to ask "how likely is it that we have intelligent alien neighbours?", but point-estimate methods answer the question "what is the expected number of intelligent alien neighbours?". With such high variability the expected number is virtually useless, but the distribution of this number lets us answer the original likelihood question. GiveWell CEAs probably have much less variation, so a point-estimate answer is relatively more useful.
Reliable research on the numbers that go into the Drake equation often doesn't exist, so it's not too bad to "make up" interval estimates to go into it. We know much more about the charities GiveWell studies, so made-up distributions (even those informed by reliable point estimates) are much less permissible.
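The contrast above can be made concrete with a Monte Carlo sketch of a Drake-style product of lognormal factors. All the medians and log-scale standard deviations below are invented for the demonstration, not taken from the literature; the point is only how much information the distribution carries beyond the point estimate.

```python
import math
import random

random.seed(1)

# Illustrative Drake-style model: each factor is a lognormal given by a
# (median, log-scale sd) pair. All numbers are made up for the demo.
factors = [(1.5, 0.2), (0.5, 0.5), (0.3, 1.0), (0.1, 1.5),
           (0.1, 1.5), (0.1, 1.0), (1000.0, 1.5)]

def sample_n():
    out = 1.0
    for median, sigma in factors:
        out *= random.lognormvariate(math.log(median), sigma)
    return out

point_estimate = math.prod(m for m, _ in factors)  # product of medians

draws = sorted(sample_n() for _ in range(100_000))
p_at_least_one = sum(d >= 1 for d in draws) / len(draws)

print(point_estimate)               # one number, hides all the spread
print(draws[5_000], draws[95_000])  # 90% interval spans orders of magnitude
print(p_at_least_one)               # the question Drake actually cares about
```

The point estimate (the product of medians) says almost nothing here: the 90% interval covers several orders of magnitude, while the probability of at least one neighbour is a direct answer to the likelihood question.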
You have convinced me that the sensitivity of lognormal means to heavy right tails is a strength, not a weakness!
Yes, but only as long as we think the heavy right tail is being accurately modelled! Jaime Sevilla has this post on which methods to use to aggregate forecasts.
Interval-estimate work is more honest about uncertainty, but runs the risk of encouraging overconfidence in the final distribution.
I think it is worth flagging that risk, but I would say:
In general, if a given method is more accurate, it seems reasonable to follow that method, everything else equal.
One can always warn about not overweighting results estimated with intervals.
Intuitively, there seems to be much higher risk of being overconfident about a point estimate than about a mean estimated with intervals together with a confidence interval. For example, regarding Toby Ord's best guess, given in Table 6.1 of The Precipice, for the existential risk from nuclear war between 2021 and 2120, I think it is easier to be overconfident about A than B:
A. 0.1 %.
B. 0.1 % (90 % confidence interval, 0.03 % to 0.3 %). Toby mentions that:
"There is significant uncertainty remaining in these estimates and they should be treated as representing the right order of magnitude – each could easily be a factor of 3 higher or lower".
But the lognormal mean appears to be sensitive to the size of the confidence interval you use to calculate it, which means subjective methods are required to pick the size, introducing bias.
Yes, for the same median, the wider the interval, the greater the mean. If one is having a hard time linking two given estimates to a confidence interval, one can try the narrowest and widest reasonable intervals, and see if the lognormal mean varies a lot.
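This sensitivity check can be sketched directly, using the 0.03% to 0.3% interval from the Ord example above as the narrow case. The widened 0.01% to 0.9% interval, chosen so that both intervals share the same median, is my own illustrative assumption.

```python
import math

Z95 = 1.6448536269514722  # 95th percentile of the standard normal

def lognormal_mean_from_90ci(p5, p95):
    """Mean of the lognormal fitted to a 90% confidence interval."""
    mu = (math.log(p5) + math.log(p95)) / 2
    sigma = (math.log(p95) - math.log(p5)) / (2 * Z95)
    return math.exp(mu + sigma**2 / 2)

# Narrow case: the 0.03% to 0.3% interval quoted in the text.
narrow = lognormal_mean_from_90ci(0.03, 0.3)
# Wide case: an invented interval with the same median (sqrt(p5 * p95)).
wide = lognormal_mean_from_90ci(0.01, 0.9)

print(narrow)  # ~0.12%: already above the 0.1% best guess
print(wide)    # ~0.24%: same median, roughly twice the mean
```

So stretching the interval by an order of magnitude on each side doubles the implied mean while leaving the median untouched; if the two candidate intervals give similar means, the choice of width matters little.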
We know much more about the charities GiveWell studies, so made-up distributions (even those informed by reliable point-estimates) are much less permissible.
I think people with knowledge about GiveWell's cost-effectiveness analyses would be able to come up with reasonable distributions. A point estimate is equivalent to assigning probability 1 to that estimate, and 0 to all other outcomes, so it is easy to come up with something better (although it may well not be worth the effort).
I think I have been trying to portray the point-estimate/interval-estimate trade-off as a difficult decision, but probably interval estimates are the obvious choice in most cases.
So I've re-done the "Should we always use interval estimates?" section to be less about pros/cons and more about exploring the importance of communicating uncertainty in your results. I have used the Ord example you mentioned.