My intuitions point the other way with regards to point estimates vs. distributions. Distributions seem like the correct format here: they allow for value-of-information calculations and sensitivity analysis, they highlight disagreements which people wouldn't notice with point estimates, and they are easier to combine. The bottom line can also change when using distributions rather than point estimates, e.g., as in here.
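One concrete way the bottom line can change, sketched as a toy Monte Carlo (the model and numbers are purely illustrative assumptions, not from the discussion): if the estimate involves a nonlinear step like division, plugging in point estimates gives a different answer than propagating the distribution, by Jensen's inequality.

```python
import random

random.seed(0)

# Hypothetical toy model: grant value = impact / cost.
# Point-estimate version: plug in the means.
impact_mean, cost_mean = 10.0, 2.0
point_estimate = impact_mean / cost_mean  # 5.0

# Distribution version: keep impact fixed but treat cost as uncertain,
# Uniform(0.5, 3.5), which has the same mean of 2.0.
n = 200_000
samples = [impact_mean / random.uniform(0.5, 3.5) for _ in range(n)]
dist_mean = sum(samples) / n  # ≈ 6.5, not 5.0: E[impact/cost] > impact/E[cost]
```

Here the distributional bottom line is roughly 30% higher than the point-estimate one, even though every input has the same mean.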
That said, they do have a learning curve, and I agree with you that they add complexity/upfront cost.
Agreed that there are some contexts where there’s more value in getting distributions, like with the Fermi paradox.
Or, before the grants are given out, you could ask people to give an ex ante distribution for “what will be your ex post point estimate of the value of this grant?” That feeds directly into VOI calculations, and it is clearly defined what the distribution represents. But note that it requires focusing on point estimates ex post.
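The "ex ante distribution over an ex post point estimate" setup can be scored with the logarithmic scoring rule once the point estimate comes in. A minimal sketch, with hypothetical buckets and probabilities (none of these numbers are from the discussion):

```python
import math

# Hypothetical ex ante forecast: probabilities over buckets of
# "what will be my ex post point estimate of this grant's value?"
buckets = ["<0", "0-10", "10-100", ">100"]
forecast = [0.1, 0.3, 0.4, 0.2]  # assumed numbers; must sum to 1

def log_score(forecast, outcome_index):
    """Logarithmic scoring rule: the reward is the log of the probability
    assigned to the bucket the ex post point estimate landed in.
    It is strictly proper, so honest reporting maximizes expected score."""
    return math.log(forecast[outcome_index])

# Ex post, suppose the evaluator's point estimate is 42 -> bucket "10-100".
score = log_score(forecast, buckets.index("10-100"))  # log(0.4)
```

The spread of each forecaster's ex ante distribution is then directly usable for VOI: a wide distribution signals that the ex post evaluation would resolve a lot of uncertainty.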
> Or, before the grants are given out, you could ask people to give an ex ante distribution for “what will be your ex post point estimate of the value of this grant?” That feeds directly into VOI calculations, and it is clearly defined what the distribution represents. But note that it requires focusing on point estimates ex post.
Aha, but you can also do this when the final answer is itself a distribution. In particular, you can score the initial distribution against the final one using the KL-divergence between them, which is also a proper scoring rule.
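A small sketch of that idea for discrete distributions (the bucket probabilities are made-up examples): score an ex ante forecast q against the final distribution p by KL(p‖q), where lower is better. Since the expected log score equals −H(p) − KL(p‖q), minimizing KL is equivalent to maximizing expected log score, which is what makes it proper.

```python
import math

def kl_divergence(p, q):
    """KL(p || q) over a shared discrete support.
    Assumes q[i] > 0 wherever p[i] > 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical final (ex post) distribution p, and two ex ante
# forecasts over the same three buckets.
p  = [0.20, 0.50, 0.30]
q1 = [0.25, 0.45, 0.30]  # close to p  -> small divergence (~0.008 nats)
q2 = [0.70, 0.20, 0.10]  # far from p  -> larger divergence (~0.54 nats)
```

Comparing `kl_divergence(p, q1)` with `kl_divergence(p, q2)` then ranks the closer forecast as better, with a perfect score of 0 when the forecast matches the final distribution exactly.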
More generally, I think there is a difference between what would have been best for this analysis (and you might be right that point estimates would have been better there) and what EA/longtermism should be aiming to have, which I think is more uncertain estimates in the shape of distributions.