[Stats4EA] Uncertain Probabilities
This is a short post on an idea from statistics and its relevance to EA. The initial post highlighted the fact that expectations cannot always be sensibly thought of as representative values from distributions.
Probability served three ways
Suppose an event X is reported to have some probability p. We’re all aware at this point that, in practice, that p comes from some fitted model, even if the model is fitted inside someone’s head. This means it comes with uncertainty. However, it can be difficult to visualize what uncertainty in a probability means.
Luckily, we can also model probabilities directly. A sample from a Beta distribution can be used as the parameter of a Bernoulli coin toss. The following three Beta distributions all have the same expectation, 1/2 (a sketch for reproducing them follows the list below).
The interpretation here is:
1. The probability is either very high or very low—we don’t know which.
2. The probability is uniformly distributed—it could be anywhere.
3. We’re fairly sure the probability is right in the middle.
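The original figures are not reproduced here, but the three shapes are easy to recover. As a minimal sketch, Beta(0.1, 0.1), Beta(1, 1), and Beta(20, 20) are plausible stand-ins for distributions 1, 2, and 3; the exact parameters are my assumption, chosen to match the shapes above and the probabilities quoted later, and all three have mean 1/2.

```python
# A sketch of three Beta distributions with the same mean (1/2) but very
# different shapes. The parameters are assumed, not taken from the original.
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

params = {
    "1. very high or very low": (0.1, 0.1),  # bimodal: mass near 0 and 1
    "2. could be anywhere": (1.0, 1.0),      # uniform on [0, 1]
    "3. right in the middle": (20.0, 20.0),  # concentrated around 1/2
}

x = np.linspace(0.001, 0.999, 500)
fig, axes = plt.subplots(1, 3, figsize=(12, 3), sharex=True)
for ax, (label, (a, b)) in zip(axes, params.items()):
    dist = stats.beta(a, b)
    ax.plot(x, dist.pdf(x))
    ax.set_title(f"{label}\nBeta({a:g}, {b:g}), mean = {dist.mean():g}")
    ax.set_xlabel("p")
plt.tight_layout()
plt.show()
```

Despite looking nothing alike, every panel reports the same mean.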
Suppose we encounter in some discussion a point estimate of a probability, for example E[p] = 1/2. Or perhaps the idea of expectation is not stated explicitly, but no other uncertainty information is given. It is natural to wonder: which flavour of p are we talking about?
Implication for planning
Suppose a highly transmissible new disease infallibly kills some subset of humans. Or malevolent aliens. Or whatever is salient for the reader. Interpret the p in our example as the probability that an arbitrary human is in the affected group a year from now.
Under distribution 1, there is roughly a 32% probability that more than 99% of people are affected.
Under distribution 3, there is roughly a 47% probability that the proportion of the population affected is between 45% and 55%.
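Under the assumed parameters from the sketch above, both figures can be checked in a couple of lines (again, Beta(0.1, 0.1) and Beta(20, 20) are stand-ins, not confirmed parameters from the original post):

```python
# Check the two quoted probabilities under the assumed parameters.
from scipy import stats

dist1 = stats.beta(0.1, 0.1)  # assumed stand-in for distribution 1
dist3 = stats.beta(20, 20)    # assumed stand-in for distribution 3

# P(p > 0.99) under distribution 1, via the survival function.
print(dist1.sf(0.99))                     # ~0.32
# P(0.45 < p < 0.55) under distribution 3, via a CDF difference.
print(dist3.cdf(0.55) - dist3.cdf(0.45))  # ~0.48, close to the quoted 47%
```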
I’m going to baldly assert that knowing which distribution we face should alter our response to it, despite the coincidence of expectations. Which distribution represents the worst x-risk? Which would it be easiest to persuade people to take action on?