Forecasting extreme outcomes
This document explores and develops methods for forecasting extreme outcomes, such as the maximum of a sample of n independent and identically distributed random variables. I was inspired to write this by Jaime Sevilla’s recent post with research ideas in forecasting and, in particular, his suggestion to write an accessible introduction to the Fisher–Tippett–Gnedenko Theorem.
I’m very grateful to Jaime Sevilla for proposing this idea and for providing great feedback on a draft of this document.
Summary
The Fisher–Tippett–Gnedenko Theorem is similar to a central limit theorem, but for the maximum of random variables. Whereas central limit theorems tell us about what happens on average, the Fisher–Tippett–Gnedenko Theorem tells us what happens in extreme cases. This makes it especially useful in risk management, where we need to pay particular attention to worst-case outcomes. It could be a useful tool for forecasting tail events.
This document introduces the theorem, describes the limiting probability distribution and provides a couple of examples to illustrate the use (and misuse!) of the Fisher–Tippett–Gnedenko Theorem for forecasting. In the process, I introduce a tool that computes the distribution of the maximum of n iid random variables that follow a normal distribution centrally but with an (optional) right Pareto tail.
Key points:
The Fisher–Tippett–Gnedenko Theorem says (roughly) that if the maximum of n iid random variables (itself a random variable) converges in distribution as n grows to infinity, after suitable rescaling and shifting, then it must converge to a generalised extreme value (GEV) distribution (a formal statement follows this summary)
Use cases:
When we have lots of data, we should try to fit our data to a GEV distribution since this is the distribution that the maximum should converge to (if it converges)
When we have subjective judgements about the distribution of the maximum (e.g. a 90% credible interval and median forecast), we can use these to determine parameters of a GEV distribution that fits these judgements (a fitting sketch appears at the end of this section)
When we know or have subjective judgements about the distribution of the random variables we’re maximising over, the theorem can help us determine the distribution of the maximum of n such random variables for large n – but this can give very bad results when our assumptions or judgements are wrong
Limitations:
To get accurate forecasts about the maximum of n random variables based on the distribution of the underlying random variables, we need accurate judgements about the right tail of that underlying distribution, because the maximum will very likely be drawn from the tail, especially as n gets large
Even for data that is very well described by a normal distribution for typical values, normality can break down at the tails and this can greatly affect the resulting forecasts
I use the example of human height: naively assuming normality underestimates how extreme the tallest and shortest humans are because height is “only” normally distributed up to 2-3 standard deviations around the mean
Modelling the tail separately (even with quite a crude model) can improve forecasts
This simple tool might be good enough for forecasting purposes in many cases (a minimal sketch of the computation follows the input/output description below)
It assumes that the underlying r.v.s are iid and normally distributed up to k standard deviations above the mean and that there is a Pareto tail beyond this point
Inputs:
90% CI for the underlying r.v.s
n (the number of samples of the underlying random variables)
k (the number of SDs above the mean at which the Pareto tail starts); set this high if you don’t want a Pareto tail
Output: cumulative distribution function, approximate probability density function and approximate expectation of the maximum of n samples of the underlying random variables
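To make this concrete, here is a minimal sketch of how such a tool could be implemented in Python. This is my own reconstruction under stated assumptions, not the tool's actual code: in particular, the Pareto tail index alpha is an extra parameter I've introduced for illustration, and I stitch the tail onto the normal body by scaling it so the CDF is continuous at the junction point.

```python
import numpy as np
from scipy import stats

def underlying_cdf(x, mu, sigma, k, alpha):
    """CDF of a r.v. that is Normal(mu, sigma) up to t = mu + k*sigma,
    with a Pareto-type tail beyond t. The tail survival function is
    scaled so the CDF is continuous at t:
        S(x) = S_normal(t) * (t / x)**alpha   for x > t.
    Requires t > 0 (fine for quantities like heights in cm)."""
    x = np.asarray(x, dtype=float)
    t = mu + k * sigma
    body = stats.norm.cdf(x, mu, sigma)
    tail_surv = stats.norm.sf(t, mu, sigma) * (t / np.maximum(x, t)) ** alpha
    return np.where(x <= t, body, 1.0 - tail_surv)

def max_of_n(lo90, hi90, n, k, alpha=3.0):
    """Distribution of the max of n iid draws, given a central 90% interval
    (lo90, hi90) for the underlying r.v. Needs alpha > 1 for a finite mean."""
    z = stats.norm.ppf(0.95)                 # the 90% interval spans mu +/- z*sigma
    mu = 0.5 * (lo90 + hi90)
    sigma = (hi90 - lo90) / (2 * z)
    xs = np.linspace(mu - 4 * sigma, mu + 30 * sigma, 40001)
    cdf_max = underlying_cdf(xs, mu, sigma, k, alpha) ** n  # iid: P(max <= x) = F(x)^n
    pdf_max = np.gradient(cdf_max, xs)                      # finite-difference density
    mean_max = float(np.sum(xs * pdf_max) * (xs[1] - xs[0]))  # crude numerical mean
    return xs, cdf_max, pdf_max, mean_max
```

For example, `max_of_n(155, 185, n=1000, k=3)` gives a rough distribution for the largest of 1,000 draws from a distribution whose central 90% interval is 155–185 (numbers purely illustrative). Setting k very high effectively removes the Pareto tail; for heavy tails or very large n the fixed grid will truncate some tail mass, so the mean is only approximate.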
Request for feedback: I’m not an experienced forecaster and I don’t know what kind of information and tools would be most useful for forecasters. Let me know how this kind of work could be extended or adapted to be more useful!
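For reference, the formal statement behind the informal version above is as follows. Let $X_1, X_2, \ldots$ be iid random variables and let $M_n = \max(X_1, \ldots, X_n)$. If there exist sequences of constants $a_n > 0$ and $b_n$ such that

$$\Pr\!\left(\frac{M_n - b_n}{a_n} \le x\right) \to G(x) \quad \text{as } n \to \infty$$

for some non-degenerate distribution function $G$, then $G$ is a generalised extreme value distribution:

$$G(x) = \exp\!\left(-\left[1 + \xi \, \frac{x - \mu}{\sigma}\right]^{-1/\xi}\right) \quad \text{wherever } 1 + \xi \, \frac{x - \mu}{\sigma} > 0,$$

with the $\xi = 0$ case read as the limit $G(x) = \exp\!\left(-e^{-(x - \mu)/\sigma}\right)$ (the Gumbel case). The sign of $\xi$ picks out the three classical families: $\xi > 0$ gives the heavy-tailed Fréchet family and $\xi < 0$ the bounded-support (reversed) Weibull family.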
If you're short on time, I expect you'll get most of the value from this document by reading the informal statement of the Fisher–Tippett–Gnedenko Theorem, the overview of the generalised extreme value distribution, and the shortest and tallest people in the world example. After that, consider making a copy of the tool for forecasting the maximum of n random variables that follow normal distributions with Pareto tails and playing around with it (consulting this as needed).
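Finally, on the use case of fitting a GEV distribution to subjective judgements: here is a minimal sketch of how one might back out GEV parameters from a median and a 90% credible interval for the maximum, using scipy. The quantile values in the example are placeholders, and matching three quantiles with three free parameters won't always converge to a sensible fit, so treat this as a starting point rather than a recipe. Note that scipy parametrises the GEV shape as c = -xi.

```python
import numpy as np
from scipy import stats, optimize

def fit_gev_to_quantiles(q05, q50, q95):
    """Find GEV parameters whose 5%, 50% and 95% quantiles match the given
    judgements. Returns (c, loc, scale); scipy's shape convention is c = -xi."""
    probs = np.array([0.05, 0.50, 0.95])
    targets = np.array([q05, q50, q95])

    def residuals(params):
        c, loc, scale = params
        return stats.genextreme.ppf(probs, c, loc=loc, scale=scale) - targets

    # Crude starting guess: Gumbel-like shape, centred on the median.
    x0 = [0.0, q50, (q95 - q05) / 4]
    sol = optimize.least_squares(residuals, x0,
                                 bounds=([-np.inf, -np.inf, 1e-9], np.inf))
    return sol.x

# Placeholder judgements about the maximum: 5% -> 2.0, median -> 3.0, 95% -> 6.5
c, loc, scale = fit_gev_to_quantiles(2.0, 3.0, 6.5)
xi = -c  # convert to the xi convention used in the GEV formula above
```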