I recently ran a quick Fermi workshop, and have been asked for notes several times since. I’ve realized that it’s not that hard for me to post them, and it might be relatively useful for someone.
Quick summary of the workshop
What is a Fermi estimate?
Walkthrough of the main steps for Fermi estimation
Notice a question
Break it down into simpler sub-questions to answer first
Don’t stress about the details when estimating answers to the sub-questions
Consider looking up some numbers
Put everything together
Sanity check
Different models: an example
Examples!
Discussion & takeaways
Resources
Guesstimate is a great website for Fermi estimation (although you can also use scratch paper or spreadsheets if that’s what you prefer)
This is a great post on Fermi estimation
In general, you can look at a bunch of posts tagged “Fermi Estimation” on LessWrong or look at the Forum wiki description
Disclaimers:
I am not a Fermi pro, nor do I have any special qualifications that would give me credibility :)
This was a short workshop, aimed mostly at people who had done few or no Fermi estimates before
***************
I attended and thoroughly enjoyed your workshop! Thanks for posting these notes.
Thanks for coming to the workshop, and for writing this note!
I don’t see mention of quantifying the uncertainty in each component and aggregating this (usually via simulation). Is this not fundamental to Fermi? (Is it only a special version of Fermi, the “Monte Carlo” version?)
Uncertainty is super important, and it’s really useful to flag. It’s possible I should have brought it up more during the workshop, and I’ll consider doing that if I ever run something similar.
However, I do think part of the point of a Fermi estimate is to be easy and quick.
In practice, the way I’ll sometimes incorporate uncertainty into my Fermis is by running the numbers in three ways:
my “best guess” for every component (2 hours of podcast episode, 100 episodes),
the “worst (reasonable) case” for every component (maybe only 90 episodes have been produced, and they’re only 1.5 hours long, on average), and
the “best case” for every component (150 episodes, average of 3 hours).
Then this still takes very little time and produces a reasonable range: ~135 to 450 hours of podcast (with a best guess of 200 hours). (Realistically, if I were taking enough care to run the numbers 3 times, I’d probably put more effort into the “best guess” numbers I produced.) I also sometimes do something similar with a spreadsheet/more careful Fermi.
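To make the arithmetic concrete, here’s a minimal Python sketch of the three-scenario approach, using the illustrative podcast numbers above (the scenario layout and variable names are just my framing):

```python
# Three-scenario Fermi estimate: run the same simple model with
# "worst (reasonable)", "best guess", and "best case" inputs.
# Numbers are the illustrative podcast figures from above.

scenarios = {
    "worst (reasonable) case": {"episodes": 90,  "hours_per_episode": 1.5},
    "best guess":              {"episodes": 100, "hours_per_episode": 2.0},
    "best case":               {"episodes": 150, "hours_per_episode": 3.0},
}

for name, s in scenarios.items():
    total_hours = s["episodes"] * s["hours_per_episode"]
    print(f"{name}: ~{total_hours:.0f} hours of podcast")

# Prints roughly 135, 200, and 450 hours, matching the range above.
```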
I could do something more formal with confidence intervals and the like, and it’s truly possible I should be doing that. But I really think there’s a lot of value in just scratching something rough out on a sticky note during a conversation to e.g. see if a premise that’s being entertained is worth the time, or to see if there are big obvious differences that are being missed because the natural components being considered are clunky and incompatible (before they’re put together to produce the numbers we actually care about).
Note that tools like Causal and Guesstimate make including uncertainty pretty easy and transparent.
“I really think there’s a lot of value in just scratching something rough out on a sticky note during a conversation to e.g. see if a premise that’s being entertained is worth the time”
I agree, but making uncertainty explicit makes it even better. (And I think it’s an important epistemic/numeracy habit to cultivate and encourage.) So I think if you’re giving a workshop, you should make this part of it, at least to some extent.
“I could do something more formal with confidence intervals and the like”
I think this would be worth digging into. It can make a big difference, and it’s a mode we should be moving towards, IMO; it should be at the core of our teaching and learning materials. And there are ways of doing this that are not so challenging.
(Of course, maybe in this particular podcast example it’s not so important, but in general I think it’s VERY important.)
“Worst case all parameters” is very unlikely. So is “best case everything”.
See the book “How to Measure Anything” for a discussion. Also the Causal and Guesstimate apps.
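To illustrate that point, here’s a rough Monte Carlo sketch in the spirit the commenters suggest, assuming (purely for illustration) triangular distributions anchored at the worst / best-guess / best numbers from the podcast example. Because independent components rarely hit their extremes at the same time, the simulated 90% interval comes out noticeably narrower than the 135–450 “all worst” to “all best” range:

```python
import random

# Monte Carlo version of the podcast Fermi estimate: sample each
# component from a rough distribution instead of pinning it at an
# extreme, then look at the distribution of the product.
# The triangular parameters below are just the worst / best-guess /
# best numbers from the example, treated as (low, high, mode).

random.seed(0)
N = 100_000

samples = sorted(
    random.triangular(90, 150, 100) * random.triangular(1.5, 3.0, 2.0)
    for _ in range(N)
)

p5, p50, p95 = (samples[int(N * q)] for q in (0.05, 0.50, 0.95))
print(f"90% interval: ~{p5:.0f} to ~{p95:.0f} hours (median ~{p50:.0f})")
# Noticeably narrower than 135-450, since every component sitting at
# its extreme simultaneously is much less likely than any one alone.
```

As I understand it, tools like Guesstimate do essentially this kind of sampling for you automatically.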