Uncertainty is super important, and it’s really useful to flag. It’s possible I should have brought it up more during the workshop, and I’ll consider doing that if I ever run something similar.
However, I do think part of the point of a Fermi estimate is to be easy and quick.
In practice, the way I’ll sometimes incorporate uncertainty into my Fermis is by running the numbers in three ways:
my “best guess” for every component (2 hours of podcast episode, 100 episodes),
the “worst (reasonable) case” for every component (maybe only 90 episodes have been produced, and they’re only 1.5 hours long, on average), and
the “best case” for every component (150 episodes, average of 3 hours).
Then this still takes very little time and produces a reasonable range: ~135 to 450 hours of podcast (with a best guess of 200 hours). (Realistically, if I were taking enough care to run the numbers 3 times, I’d probably put more effort into the “best guess” numbers I produced.) I also sometimes do something similar with a spreadsheet/more careful Fermi.
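For concreteness, here’s a minimal sketch of that three-pass calculation in Python (the numbers are just the podcast figures above):

```python
# Three-pass Fermi estimate: run the same multiplication under a
# "worst (reasonable) case", "best guess", and "best case" scenario.
scenarios = {
    "worst (reasonable) case": {"episodes": 90, "hours_per_episode": 1.5},
    "best guess": {"episodes": 100, "hours_per_episode": 2.0},
    "best case": {"episodes": 150, "hours_per_episode": 3.0},
}

for name, s in scenarios.items():
    total_hours = s["episodes"] * s["hours_per_episode"]
    print(f"{name}: ~{total_hours:.0f} hours of podcast")

# worst (reasonable) case: ~135 hours of podcast
# best guess: ~200 hours of podcast
# best case: ~450 hours of podcast
```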
I could do something more formal with confidence intervals and the like, and it’s truly possible I should be doing that. But I really think there’s a lot of value in just scratching something rough out on a sticky note during a conversation to e.g. see if a premise that’s being entertained is worth the time, or to see if there are big obvious differences that are being missed because the natural components being considered are clunky and incompatible (before they’re put together to produce the numbers we actually care about).
Note that tools like Causal and Guesstimate make including uncertainty pretty easy and transparent.
I really think there’s a lot of value in just scratching something rough out on a sticky note during a conversation to e.g. see if a premise that’s being entertained is worth the time
I agree, but making uncertainty explicit makes it even better. (And I think it’s an important epistemic/numeracy habit to cultivate and encourage.) So if you’re giving a workshop, I think you should make this part of it, at least to some extent.
I could do something more formal with confidence intervals and the like
I think this would be worth digging into. It can make a big difference, and it’s a mode we should be moving towards IMO; it should be at the core of our teaching and learning materials. And there are ways of doing this that are not so challenging.
(Of course, maybe in this particular podcast example it is not so important, but in general I think it’s VERY important.)
“Worst case all parameters” is very unlikely. So is “best case everything”.
See the book “How to Measure Anything” for a discussion, and also the Causal and Guesstimate apps.
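For what it’s worth, here’s a minimal sketch of that more formal approach, assuming illustrative triangular distributions for the two podcast components (the specific distributions and the Monte Carlo setup are my own assumptions; this is roughly what Guesstimate-style tools do under the hood):

```python
import random

# Minimal Monte Carlo sketch: treat each component as a distribution
# rather than a point estimate, then look at percentiles of the product.
# The triangular distributions below are illustrative assumptions,
# not figures from the discussion above.
random.seed(0)

def sample_total_hours():
    episodes = random.triangular(90, 150, 100)            # low, high, mode
    hours_per_episode = random.triangular(1.5, 3.0, 2.0)  # low, high, mode
    return episodes * hours_per_episode

samples = sorted(sample_total_hours() for _ in range(10_000))
p5, p50, p95 = (samples[int(q * len(samples))] for q in (0.05, 0.50, 0.95))
print(f"5th percentile:  ~{p5:.0f} hours")
print(f"median:          ~{p50:.0f} hours")
print(f"95th percentile: ~{p95:.0f} hours")
```

The 5th–95th percentile range comes out noticeably narrower than the “worst case everything” to “best case everything” span, which is exactly the point: all the components rarely land at their extremes simultaneously.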