Thanks for writing this! It’s always useful to get reminders of the sort of mistakes we can fail to notice even when they’re significant.
I also think it would be even more helpful to walk through how this mistake could play out in some real scenarios in the context of EA (even though such scenarios would naturally be less clear-cut and more complex).
Lastly, it might be worth noting the many other tools we have to represent random variables. Some options off the top of my head:
* Expectation & variance: Sometimes useful for normal distributions and other intuitive distributions (eg QALY per $ for many interventions at scale).
* Confidence intervals: Useful for many cases where the result is likely to be in a specific range (eg effect size for a specific treatment).
* Probabilities for specific outcomes or events: Sometimes useful for distributions with important anomalies (eg impact of a new organization), or when looking for specific combinations of multiple distributions (eg the probability that AGI is coming soon and also that current alignment research is useful).
* Full model of the distribution: Sometimes useful for simple / common distributions (all the examples that come to mind aren’t in the context of EA, oh well).
One small note: The examples are there to make the category clearer. These aren’t all cases where expected value is wrong / inappropriate to use. Specifically, for some of them, I think using expected value works great.
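To make the four options above concrete, here’s a minimal Python sketch using Monte Carlo samples. The lognormal model of “QALY per $1000” and every number in it are assumptions purely for illustration, not a claim about any real intervention:

```python
import random
import statistics

random.seed(0)

# Hypothetical model: cost-effectiveness (QALY per $1000) of an
# intervention, assumed lognormal for illustration only.
samples = [random.lognormvariate(0.0, 0.5) for _ in range(100_000)]

# 1. Expectation & variance -- a two-number summary.
mean = statistics.fmean(samples)
var = statistics.variance(samples)

# 2. A central 90% interval -- useful when the likely range matters.
qs = statistics.quantiles(samples, n=20)  # cut points at 5%, 10%, ..., 95%
interval = (qs[0], qs[-1])

# 3. Probability of a specific event -- here, a "flop" outcome
# (less than 0.5 QALY per $1000).
p_flop = sum(s < 0.5 for s in samples) / len(samples)

# 4. The full distribution is represented by the samples themselves;
# any of the summaries above can be recomputed from them on demand.
print(mean, var, interval, p_flop)
```

The point of the sketch is just that these are different views of the same underlying random variable, and which view is useful depends on the decision at hand.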
Hopefully, we’ll get there! It’ll be mostly Bayesian though :)