Millions is probably a safe bet/lower bound: the majority won’t come via direct twitter reads, but via mainstream media using it in their writing.
With twitter, we have a better overview in the case of our other research on seasonality (still in review!). Altmetric estimates it was shared by accounts with an upper bound of 13M followers. However, in this case, almost all the shares were due to people retweeting my summary. Per twitter stats, it got 2M actual impressions. Given that the NPI research was shared and referenced more, it probably got more than 1M reads just on twitter.
Re: forecasting (or bets). In a broad sense, I do agree. In practice I’m a bit skeptical that a forecasting mindset is that good for generating ideas about “what actions to take”. “Successful planning and strategy” is often something like “making a chain of low-probability events happen”, which seems distinct from, or even in tension with, typical forecasting reasoning. Also, empirically, my impression is that forecasting skill can be broadly decomposed into two parts—building good models / aggregates of other people’s models, and converting those models into numbers. For most people, the “improving at converting non-numerical information into numbers” part initially has much better marginal returns (e.g. just do calibration training...), but I suspect it doesn’t do that much for the “model feedback”.
Thanks for the response, seems like a safe bet, yeah. :)
Re forecasting, “making low-probability events happen” is a very interesting framing, thanks! I’m still maybe somewhat more positive about forecasting:
many questions involve the actions of highly capable agents and therefore require at least some thinking in the direction of this framing
the practice of deriving concrete forecasting questions from my models seems very valuable for my own thinking. Getting feedback from a generalist crowd about how likely some event is, seeing in the comments which variables they believe are relevant, and having some people post new information related to the question seems fairly valuable too, because you can easily miss important things