Wow, thanks a lot for the work, and for sharing your insights here. I'm really impressed you were able to get involved and contribute on such a massive scale!
Minor thing I stumbled on:
> reaching millions of educated readers with our arguments
If this is based on the upper bound of 20 million followers of the accounts that tweeted about the paper, I'm somewhat sceptical that more than 10% of those have actually read even one of your arguments. I'd expect that maybe 5% have read the specific tweet and 0.1% have gone more in depth on the paper?
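A quick back-of-the-envelope of what those rates would imply (the 20M figure is the Altmetric upper bound mentioned above; the conversion rates are just my guesses):

```python
# Implied reader counts under my guessed conversion rates.
followers = 20_000_000     # Altmetric upper bound for accounts that tweeted

tweet_read_rate = 0.05     # guess: fraction who actually read the tweet
in_depth_rate = 0.001      # guess: fraction who went more in depth on the paper

print(f"tweet readers: ~{followers * tweet_read_rate:,.0f}")   # ~1,000,000
print(f"in-depth readers: ~{followers * in_depth_rate:,.0f}")  # ~20,000
```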
Also, I wonder what you think of forecasting as another route to tie longtermists to the reality-mast. It seems much less effortful, but also won't provide nearly as much high-quality feedback as actually interacting with the systems you're interested in understanding better.
[Epistemic status: massive extrapolation from a few hard numbers.]
Yeah, that paper is just the big one, and that's just its Twitter audience; there are 7 papers, 100 or so major newspaper spots, and a dozen big Wiki spots. (E.g. the masks paper was on the BBC, ACX, NYT, Wired, Guardian, Mail, MR...) I've not actually estimated the total audience, but I would eyeball a 95% CI as something like [6m, 300m], using a weak operational definition of the audience: "people who read 1+ of our main claims presented as having good evidence".
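To make the eyeball concrete (a minimal sketch; treating [6m, 300m] as a lognormal 95% CI is my assumption, not how the estimate was produced):

```python
import math

# Fit a lognormal whose 2.5% / 97.5% quantiles are the eyeballed bounds.
low, high = 6e6, 300e6
z = 1.96  # standard normal quantile for a central 95% interval

mu = (math.log(low) + math.log(high)) / 2
sigma = (math.log(high) - math.log(low)) / (2 * z)

median = math.exp(mu)               # ~42m
mean = math.exp(mu + sigma**2 / 2)  # ~70m, pulled up by the right tail

print(f"median ~{median / 1e6:.0f}m, mean ~{mean / 1e6:.0f}m")
```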
As for in-depth readers: ~160k people downloaded the papers, 6k of which saved it to Mendeley, the poor sods. 20k deep readers sounds about right.
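One crude sanity check on the 20k (the geometric-mean heuristic here is mine, not the author's): deep readers should sit somewhere between Mendeley saves (very engaged) and raw downloads (weakly engaged).

```python
import math

downloads = 160_000     # people who downloaded the papers
mendeley_saves = 6_000  # people who saved them to Mendeley

# Geometric mean as a rough midpoint on a log scale.
print(f"~{math.sqrt(downloads * mendeley_saves):,.0f}")  # ~31,000
```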
Millions is probably a safe bet/lower bound: the majority won't be via direct Twitter reads, but via mainstream media using it in their writing.
With Twitter, we have a better overview in the case of our other research, on seasonality (still in review!). The Altmetric estimate is that it was shared by accounts with an upper bound of 13M followers. However, in this case, almost all the shares were due to people retweeting my summary. Per Twitter stats, it got 2M actual impressions. Given that the NPI research was shared and referenced more, it probably got >1M reads just on Twitter.
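Spelling out that extrapolation (applying the seasonality paper's impression rate to the NPI paper is my extension of the reasoning, not a measured number):

```python
# Impressions-per-follower from the seasonality paper, applied to the NPI one.
seasonality_followers = 13e6   # Altmetric upper bound
seasonality_impressions = 2e6  # from Twitter stats

rate = seasonality_impressions / seasonality_followers  # ~15%

npi_followers = 20e6           # Altmetric upper bound for the NPI paper
print(f"implied NPI impressions: ~{npi_followers * rate / 1e6:.0f}M")  # ~3M
```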
Re: forecasting (or bets). In a broad sense, I do agree. In practice, I'm a bit skeptical that a forecasting mindset is that good for generating ideas about "what actions to take". "Successful planning and strategy" is often something like "making a chain of low-probability events happen", which seems distinct from, or even in tension with, typical forecasting reasoning. Also, empirically, my impression is that forecasting skill can be broadly decomposed into two parts: building good models (or aggregates of other people's models), and converting those models into numbers. For most people, the "improving at converting non-numerical information into numbers" part initially has much better marginal returns (e.g. just do calibration training...), but I suspect it doesn't do that much for the "model feedback" part.
Thanks for the response, seems like a safe bet, yeah. :)
Re forecasting, "making low-probability events happen" is a very interesting framing, thanks! I'm still maybe somewhat more positive about forecasting:
- many questions involve the actions of highly capable agents, and therefore require at least some thinking in the direction of this framing
- the practice of deriving concrete forecasting questions from my models seems very valuable for my own thinking. Feedback from a generalist crowd about how likely some event is, seeing in the comments which variables they believe are relevant, and having some people post new info related to the question also seem fairly valuable, because you can easily miss important things