If I recall correctly, this paper by Tom Sittler also makes the point you paraphrased as “some reasonable base rate of x-risk means that the expected lifespan of human civilization conditional on solving a particular risk is still hundreds or thousands of years”, among others.
Ok, thanks, I now say “Prove that a certain nonrandom, non-Bayesian …”.
Thank you for posting this, I find it very interesting and useful to have discussions of this kind publicly available!
For now just one point, even though I don’t think it matters much for the high-level disagreement (in particular, I probably still disagree with Ben’s view on the impact of Google, Wikipedia etc.):
I don’t think current IT has had much of an effect by standard metrics of labour productivity, for example.
The context makes me think that maybe by “current IT” you specifically mean things like Facebook or Twitter that became big in the last 10 years. In that case, for all I know the quoted claim may well be correct. I’m not so sure if “current IT” includes e.g. the internet: I believe a prominent view in economics is that IT was a major cause of the US productivity growth resurgence in the 1990s to mid-2000s. For example:
David Romer’s popular textbook Advanced Macroeconomics (4th ed., p. 32) says:
Until the mid-1990s, the rapid technological progress in computers and their introduction in many sectors of the economy appear to have had little impact on aggregate productivity. In part, this was simply because computers, although spreading rapidly, were still only a small fraction of the overall capital stock. And in part, it was because the adoption of the new technologies involved substantial adjustment costs. The growth-accounting studies find, however, that since the mid-1990s, computers and other forms of information technology have had a large impact on aggregate productivity.
Gordon (2014, p. 6), who in general argues against techno-optimists and predicts a growth slowdown, describes 1996-2004 as “the productivity revival associated with the invention of e-mail, the internet, the web, and e-commerce”.
More broadly, the sense I got from the literature is that many people would be comfortable endorsing claims like (i) innovation has been and still is a major driver of productivity growth (say, responsible for >10% of productivity growth), and (ii) within the last 10 years a significant share of innovation (weighted by impact on productivity, say again >10% of the effect) has happened in IT. (Admittedly, the arguments behind similar claims often seemed a bit handwavy to me and not as data-driven as I’d like.) So even if productivity growth has slowed down considerably and will remain low, IT would be responsible for a significant part of what little growth we have, and the absolute effect wouldn’t be more than one order of magnitude smaller than typical effects of technology on productivity.
I think all of this is consistent with, e.g., the views that IT has increased productivity less than past innovations such as the steam engine, or that most people overestimate the effect of IT. I’d also guess it’s consistent with Cowen’s and Thiel’s views, but I haven’t read the books by them that you mentioned.
(I said “a prominent view” because I don’t have a good sense of whether it’s a majority view. In particular, I wasn’t able to find a relevant IGM Forum survey of economists. My overall impression is based on having engaged on the order of 10 hours with the relevant literature, albeit in an only moderately systematic way, and I don’t have a background in economics. I think there’s a good chance you’re aware of the above points, and I’m partly writing this comment to see if you or someone else can spot a flaw in my current impression.)
On your first point: I agree that the paper just shows that, as you wrote, “if your decision strategy is to just choose the option you (naively) expect to be best, you will systematically overestimate the value of the selected option”.
I also think that “just choose the option you (naively) expect to be best” is an example of a “nonrandom, non-Bayesian decision strategy”. Now, the first sentence you quoted might reasonably be read to make the stronger claim that all nonrandom, non-Bayesian decision strategies have a certain property. However, the paper actually just shows that one of them does.
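To illustrate that specific point, here is a minimal Monte Carlo sketch (my own illustration with made-up numbers, not taken from the paper): even when all options are objectively equally good, the option that naively looks best has, on average, an estimate above its true value.

```python
import random

random.seed(0)
n_options, n_trials = 10, 20000  # hypothetical numbers for illustration
true_value = 1.0                 # every option is actually equally good
noise_sd = 0.5                   # noise in the value estimates

bias = 0.0
for _ in range(n_trials):
    # Noisy estimates of each option's value.
    estimates = [random.gauss(true_value, noise_sd) for _ in range(n_options)]
    best_estimate = max(estimates)        # naively pick the option that looks best
    bias += best_estimate - true_value    # how much we overestimate the chosen option

print(round(bias / n_trials, 2))  # average overestimate of the selected option; clearly > 0
```

The average printed is well above zero: selecting on a noisy estimate guarantees the selected option's estimate is biased upward, even though each individual estimate is unbiased.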
Is this what you were pointing to? If so, I’ll edit the quoted sentence accordingly, but I first wanted to check if I understood you correctly.
In any case, thank you for your comment!
On your second point: I think you’re right, and that’s a great example. I’ve added a link to your comment to the post.
Hi Aaron, thank you for the suggestion. I agree that posting a more extensive summary would help readers decide if they should read the whole thing, and I will strongly consider doing so in case I ever plan to post similar things. For this specific post, I probably won’t add a summary because my guess is that in this specific case the size of the beneficial effect doesn’t justify the cost. (I do think extremely few people would use their time optimally by reading the post, mostly because it has no action-guiding conclusions and a low density of generally applicable insights.)

I’m somewhat concerned that more people read this post than would be optimal just because there’s some psychological pull toward reading whatever you clicked on, and that I could reduce the amount of time spent suboptimally by having a shorter summary here, with accessing the full text requiring an additional click. However, my hunch is that this harmful effect is sufficiently small. (Also, the cost to me would be unusually high because I have a large ugh field around this project and would really like to avoid spending any more time on it.)

But do let me know if you think replacing this text with a summary is clearly warranted, and thank you again for the suggestion!
do you have a vague impression of when randomisation might be a big win purely by reducing costs of evaluation?
Not really, I’m afraid. I’d expect that, due to the risk of inadvertent negative impacts and the large improvements available from weeding out obviously suboptimal options, a pure lottery will rarely be a good idea. How much effort to expend beyond weeding out clearly suboptimal options seems to me to depend on contextual information specific to the use case. I’m not sure how much there is to be said in general except for platitudes along the lines of “invest time into explicit evaluation until the marginal value of information has diminished sufficiently”.