Agreed. Carl Shulman, around hour 1:02 of the 80,000 Hours podcast, even notes:
https://80000hours.org/podcast/episodes/carl-shulman-common-sense-case-existential-risks/
Rob Wiblin: I see. So because there’s such a clear motivation for even an altruistic person to exaggerate the potential risk from nuclear winter, then people who haven’t looked into it might regard the work as not super credible because it could kind of be a tool for advocacy more than anything.
Carl Shulman: Yeah. And there was some concern of that sort, that people like Carl Sagan, who was both an anti-nuclear and antiwar activist and bringing these things up. So some people, particularly in the military establishment, might have more doubt about when their various choices in the statistical analysis and the projections and assumptions going into the models, are they biased in this way? And so for that reason, I’ve recommended and been supportive of funding, just work to elaborate on this. But then I have additionally especially valued critical work and support for things that would reveal this was wrong if it were, because establishing that kind of credibility seemed very important. And we were talking earlier about how salience and robustness and it being clear in the minds of policymakers and the public is important.
Note an earlier exchange in the conversation demonstrating that Shulman was positioned to influence the funding decision for the Rutgers team from Open Philanthropy:
"Robert Wiblin: So, a couple years ago you worked at the Gates Foundation and then moved to the kind of GiveWell/Open Phil cluster that you’re helping now."
Notably, Reisner is at Los Alamos, part of the military establishment: they build nuclear weapons there. So both Reisner and Robock (from Rutgers) have their own biases.
Here’s a peer-reviewed perspective arguing that both sides of the nuclear winter debate are too extreme:
https://www.tandfonline.com/doi/pdf/10.1080/25751654.2021.1882772
I also recommend the Lawrence Livermore paper on the topic: https://www.osti.gov/biblio/1764313
It seems like a much less biased middle ground, and it generally shows that nuclear winter is still really bad: on the order of 1/2 to 1/3 as “bad” as the Rutgers team tends to say it is.
“we haven’t had examples where a huge amount of cognitive labour has been dumped on a scientific field and we’ve been able to observe how much progress in that field accelerates”
Well, Claude 3.5 and I can think of some examples that contradict that statement. These are Claude’s estimates:
The rise of citizen science and crowdsourcing in certain fields. For instance, projects like Galaxy Zoo in astronomy have allowed large numbers of amateur scientists to contribute to data analysis, accelerating progress in classifying galaxies.
Duration: Ongoing since 2007 (about 17 years)
Degree of influx: Substantial. Galaxy Zoo alone has involved over 250,000 volunteers.
Acceleration: Significant but focused. The original Galaxy Zoo project classified ~900,000 galaxies in less than a year, a task that would have taken years for professional astronomers. However, the acceleration is primarily in data processing rather than theoretical advancements.
Estimated acceleration: 10-20x faster for specific classification tasks, but perhaps only 2-3x acceleration for the field of galaxy morphology as a whole.

The influx of physicists and mathematicians into quantitative finance and economics in the 1980s and 1990s. This led to rapid developments in financial modeling and econometrics.
Duration: About 20 years (concentrated influx)
Degree of influx: Moderate. Estimated several thousand PhDs over this period.
Acceleration: Substantial. This influx led to the rapid development of complex financial models and the growth of quantitative trading.
Estimated acceleration: 5-10x in areas like options pricing and risk modeling. Overall acceleration in quantitative finance might be around 3-5x.

The growth of computer science departments in universities during the 1960s and 1970s, which led to an acceleration in theoretical computer science and algorithm development.
Duration: About 20 years
Degree of influx: Significant. The number of CS departments and graduates grew rapidly during this period.
Acceleration: Major. This period saw fundamental developments in algorithms, programming languages, and theoretical computer science.
Estimated acceleration: 5-8x in theoretical computer science and algorithm development. The overall field might have seen a 3-4x acceleration.
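One way to reconcile the task-level and field-level numbers in these estimates is an Amdahl's-law style calculation: if only a fraction of a field's progress is bottlenecked on the accelerated task, the field-wide speedup is far smaller than the task speedup. A minimal sketch (the 0.6 fraction and 15x task speedup are my own illustrative assumptions, not figures from the estimates above):

```python
def field_speedup(task_fraction: float, task_speedup: float) -> float:
    # Amdahl's law: only `task_fraction` of the field's progress is
    # accelerated by `task_speedup`; the rest proceeds at the old rate.
    return 1.0 / ((1.0 - task_fraction) + task_fraction / task_speedup)

# If, say, classification were 60% of the bottleneck and got a 15x
# speedup, the whole field would only accelerate by about 2.3x:
print(round(field_speedup(0.6, 15.0), 2))  # ~2.27
```

This is roughly why a 10-20x speedup on classification tasks can plausibly coexist with only a 2-3x speedup for galaxy morphology as a whole.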
I think it’s also interesting that open source language-model efforts and academia clearly have thousands of times more contributors, yet seem to make relatively limited progress compared to the top AI labs. The main reason, presumably, is the lack of compute for experiments and training. So that’s one reason to be less concerned about a major influx of cognitive skills with limited compute.