I actually think the EA world has been pretty good epistemically on winter: appropriately humble and exploratory, mostly funding research to work out how big a problem it is, and not basing big claims on (possibly) unsettled science. The argument for serious action on reducing nuclear risk doesn’t rely on claims about nuclear winter—though nuclear winter would really underline its importance. The Rethink Priorities report you critique talks at length about the debate over winter, which is great. See also the 80,000 Hours problem profile, which is similarly cautious and hedged.
The EA world has been the major recent funder of research on nuclear winter: OpenPhil in 2017 and 2020, perhaps Longview, and soon FLI. The research has advanced considerably since 2016. Indeed, most of the research ever published on nuclear winter has been published in the last few years, using the latest climate modelling. The most recent papers are getting published in Nature. I would disagree that there’s a “reliance on papers that have a number of obvious flaws”.
Wait. OpenPhil gave money to Toon and Robock? Wow. If I’d known that, I would have written a very sharp criticism of that particular decision.
>Indeed, most of the research ever published on nuclear winter has been published in the last few years, using the latest climate modelling.
The problem isn’t the climate modeling. The problem is that one of the inputs to the models—the soot estimate—is wrong by, conservatively, a factor of 50.
>The most recent papers are getting published in Nature. I would disagree that theres a “reliance on papers that have a number of obvious flaws”.
Peer review is a useful process, but not a perfect one—hence the existence of the replication crisis. In this case, there are a couple of papers that keep popping up in the more recent literature as the source for soot estimates that are extremely bad. But a typical peer reviewer for Nature would have no reason to critique those papers, and doesn’t have the expertise to realize how bonkers some of the assumptions in them are.