Hi John
Thanks so much for this. Did any of the reviewers (Peter Watson, Goodwin Gibbins, James Ozden perhaps?) make comments on the overall report, i.e. your methodology, your choices of areas of inquiry, etc.? As this is my major criticism of your work, I would really love to see the reviewers' comments on your overall methodology, the structure of the report, etc.
Best
Gideon
No, I didn't get any of that. I don't want to put words in their mouths, but Peter overall seemed very positive. I'm less sure what Goodwin and James thought, but they didn't say anything massively negative, though perhaps they thought it without saying so.
“I don’t want to put words in their mouths, but Peter overall seemed very positive”
Speaking as Peter, just in case this should come back to bite me if misinterpreted: I could give an informed review of certain physical climate science aspects, and the report seems to capture those well. I am positive about the rest as an interesting and in-depth piece of scholarship into interesting questions, but I can't vouch for it as an expert :-)
Would you suggest the depth of your feedback was the depth of peer review? And am I correct in saying, therefore, that you didn't really review the overall methodology used, etc.?
I'd say the depth of review was similar to peer review, yes, though it is true that publication was not conditional on the peer reviewers okaying what I had written. As mentioned, the methodology was reviewed, yes. So, this is my view, having taken on significant expert input.
A natural question is whether my report should be given less weight than, e.g., a peer-reviewed paper in a prominent journal. I think, as a rule, a good approach is to start by getting a sense of what the weight of the literature says, and then to explore the substantive arguments made. For the usual reasons, we should expect any randomly selected paper to be false. Papers that make claims far outside the consensus position and get published in prominent journals are especially likely to be false. There is also scope for certain groups of scientists to review one another's papers, such that bad literatures can snowball.
This isn't to say that any random person writing about climate change will be better than a random peer-reviewed paper. But I think there are reasons to put more weight on the views of someone who has good epistemics (I'm not saying this is true of me, but one might think it is true of some EA researchers) and who is actually talking about the thing we are interested in, i.e. the longtermist import of climate change. Most papers just aren't focusing on that, but will use similar terminology. E.g. there is a paper by Xu and Ramanathan which says that climate change is an existential risk, but uses that term in a completely different way to EAs.
I will give some examples of the flaws of the traditional peer review process as applied to some papers on the catastrophic side of things.
1. A paper that is often brought up in climate catastrophe discussions is Steffen et al (2018), the 'Hothouse Earth' paper. That paper has now been cited more than 2,000 times. For reasons I discuss in the report, I think it is surprising that the paper was published, and the IPCC also disagrees with it.
2. The Kemp et al 2022 PNAS paper (also written by many planetary boundaries people) was peer reviewed, but also contains several errors.
For instance, it says “Yet, there remain reasons for caution. For instance, there is significant uncertainty over key variables such as energy demand and economic growth. Plausibly higher economic growth rates could make RCP8.5 35% more likely (27).”
The cite here in note (27) is to Christensen et al (2018), which actually says "Our results indicate that there is a greater than 35% probability that emissions concentrations will exceed those assumed in RCP8.5." I.e. their finding is about the absolute probability of exceeding RCP8.5, not about a relative increase in the likelihood of RCP8.5.
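To make the relative-versus-absolute distinction concrete, here is a minimal sketch; the 20% baseline is a purely hypothetical number, chosen only to illustrate the arithmetic:

```python
# Hypothetical baseline probability of RCP8.5, purely for illustration.
baseline = 0.20

# Kemp et al's phrasing, "35% more likely", is a relative increase:
relative_reading = baseline * 1.35   # 0.27, i.e. 27%

# Christensen et al's actual finding is an absolute probability,
# independent of any baseline:
absolute_reading = 0.35              # greater than 35%

print(f"Relative reading:  {relative_reading:.0%}")   # 27%
print(f"Absolute finding: >{absolute_reading:.0%}")   # >35%
```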
Another example: “While an ECS below 1.5 °C was essentially ruled out, there remains an 18% probability that ECS could be greater than 4.5 °C (14).”
The cite here is to the entire WG1 IPCC report (not that useful for checking, but that aside...). The latest IPCC report gives "a best estimate of equilibrium climate sensitivity of 3°C, with a very likely range of 2°C to 5°C. The likely range [is] 2.5°C to 4°C". The IPCC says: "Throughout the WGI report and unless stated otherwise, uncertainty is quantified using 90% uncertainty intervals. The 90% uncertainty interval, reported in square brackets [x to y], is estimated to have a 90% likelihood of covering the value that is being estimated. The range encompasses the median value, and there is an estimated 10% combined likelihood of the value being below the lower end of the range (x) and above its upper end (y). Often, the distribution will be considered symmetric about the corresponding best estimate, but this is not always the case. In this Report, an assessed 90% uncertainty interval is referred to as a 'very likely range'. Similarly, an assessed 66% uncertainty interval is referred to as a 'likely range'."
So, the 66% CI is 2.5°C to 4°C and the 90% CI is 2°C to 5°C. If the distribution is symmetric, this means there is a 17% chance of >4°C and a 5% chance of >5°C. It's unclear whether the distribution is symmetric (the IPCC does not say), but if it is, then the '18% chance of >4.5°C' claim in Climate Endgame is wrong, since P(>4.5°C) must be smaller than P(>4°C) = 17%. So a key claim in that paper, about the main variable of interest in climate science, cannot be inferred from the given reference.
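As a minimal sketch of that arithmetic, assuming the IPCC intervals place equal probability in each tail (the symmetric case discussed above):

```python
# Tail probabilities implied by the AR6 intervals, assuming equal-probability tails.
p_above_4c = (1 - 0.66) / 2   # 0.17: upper tail outside the 66% "likely" range (2.5-4C)
p_above_5c = (1 - 0.90) / 2   # 0.05: upper tail outside the 90% "very likely" range (2-5C)

# Since 4 < 4.5 < 5 and P(ECS > x) falls as x rises, P(ECS > 4.5C)
# must lie strictly between 5% and 17%, ruling out the quoted 18%.
print(f"P(ECS > 4C) = {p_above_4c:.0%}")   # 17%
print(f"P(ECS > 5C) = {p_above_5c:.0%}")   # 5%
```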
3. Jehn et al have published two papers cited in Kemp et al (2022), one of which says that "More likely higher end warming scenarios of 3 °C and above, despite potential catastrophic impacts, are severely neglected." This is just not true, but nevertheless made it through peer review. Almost every climate impact study reports impacts at 4.4°C; there is barely a single chart in the entire IPCC impacts report that does not include that level of warming. We can perhaps quibble over what 'severely neglected' means, but it doesn't mean 'shown in every single chart in the IPCC climate impacts book'. It is surprising that this got through peer review.
**
As I have said, these are just single studies. I am consistently impressed by how good the IPCC is at reporting the median view in the literature, given how politicised the whole process must be.
**
I also do not think there is any tendency to downplay risks in the climate science literature. Studies on publication bias in climate science find that effect sizes reported in the abstracts of climate change papers tend to be significantly inflated relative to the main text, and this is especially pronounced in high-impact journals. I have also found this from personal experience. Overall, I think in some cases the risks are overstated and in some they are understated, but there is no systematic pattern.
Probably the best way to examine whether my substantive conclusions are wrong would be to raise some substantive criticisms or carry out a red team; I would welcome this. I emphasise that if my arguments are correct, then the scale of biorisk is numerous orders of magnitude larger than that of climate change.
Peer review is very variable, so it's hard to say what "the depth of peer review" is. I checked the bits I was asked to check in a similar way as I would a journal article. No, I didn't really review the methodology myself. The process was also quite different from normal review in involving quite a few back-and-forth discussions; I felt more like I was helping make the work better rather than simply commenting on its quality. It also differed in that the decision about "publishing" was taken by John rather than a separate editor (as far as I know).
I would say that for all of the 'non-EA' reviewers, the review was very extensive, and this was also true of some of the EA reviewers (the others were more pushed for time). The non-EA expert reviewers were also compensated for their review, in order to incentivise them to review in depth.
It is true that I ultimately decided whether or not to publish, so this makes it different from peer review. Someone mentioned to me that some people take 'peer review' to mean that the reviewers have to agree for publication to go ahead, but this wasn't the case for this report. Though it was reviewed by experts, ultimately I decided whether or not to publish it in its final state.
Hi John,
Thanks for this openness, it's really appreciated. Any update as to whether the reviewers are happy for their comments to be shared?
Best
Gideon
So you didn’t get anyone reviewing your overall approach or methodology? Don’t you perhaps think this is a bit of an oversight given how influential this report is likely to be?
Oh sorry, I thought you meant ‘did they leave negative comments about these things’. Lots of people looked at the overall report and were free to point out things I missed.
I still don't really understand why you have such an issue with the methodology. I took my methodology to be: pick out all of the things in the climate literature that are relevant to the longtermist import of climate change, review the scientific literature on those things, arrive at my own view, send the report to reviewers, make some revisions, and iterate.
John, with all possible respect, that is not a theoretical framework.
I think one of your major errors in this piece (as betrayed by your methodology-as-categorisation comment above) is that you have an implicit ontology of factors as essentially separate phenomena that can perhaps have a few, likely simple, relationships. That is simply not how the Earth system or social systems work.
Thus, you think that if you’ve written a few paragraphs on each thing you deem relevant (chosen informally, liberally sprinkled with assertions, assumptions, and self-citations), you’ve covered everything.
It’s all very Cartesian.
Which impacts do you think I have missed? Can you explain why the perspective you take would render any of my substantive conclusions false?
I’m not sure what you’re talking about with self-citation. When do I cite myself?
Another way to look at it is to think about the impacts included in climate-economy models. Takakura et al (2019), which is one of the more comprehensive, includes:
- Fluvial flooding
- Coastal inundation
- Agriculture
- Undernourishment
- Heat-related excess mortality
- Cooling/heating demand
- Occupational-health costs
- Hydroelectric generation capacity
- Thermal power generation capacity
I discuss all of those except cooling/heating demand and hydro/thermal generation capacity, as they seem like small factors relative to overall climate risk. In addition, I discuss tipping points, runaway greenhouse effects, crime, civil and interstate conflict, and ecosystem collapse.
Sorry for jumping into this discussion, which I haven't actually read (I just saw this particular comment through the forum's front page), but one thing that's absent and that I'd be interested in is desertification. I didn't find any mention of it in the report.