Somewhat related: Robert S. Pindyck on The Use and Misuse of Models for Climate Policy.

In short, his take (a) seems consistent with the claim that research and policy attention is being misallocated and (b) suggests a mechanism that might partly explain the misallocation.
Abstract (my emphasis):
In recent articles I have argued that integrated assessment models (IAMs) have flaws that make them close to useless as tools for policy analysis. IAM-based analyses of climate policy create a perception of knowledge and precision that is illusory and can fool policymakers into thinking that the forecasts the models generate have some kind of scientific legitimacy. However, some economists and climate scientists have claimed that we need to use some kind of model for policy analysis and that IAMs can be structured and used in ways that correct for their shortcomings. For example, it has been argued that although we know very little about key relationships in the model, we can get around this problem by attaching probability distributions to various parameters and then simulating the model using Monte Carlo methods. I argue that this would buy us nothing and that a simpler and more transparent approach to the design of climate change policy is preferable. I briefly outline what such an approach would look like.
A few highlights:
I believe that we need to be much more honest and up-front about the inherent limitations of IAMs. I doubt that the developers of IAMs have any intention of using them in a misleading way. Nevertheless, overselling their validity and claiming that IAMs can be used to evaluate policies and determine the SCC can end up misleading researchers, policymakers, and the public, even if it is unintentional. If economics is indeed a science, scientific honesty is paramount.
...
Yes, the calculations I have just described constitute a “model,” but it is a model that is exceedingly simple and straightforward and involves no pretense that we know the damage function, the feedback parameters that affect climate sensitivity, or other details of the climate–economy system. And yes, some experts might base their opinions on one or more IAMs, on a more limited climate science model, or simply on their research experience and/or general knowledge of climate change and its impact.
...
Some might argue that the approach I have outlined here is insufficiently precise. But I believe that we have no choice. Building and using elaborate models might allow us to think that we are approaching the climate policy problem more scientifically, but in the end, like the Wizard of Oz, we would only be drawing a curtain around our lack of knowledge
...
I have argued that the best we can do at this point is to come up with plausible answers to these questions, most likely by relying at least in part on numbers supplied by climate scientists and environmental economists, that is, utilize expert opinion. This kind of analysis would be simple, transparent, and easy to understand. It might not inspire the kind of awe and sense of scientific legitimacy conveyed by a large-scale IAM, but that is exactly the point.
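To make the abstract's point about Monte Carlo analysis concrete, here is a minimal toy sketch of what "attaching probability distributions to parameters and simulating" amounts to. The distributions, parameter values, and damage function below are invented purely for illustration and are not taken from any actual IAM; the point is that the resulting "uncertainty range" is just the assumed input distributions pushed through an assumed functional form.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# All distributions and functional forms below are invented for illustration;
# they are not the assumptions of any actual IAM.
climate_sensitivity = rng.lognormal(mean=np.log(3.0), sigma=0.4, size=n)  # degC per CO2 doubling (assumed)
damage_exponent = rng.uniform(1.5, 3.0, size=n)    # curvature of the damage function (assumed)
damage_scale = rng.uniform(0.002, 0.02, size=n)    # GDP fraction lost per degC**exponent (assumed)

warming_2100 = 1.2 * climate_sensitivity           # toy mapping from sensitivity to warming by 2100
damages = damage_scale * warming_2100 ** damage_exponent  # toy damage function (fraction of GDP)

print(f"median damages: {np.median(damages):.1%} of GDP")
print(f"5th-95th percentile: {np.percentile(damages, 5):.1%} to {np.percentile(damages, 95):.1%}")
```

Change any of the assumed distributions and the output range moves with them, which is Pindyck's point: the Monte Carlo machinery adds no information that was not already baked into the assumptions.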
Hi Peter, I think there is a nuance to disentangle – IAMs are confusingly used in two contexts: 1) models that try to optimize for some economically efficient social cost of carbon (and, by proxy, climate policies), and 2) models that attempt to simulate plausible futures. While Pindyck's writing is mostly about the first, most IPCC work concerns the second. Still, I absolutely agree with Pindyck's criticisms – they carry over well to the second category. We tried to cover that massive topic in the section about deeply uncertain factors and Robust Decision-Making, but with so few words, it is difficult to fully address those points.
A further tricky aspect is that, for the second type of model, the scenarios that are explored can themselves be misleading, or they can limit the analysis. Lamontagne et al. (2018) show, using a full factorial of input scenarios, that many different combinations of inputs can lead to the same outcomes. When we don't know how the future will actually unfold, the chosen archetype scenarios cloud our assessment.
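Here is a toy sketch of the kind of full-factorial sweep Lamontagne et al. run, to show why many input combinations can land on the same outcome. The scenario dimensions, values, and outcome formula below are made up and far simpler than their actual experiment.

```python
from itertools import product

# Toy full-factorial scenario sweep; all values and the outcome formula are
# invented for illustration and much simpler than Lamontagne et al. (2018).
population_growth = [0.5, 1.0, 1.5]    # % per year
gdp_growth = [1.0, 2.0, 3.0]           # % per year
decarbonization = [0.0, 1.5, 3.0]      # % per year decline in carbon intensity
climate_sensitivity = [2.0, 3.0, 4.5]  # degC per CO2 doubling

outcomes = {}
for pop, gdp, decarb, sens in product(population_growth, gdp_growth, decarbonization, climate_sensitivity):
    emissions_growth = pop + gdp - decarb                               # crude Kaya-style identity
    warming = round(1.0 + 0.25 * sens * max(emissions_growth, 0.0), 1)  # toy outcome mapping
    outcomes.setdefault(warming, []).append((pop, gdp, decarb, sens))

# Many distinct scenario combinations collapse onto the same warming outcome,
# so picking a few archetype scenarios hides this many-to-one structure.
for warming, combos in sorted(outcomes.items()):
    print(f"{warming:.1f} degC: reached by {len(combos)} of {3**4} combinations")
```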
Another aspect is that the inputs to these models are themselves outputs of other models. Pielke Jr. and Ritchie (2021) discuss this in Distorting the view of our climate future: The misuse and abuse of climate pathways and scenarios.

All of this is to say: yes, I agree that all models are wrong, but some are useful. Our argument is mainly that, through various approaches, we have some understanding of plausible temperature outcomes. We should prepare for all of these outcomes if we want to be robustly prepared.
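As a very stylized illustration of what "preparing robustly across plausible outcomes" can mean, here is a minimax-regret comparison of hypothetical preparation levels across a handful of assumed temperature outcomes. The costs, damage formula, and preparation levels are invented for illustration, and minimax regret is only one of several robustness criteria used in Robust Decision-Making-style analyses.

```python
# Toy robustness check: pick the preparation level whose worst-case regret
# across plausible temperature outcomes is smallest (minimax regret).
# All numbers below are invented for illustration.
temperatures = [1.5, 2.0, 2.5, 3.0, 4.0]                   # plausible warming outcomes (degC)
prep_levels = {"low": 1.0, "medium": 2.5, "high": 5.0}     # upfront preparation cost (% of GDP)

def total_cost(prep_cost, temp):
    # Residual damages fall with preparation spending; purely illustrative.
    damages = 0.8 * temp**2 / (1.0 + prep_cost)
    return prep_cost + damages

# Regret = cost of a choice minus the cost of the best choice for that outcome.
best_per_temp = {t: min(total_cost(c, t) for c in prep_levels.values()) for t in temperatures}
worst_regret = {
    name: max(total_cost(cost, t) - best_per_temp[t] for t in temperatures)
    for name, cost in prep_levels.items()
}

for name, regret in sorted(worst_regret.items(), key=lambda kv: kv[1]):
    print(f"{name:>6}: worst-case regret = {regret:.2f}% of GDP")
```

In this made-up example the middle option has the smallest worst-case regret, which is the flavor of conclusion a robustness analysis aims for: a choice that is defensible across the plausible outcomes rather than optimal for one of them.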