Hello thanks for these interesting comments
1. Do you think that 4 degrees is “endgame and catastrophic” in the sense of being a threat to the long-term flourishing of humanity, or something else?
I agree 4 degrees would be bad, but I don’t see how that is relevant to my argument.
2. “individual actors may resort to solar geoengineering without worldwide consensus” - I argue against this in my piece. If Brazil starts doing stratospheric aerosol injection, this would affect weather in the US and other allies—it’s not a plausible piece of statecraft in my opinion. You mention the risk of ‘rogue actors’ deploying it, but I don’t see an argument against what I said in my piece on this. You are stating one common view in the literature, which is especially worried about unilateralism, but I find the multilateralist take more persuasive.
I am happy to offer a bet on this—what do you think the odds are of a single state unilaterally deploying stratospheric aerosol injection for more than 6 months over the next 30 years? I’ll offer £500 I win/ £500 you win.
Other things equal, understanding the ramifications of solar geoengineering (SG) would be good, but there are costs to doing so, namely mitigation obstruction risk.
3. I agree it would’ve been better to look in more detail at the effects on Russia, and that does update me towards it being bad for them.
4. Not sure I see why the connection to AGI research is relevant here. We should worry about neglect of a solution when that neglect is irrational. I think SG research is neglected for a reason—scientists and funders don’t want to do it because they are worried about the moral hazard.
5. “A lot of the warming is already locked into the ocean. Giving another 30 years before starting to research will likely be too late.” What do you mean by “a lot”? Emissions until 2080 will have a large effect on the level of warming—it is still technically within our power to follow RCP2.6 or RCP8.5, which have hugely different implications for the probability of 4 degrees. This is why a wait-and-see approach is valuable.
6. Current technologies—I was going off McClellan et al. (2012): “We conclude that (a) the basic technological capability to deliver material to the stratosphere at million tonne per year rates exists today”. Smith and Wagner (2018) dissent from this but still say: “However, we also conclude that developing a new, purpose-built high-altitude tanker with substantial payload capabilities would neither be technologically difficult nor prohibitively expensive”.
You say “have almost no precedent (volcanic eruptions can only tell us so much)” - I don’t see why this is relevant to the question at hand of the technical feasibility of getting aerosols into the air.
7. I agree that the weaponisation risk seems small. Still, it’s hard to know in advance what research will turn up, and if we can avoid this risk without much cost then we should do so. It is a downside of research that it would be nice to avoid.
8. Climate change is a stressor of international political risks. I agree with that, but I don’t see how it is inconsistent with my argument.
9. In what sense are CO2 emissions a moral hazard? It’s usually classed as a free rider problem, not a moral hazard. If you mean that CO2 emissions are bad, I agree with you.
Yes, the thought that research will rule out SG is plausible, and that would be a reason to research it, especially governance-focused research. I have some credence in that view and some in not researching it at all. The time-lag consideration, that SG is unlikely to be deployed in the next 50 years, pushes me towards delaying research as the way to go.
Hi John, thanks for the reply, I wasn’t expecting one after this long but I am pleased about the thoroughness and thought you put into it.
Yes, I think it’s likely to be catastrophic (though not an extinction risk). If your assertion is that it is only likely to be used once warming reaches 4C+, your whole argument is moot, because that is already past the point of no return. Starting from the premise that 4C+ is an unacceptable situation, you would immediately be making the case that SG should be funded and accelerated by your own subsequent argument: that current policy and research are preventing it from being available until well after it would be a viable temporary mitigation strategy (if it is on balance positive).
In the framing of desperation-triggered, unilateral deployment of SG, this would not happen in the context of “statecraft”. If millions in India are facing water shortages and famines, India may have bigger concerns than its relations with allies, and could equally criticise those allies for not contributing to solutions to a climate problem that is disproportionately affecting it. Scapegoating a unilateral SG actor for adverse weather is possible, but that actor could also point to climate change as the cause of those effects and say they are trying to combat it when no one else is. This very much gets into the mire of propaganda and spin, and I don’t think the direct causal chain of “public gets angry, state representing that public puts pressure on the SG-using state, SG-using state acquiesces” is bulletproof as an argument. Saudi Arabia has such significant political and economic power that it can support terrorism while the West calls it an ally. I think these things are far from obvious. I am less concerned about unilateralism, but not for the state “peer pressure” reason you give; rather, because deployment would be a significant financial burden for any one state. I also see it as a potential risk, while you appear to discount it heavily and use this to support 1b), which then kicks the ball down the road for 50 years.
This bet seems severely against your interests, so I will treat it as rhetorical: your payout would only be accessible in 30 years, and on those time scales inflation, life expectancy, technological change, X-risks and simply remembering the bet are all not in your favour, whereas I would have the potential to cash out at any time before the deadline. I don’t particularly think unilateral deployment is likely, but I would still have taken that bet because of the conditions; 30 years is a long time.
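The inflation point can be made concrete with a rough present-value calculation. The 2% and 4% annual inflation rates below are illustrative assumptions of mine, not figures from the discussion:

```python
# Rough illustration: the real (inflation-adjusted) value of a £500
# payout received 30 years from now. The 2% and 4% inflation rates
# are assumed for illustration only.

def real_value(nominal: float, inflation: float, years: int) -> float:
    """Present value of a future nominal payout under constant annual inflation."""
    return nominal / (1 + inflation) ** years

for rate in (0.02, 0.04):
    print(f"{rate:.0%} inflation: £{real_value(500, rate, 30):.2f} in today's money")
```

Even at modest inflation the payout loses roughly half to two-thirds of its real value over 30 years, which is the asymmetry described above: the side that can only collect at the 30-year mark is playing for a much smaller real prize than the side that can collect as soon as deployment happens.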
We are, depending on estimates, a decent way off having AGI, so by your argument there is no point doing AGI safety research now, since we don’t need it now and there are risks and costs associated with it. I think the same arguments that support that work apply in this context: the time required and the complexity of the issue prompt early investment (and early-stage research is currently neglected). This isn’t about the neglect but about the timeline: any research is going to incur costs now and pay off later. AGI safety research has clear risks of its own, in potentially downplaying the risks of AGI and producing a Dunning-Kruger effect for the entire human race, accelerating development of an inherently uncontrollable technology. It’s a case of balancing those costs against the advantages of starting earlier.
Over 90% of the excess heat from warming since the 1970s has been absorbed by the oceans. This conversation will only be relevant if we are following the higher RCP trajectories, where controversial techniques may be warranted, but the ocean is a huge thermal reservoir that will keep pulling the atmosphere back towards a warmer equilibrium even if SG is used. This undermines the wait-and-see approach, because it would be valuable to prepare in advance of the requirement.
I didn’t realise that this was referenced; if other people have investigated it then that’s fair enough. I didn’t think there was any precedent for creating particulate matter, dispersing it into the atmosphere and keeping it there, considering weather, localisation and other challenges. My point was that we are not able to replicate a volcanic eruption in terms of getting material airborne, so global cooling via volcanism isn’t ironclad, direct evidence that SAI would work in the same way; I expect it would depend on the distribution method and location (altitude etc.). If people who have looked into this see it as feasible with current tech, then they likely have more information than me.
The counterfactual of not implementing SAI isn’t a flat line for X-risk: in the absence of mitigating effects, the risk of existential events increases over time as a result of the indirect effects of climate change left unmitigated by SAI (other things being equal). This would somewhat offset the increasing X-risk of SAI.
On rereading I may be misinterpreting you. I thought you were using moral hazard in the standard economic sense, but you may be defining it, as per previous papers, as plan B undermining plan A. In that case I agree my use of the term doesn’t make sense, but I’m more familiar with its use as increasing exposure to risk because the actor doesn’t bear the full consequences of that risk, a sort of generalised externality. In that sense CO2 emission is a moral hazard, because the systemic risk isn’t borne by that actor alone, which incentivises overproduction.
Again, thanks for the response. I enjoyed the article, and your reply has helped me understand the sources of disagreement a bit better; some of them seem to be purely matters of opinion or miscommunication. I also agreed with a lot of your points, although some not for the same reasons you gave. I would still like to see small-scale research on this topic being done and didn’t see that much wrong with doing it. I will have to read more of the moral hazard sources you mentioned, since they should be more convincing than those I have encountered previously.
Thanks for this—have some quick replies below
1. If solar geoengineering is not going to be used until we get to 4 degrees, then there is no point in researching it now, even if 4 degrees is catastrophic.
2. I agree that the constraints on state action are not perfect. As you say, the Saudis fund terrorism and major powers flex their muscles at each other in more or less overt ways. But the deployment of solar geoengineering would be on a different order: a huge and bold move. Do I think India would deploy solar geoengineering without the consent of China, risking China’s almost guaranteed ire? No.
The bet offer was not rhetorical and still stands if you would like it. We can pick an arbiter to make sure it happens. If you are worried about decaying attention, we could have a shorter timeframe? What do you think is the chance in the next 10 years that someone deploys it?
3. The debate about AI safety seems like a distraction to me. If you showed me that the case was analogous to solar geoengineering research, then I would argue that we should also delay AGI safety research for the same reasons. But it is disanalogous in numerous ways, so I don’t see the point in exploring the analogy. Nevertheless, one rationale for AGI safety research is that some people think there is a non-negligible chance of AGI in the next 20 years; indeed, Toby Ord’s median estimate is that we will get it in the next 20 years. If you believe that, then the case for AI safety research now is very clear. That is one disanalogy.
Secondly, the downsides of AGI safety research seem minimal. There is some dim possibility that it could lead us to irrationally downplay the risks of AGI, but I have literally never seen this concern brought up before as a reason not to do AGI safety research; as far as I am aware, no one is refraining from AGI safety research because of that consideration. In contrast, in climate there is a pretty much field-wide taboo against talking about solar geoengineering in even a vaguely positive way. This is basically for the reasons I outline.
5. Our anthropogenic emissions between 2020 and 2080 have a huge effect on how hot it will get. For example, we can still (technically) follow RCP2.6 or RCP8.5. On RCP2.6, median warming is less than 2 degrees; on RCP8.5, it is 4 degrees and beyond.
7. That seems right, but the debate we’re having is about whether to research it, not whether to deploy it.
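The trajectory-dependence point in (5) can be sketched as a toy decision rule. The median-warming numbers and the 4-degree trigger below are illustrative stand-ins for the rough figures quoted in the discussion, not modelling results:

```python
# Toy sketch of the wait-and-see argument: whether SG deployment (and so
# deployment-motivated research) becomes relevant depends on which
# emissions trajectory we end up following. Warming values are illustrative.

MEDIAN_WARMING_C = {"RCP2.6": 1.8, "RCP8.5": 4.3}

def sg_becomes_relevant(scenario: str, trigger_c: float = 4.0) -> bool:
    """SG is only on the table on trajectories whose median warming
    reaches the assumed 4-degree deployment trigger."""
    return MEDIAN_WARMING_C[scenario] >= trigger_c

print(sg_becomes_relevant("RCP2.6"))  # False
print(sg_becomes_relevant("RCP8.5"))  # True
```

On this framing, the value of starting research now hinges on the probability assigned to the high-emissions trajectory, which is the crux both sides keep returning to.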