Hi John, thanks for the reply. I wasn’t expecting one after this long, but I appreciate the thoroughness and thought you put into it.
Yes, I think it’s likely to be catastrophic (though not an extinction risk). If your assertion is correct that SG is only likely to be used once warming reaches 4°C+, your whole argument is moot, because that point is already past the point of no return. Starting from the premise that 4°C+ is an unacceptable situation, you would immediately be making the case that SG should be funded and accelerated, by your own subsequent argument: current policy and research are preventing it from being available until well after it would be a viable temporary mitigation strategy (if it is on balance positive).
In the framing of a desperation-triggered, unilateral application of SG, this would not happen in the context of “statecraft”. If millions in India are facing water shortages and famines, India may have bigger concerns than its relations with allies, and could equally criticise those allies for not contributing to solutions to a problem that disproportionately affects it. Scapegoating a unilateral SG actor for adverse weather is possible, but that actor could also attribute those effects to climate change and say it is trying to combat them, which no one else is doing. This quickly gets into the mire of propaganda and spin, and I don’t think the direct causal chain of “public gets angry, state representing that public puts pressure on the SG-using state, SG-using state acquiesces” is bulletproof as an argument. Saudi Arabia has such significant political and economic power that it can support terrorism while the West calls it an ally. I think these things are far from obvious. I am less worried about unilateralism, not for the state “peer pressure” reason you give, but because deployment would be a significant financial burden for any one state. I also see it as a potential risk where you appear to discount it heavily, and you use this to support 1b), which then kicks the ball down the road for 50 years.
This bet seems severely against your interests, so I will treat it as rhetorical: your payout would only be accessible in 30 years, and on those timescales inflation, life expectancy, technological change, X-risks and simply remembering the bet are all not in your favour, whereas I would have the potential to cash out at any time before that deadline. I don’t particularly think deployment is likely, but I would still have taken that bet because of those conditions; 30 years is a long time.
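For what it’s worth, the time-value point can be made concrete with a quick sketch. All the numbers here are made up purely for illustration (a $1000 stake, a 3% annual discount rate):

```python
# Illustrative only: how a fixed nominal payout erodes with time.
# The stake and the 3% rate are hypothetical, not terms of the actual bet.

def present_value(payout, years, annual_rate):
    """Discount a nominal payout received `years` from now."""
    return payout / (1 + annual_rate) ** years

# A $1000 payout collected in 30 years, at 3% annual inflation:
pv_30 = present_value(1000, 30, 0.03)  # ~ $412 in today's money

# The same payout collected after only 5 years:
pv_5 = present_value(1000, 5, 0.03)    # ~ $863 in today's money
```

The asymmetry is that one side must wait the full term to collect, while the other can win at any point before the deadline, so the same nominal stake is worth far less to the long side.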
We are, depending on the estimate, a decent way off having AGI, so by your argument there is no point doing AGI safety research now, since we don’t need it yet and it carries some risks and costs. I think the same arguments that support that work apply in this context: the time required and the complexity of the issue both prompt early investment (and early-stage research is currently neglected). This isn’t about neglectedness but about the timeline: any research is going to incur costs now and pay off later. AGI safety research has its own clear risks, for instance leading us to downplay the risks of AGI and producing a Dunning-Kruger effect for the entire human race, accelerating development of an inherently uncontrollable technology. It’s a case of balancing the costs against the advantages of starting earlier.
Over 90% of the warming since the 1970s has been absorbed by the oceans. This conversation will only be relevant if we are following the higher RCP trajectories, where controversial techniques may be warranted, but the oceans are a huge thermal sink that will work to bring the atmosphere back into equilibrium even with SG in use. This undermines the wait-and-see approach, because it would be valuable to prepare in advance of the requirement.
I didn’t realise that this was referenced; if other people have investigated it, then that’s fair enough. I didn’t think there was any precedent for creating particulate matter, dispersing it into the atmosphere and keeping it there, given weather, localisation and other challenges. My point was that we are not able to replicate a volcanic eruption in terms of getting material airborne, so global cooling via volcanism isn’t ironclad, direct evidence that SAI would work the same way; I expect it would depend on the distribution method and location (altitude etc.). If the people who have looked into this see it as feasible with current technology, then they likely have more information than I do.
The counterfactual of not implementing SAI isn’t a flat line for X-risk: absent mitigation, the risk of existential events increases over time through the indirect effects of unmitigated climate change (other things being equal). This would somewhat offset the increasing X-risk of SAI.
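To make the comparison I have in mind concrete, here is a toy sketch in which every number is hypothetical: the no-SAI scenario has a hazard rate that grows each decade, while the SAI scenario trades that growth for a flat climate hazard plus a small constant termination-shock risk:

```python
# Toy model, all hazard rates hypothetical. The point is only structural:
# a growing counterfactual hazard partly offsets a constant SAI hazard.

def cumulative_risk(hazards):
    """P(at least one existential event) given independent per-decade hazards."""
    survival = 1.0
    for h in hazards:
        survival *= (1 - h)
    return 1 - survival

# No SAI: climate-driven hazard grows 50% per decade over five decades.
no_sai = [0.001 * (1.5 ** t) for t in range(5)]

# With SAI: flat climate hazard plus a small constant termination risk.
with_sai = [0.001 + 0.0005 for _ in range(5)]
```

Under these invented numbers the no-SAI trajectory ends up with the higher cumulative risk, which is the sense in which a rising counterfactual “discounts” SAI’s own risk; different assumed rates could of course flip the conclusion.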
On rereading I may be misinterpreting you: I thought you were using moral hazard in the standard economic sense, but you may be defining it, as in previous papers, as plan B undermining plan A. In that case I agree my use of the term doesn’t make sense, but I’m more familiar with its use as increasing exposure to risk because the actor doesn’t bear the full consequences of that risk, a sort of generalised externality. On that reading, CO2 emission is a moral hazard because the systemic risk isn’t borne by the emitter alone, which incentivises overproduction.
Again, thanks for the response. I enjoyed the article, and your reply has helped me understand the sources of disagreement a bit better; some of them seem to be purely matters of opinion or miscommunication. I also agreed with many of your points, although not always for the reasons you give. I would still like to see small-scale research on this topic being done and didn’t see much wrong with doing it. I will have to read the moral hazard sources you mentioned, as they should be more convincing than those I have encountered previously.
Thanks for this—have some quick replies below
1. If solar geoengineering is not going to be used until we get to 4 degrees, then there is no point in researching it even if 4 degrees is catastrophic.
2. I agree that the constraints on state action are not perfect. As you say, the Saudis fund terrorism and major powers flex their muscles at each other in more or less overt ways. But the deployment of solar geoengineering would be on a different order—a huge and bold move. Do I think India would deploy solar geoengineering without the consent of China, risking China’s almost guaranteed ire? No.
The bet offer was not rhetorical and still stands if you would like it. We can pick an arbiter to make sure it happens. If you are worried about decaying attention, we could have a shorter timeframe? What do you think is the chance in the next 10 years that someone deploys it?
3. The debate about AI safety seems like a distraction to me—if you showed me that the case was analogous to solar geoengineering research, then I would argue that we should also delay AGI safety research for the same reasons. But it is disanalogous in numerous ways, so I don’t see the point in exploring the analogy. Nevertheless, one rationale for AGI safety research is that some people think there is a non-negligible chance of AGI arriving in the next 20 years. Indeed, Toby Ord’s median estimate is that we will get it within 20 years. If you believe that, then the case for AI safety research now is very clear. That is one disanalogy.
Secondly, the downsides of AGI safety research seem minimal. There is some dim possibility that it could lead us to irrationally downplay the risks of AGI, but I have literally never seen this concern raised as a reason not to do AGI safety research; as far as I am aware, no one has refrained from AGI safety research because of it. In contrast, in climate there is a more or less field-wide taboo against talking about solar geoengineering in even a vaguely positive way. This is basically for the reasons I outline.
5. Our anthropogenic emissions between 2020 and 2080 have a huge effect on how hot it will get. For example, we can still (technically) follow RCP2.6 or RCP8.5: on RCP2.6, median warming is less than 2 degrees; on RCP8.5, it is 4 degrees and beyond.
7. That seems right, but the debate we’re having is about whether to research it, not whether to deploy it.