Hi John, thank you for this piece. I know it’s been a long time since you posted this but I wanted to respond to some of your thoughts.
“In my view, solar geoengineering is only likely to be used once warming is quite extreme, roughly exceeding around 4 degrees” - +4C is already endgame and catastrophic in my opinion. Considering that most of the excess heat is being absorbed by the oceans, and that the CO2 they take up is driving acidification, we’ll already be seeing significant sequestration losses as marine animals become unable to build calcium carbonate shells.
“This suggests that for solar geoengineering to be feasible, all major global powers would have to agree on the weather, a highly chaotic system.”—individual actors (e.g. India, China, Saudi Arabia, Brazil) may resort to solar geoengineering without worldwide consensus, especially if countries that aren’t suffering from climate change are actively blocking mitigation attempts while still polluting. Understanding the possible ramifications before people begin to experiment out of desperation is surely a good thing.
“We have had about 1 degree of warming thus far and, according to an IMF report, a further 1 degree of warming would be economically positive for many regions, especially Canada, Russia and Eastern Europe, and even potentially China (IMF report page 15).”—I think this is sketchy at best. The caveat in footnote 9 on page 14 should indicate how limited their conclusion is: it does not count weather effects, migration, ecological effects, etc.
“Russia is a crucial factor here: global warming seems likely to bring numerous economic benefits for Russia, freeing up the Russian Arctic for exploration and thawing potential farmland.” - the US and Canada have been far more disruptive to global climate agreements; the Paris Agreement was largely stymied by the Republican-controlled Congress. Permafrost thaw doesn’t free up usable farmland in significant amounts; the land in question is still primarily extremely low-viability, low-population-density forestland in Siberia. In fact, Russia is set to lose out significantly from permafrost thaw.
“Solar geoengineering research has clear risks and, given that we cannot deploy it at least for the next 50 years, there is no need to incur these costs now.”—this argument doesn’t hold weight for AGI research, and I don’t think it should for solar geoengineering. SG research is highly neglected and is minimal as a fraction of climate change research. The research will take decades to filter through to policy and international agreements, so it is worth starting research (not implementation) well before we are forced to use it.
“This would give us at least 20 years to cover the technical details and a governance framework.”—A lot of the warming is already locked into the ocean, so waiting another 30 years before starting to research will likely be too late. I’m not in favour of implementing solar geoengineering now, but researching the viability of these measures now seems promising: if not for application, then for global security, to dissuade rogue actors from implementing the measures on the basis of false or incomplete information and to encourage preventative policy decisions. This requires fundamental technical research to assess the risks.
“This seems to me like enough time, given that: Solar geoengineering is probably technically feasible with adaptations to various different current technologies.”—I’m not sure which current technologies you are referring to that could be adapted, but the more promising interventions, such as sulfur aerosol injection, are all much larger in scale and have almost no precedent (volcanic eruptions can only tell us so much).
Finally, “Another risk of solar geoengineering research is that it will uncover new technologies that could destabilise global civilisation. I discuss weaponisation risks in section 3.2 of my paper.”—As your paper says, current information on SAI indicates that it would take a highly technically adept state actor decades and tens of billions of dollars, and would still not yield a permanent doomsday device (it would also be obvious to other states and easily counteracted). All in all, I find it difficult to imagine that SAI research will discover a doomsday device that is easier and cheaper to produce than what already exists in a conventional nuclear weapons stockpile. Additionally, this doomsday device would also exterminate the user, whereas nuclear weapons can be directed at other states with no immediate, direct blowback (of course the political and social cost and likely retaliation from affiliated states are the reasons why we haven’t seen this happen yet). So the implication is that the malicious actor would also have to be suicidal. The device would also take time to work, which would give time to find a counteraction; if research inadvertently discovers this application, the window for finding a solution runs from that discovery until the time of implementation.
Adding to this, climate change is currently projected to be a major stressor on international politics, which can exacerbate nuclear X-risk and expand the vectors for natural pandemic risks, among others—so this should also be taken into account when considering whether SAI research may uncover new X-risks, as the baseline p(X-risk) for the coming decades is likely to be a rising curve rather than flat.
In conclusion, the injection of CO2 and methane into the atmosphere may already constitute a moral hazard and a dangerous weather-manipulation method, and I think that we should be researching (not implementing) potential technical geoengineering solutions, alongside many other potential (partial) solutions, in order to prevent the expected outcomes of climate change (as mentioned by others, SAI doesn’t reduce CO2 levels and so does nothing for ocean acidification and related issues). We should evaluate the risks and, if (as I expect) we find them to be too high due to uncertainty, use that information to construct international policy around this issue.
Hello, thanks for these interesting comments.
1. Do you think that 4 degrees is “endgame and catastrophic” in the sense of being a threat to the long-term flourishing of humanity, or something else?
I agree 4 degrees would be bad, but I don’t see how that is relevant to my argument.
2. “individual actors may resort to solar geoengineering without worldwide consensus”—I argue against this in my piece. If Brazil started stratospheric aerosol injection, it would affect weather in the US and other allies; it’s not a plausible piece of statecraft in my opinion. You mention the risk of ‘rogue actors’ deploying it, but I don’t see an argument against what I said in my piece on this. You are stating one common view in the literature, which is especially worried about unilateralism, but I find the multilateralist take more persuasive.
I am happy to offer a bet on this—what do you think the odds are of a single state unilaterally deploying stratospheric aerosol injection for more than 6 months over the next 30 years? I’ll offer £500 I win/ £500 you win.
Other things equal, understanding the ramifications of SG would be good, but there are costs to doing so, namely mitigation obstruction risk.
3. I agree it would’ve been better to look more in detail at the effects on Russia and that does update me towards it being bad for them.
4. Not sure I see why the connection to AGI research is relevant here. We should worry about neglect of a solution when that neglect is irrational. I think SG research is neglected for a reason—scientists and funders don’t want to do it because they are worried about the moral hazard.
5. “A lot of the warming is already locked into the ocean. Giving another 30 years before starting to research will likely be too late.” What do you mean by “a lot”? Emissions until 2080 will have a large effect on the eventual level of warming—it is still technically within our power to follow RCP2.6 or RCP8.5, which have hugely different implications for the probability of 4 degrees. This is why a wait-and-see approach is valuable.
6. Current technologies—I was going off McClellan et al. (2012): “We conclude that (a) the basic technological capability to deliver material to the stratosphere at million tonne per year rates exists today”. Smith and Wagner (2018) dissent from this but still say: “However, we also conclude that developing a new, purpose-built high-altitude tanker with substantial payload capabilities would neither be technologically difficult nor prohibitively expensive”.
You say “have almost no precedent (volcanic eruptions can only tell us so much)” - I don’t see why this is relevant to the question at hand of the technical feasibility of getting aerosols into the air.
7. I agree that the weaponisation risk seems small. Still, it’s hard to know in advance what research will turn up, and if we can avoid this risk without much cost then we should do so. It is a downside of research that it would be nice to avoid.
8. Climate change is a stressor of international political risks. I agree with that, but I don’t see how it is inconsistent with my argument.
9. In what sense are CO2 emissions a moral hazard? It’s usually classed as a free rider problem, not a moral hazard. If you mean that CO2 emissions are bad, I agree with you.
Yeah, the thought that research will rule out SG is plausible, and that would be a reason to research SG, especially with governance-focused research. I have some credence in that view and some in not researching it at all. The time-lag consideration (that SG is unlikely to be deployed in the next 50 years) pushes me towards delaying research as the way to go.
Hi John, thanks for the reply, I wasn’t expecting one after this long but I am pleased about the thoroughness and thought you put into it.
Yes, I think it’s likely to be catastrophic (not an extinction risk). If your assertion is that it is only likely to be used once warming reaches 4C+, then your whole argument is moot, because that is already past the point of no return. Starting from the premise that 4C+ is an unacceptable situation, you would immediately be making the case that SG should be funded and accelerated by your own subsequent argument—that current policy and research are preventing it from being available until well after it would be a viable temporary mitigation strategy (if it is on balance positive).
In the framing of a desperation-triggered, unilateral application of SG, this would not be in the context of “statecraft”. If millions are facing water shortages and famines in India, the country may have bigger concerns than its relations with allies, and could equally criticise those allies for not contributing to solutions for a problem that is disproportionately affecting it. Scapegoating a unilateral SG actor for adverse weather is possible, but that actor could also point to climate change as the cause of those effects and say they are trying to combat it, which no one else is doing. This very much gets into the mire of propaganda and spin, and I don’t think that a direct consequential chain of “public gets angry, state representing that public puts pressure on SG-using state, SG-using state acquiesces” is bulletproof as an argument. Saudi Arabia has such significant political and economic power that it can support terrorism while the West still calls it an ally. I think these things are far from obvious. I am less concerned about unilateralism, not for the state “peer pressure” reason you give, but because it would be a significant financial burden for any one state. I also see it as a potential risk, while you appear to discount it heavily and use that to support 1b), which then kicks the ball down the road for 50 years.
This bet seems severely against your interests, so I will treat it as rhetorical—your payout would only be accessible in 30 years, and on those time scales inflation, life expectancy, technological change, X-risks and simply remembering the bet are all not in your favour, whereas I would have the potential to cash out at any time before that deadline. I don’t particularly think deployment is likely, but I would still have taken that bet because of the conditions: 30 years is a long time.
We are, depending on estimates, a decent way off having AGI, so by your argument there would be no point doing AGI safety research now, since we don’t need it now and it carries some risks and costs. I think the same arguments that support that work apply in this context: the time required and the complexity of the issue prompt early investment (and early-stage research is currently neglected). This isn’t about the neglect but the timeline—any research is going to incur costs now and pay off later. AGI safety research has clear risks too, in potentially downplaying the risks of AGI and producing a Dunning-Kruger effect for the entire human race, accelerating development of an inherently uncontrollable technology. It’s a case of balancing the costs against the advantages of starting earlier.
Over 90% of the excess heat since the 1970s has been absorbed by the oceans. This conversation will only be relevant if we follow the higher RCP trajectories, where controversial techniques may be warranted, but that huge thermal sink will keep working to bring the atmosphere back into equilibrium with it even under SG. This undermines the wait-and-see approach, because it would be valuable to prepare in advance of the requirement.
I didn’t realise that this was referenced—if other people have investigated it then that’s fair enough. I didn’t think there was any precedent for creating and dispersing particulate matter in the atmosphere and keeping it there, considering weather, localisation and other challenges. My point was that we are not able to replicate a volcanic eruption in terms of getting material airborne, so global cooling via volcanism isn’t ironclad, direct evidence that SAI would work in the same way; I expect it would depend on the distribution method and location (altitude etc.). If people who have looked into this see it as feasible with current tech, then they likely have more information than me.
The counterfactual of not implementing SAI isn’t a flat line for X-risk: in the absence of mitigating effects, the risk of existential events increases over time as a result of the indirect effects of climate change left unmitigated by SAI (other things being equal). This would somewhat discount the increasing X-risk of SAI.
On rereading, I may be misinterpreting: I thought you were using moral hazard in the standard economic sense, but you may be defining it, as per previous papers, as plan B undermining plan A. I agree that in that case my use of the term doesn’t make sense, but I’m more familiar with its use as increasing exposure to risk because the actor doesn’t bear the full consequences of that risk, a sort of generalised externality. In that case CO2 emission is a moral hazard, because the systemic risk isn’t borne solely by the emitter, which incentivises overproduction.
Again, thanks for the response. I enjoyed the article, and your reply has helped me understand the sources of disagreement a bit better; some of them seem to be purely opinion-based or miscommunication. I also agreed with a lot of your points, although some not for the same reasons you have given. I would still like to see small-scale research on this topic being done and didn’t see that much wrong with doing it. I will have to read more of the moral hazard argument sources you mentioned, as they may be more convincing than those I have encountered previously.
Thanks for this—have some quick replies below
1. If solar geoengineering is not going to be used until we get to 4 degrees, then there is no point in researching it even if 4 degrees is catastrophic.
2. I agree that the constraints on state action are not perfect. As you say, the Saudis fund terrorism and major powers flex their muscles at each other in more or less overt ways. But the deployment of solar geoengineering would be on a different order—a huge and bold move. Do I think India would deploy solar geoengineering without China’s consent, risking its almost guaranteed ire? No.
The bet offer was not rhetorical and still stands if you would like it. We can pick an arbiter to make sure it happens. If you are worried about decaying attention, we could have a shorter timeframe? What do you think is the chance in the next 10 years that someone deploys it?
3. The debate about AI safety seems like a distraction to me—if you showed me that the case was analogous to solar geoengineering research, then I would argue that we should also delay AGI safety research for the same reasons. But it is disanalogous in numerous ways, so I don’t see the point in exploring the analogy. Nevertheless, one rationale for AGI safety research is that some people think there is a non-negligible chance of AGI arriving in the next 20 years; indeed, Toby Ord’s median estimate is that we will get it in the next 20 years. If you believe that, then the case for AI safety research now is very clear. That is one disanalogy.
Secondly, the downsides of AGI safety research seem minimal. There is some dim possibility that it could lead us to irrationally downplay the risks of AGI, but I have literally never seen this concern brought up before as a reason not to do AGI safety research; as far as I am aware, no one refrains from AGI safety research because of that consideration. In contrast, in climate there is a pretty much field-wide taboo against talking about solar geoengineering in even a vaguely positive way. This is basically for the reasons I outline.
5. Our anthropogenic emissions between 2020 and 2080 have a huge effect on how hot it will get. E.g. we can still (technically) follow either RCP2.6 or RCP8.5. On RCP2.6, median warming is less than 2 degrees; on RCP8.5, it is 4 degrees and beyond.
7. That seems right but the debate we’re having is about whether to research it not deploy it.