A persistent concern about solar geoengineering research is moral hazard: the worry that attention to plan B will reduce commitment to plan A. The fear is that having solar geoengineering as a backup will weaken commitment to reducing carbon emissions, which almost all researchers agree is the top priority.
I’m not really sure why this would be a problem, though I read the relevant sections of your paper, so perhaps I just didn’t understand them properly. Moral hazard occurs when one party (Agent) pays another (Insurer) to cover the damages of some future event that Agent is partly responsible for. Because of this insurance, Agent has less incentive to avoid or mitigate the event. Insurer now has more incentive, but if it is cheaper for Agent to mitigate than for Insurer, total mitigation will go down (or total expenditure on mitigation will have to go up). This is inefficient, but hard to avoid because of imperfect contracting and monitoring.
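To make that inefficiency concrete, here is a toy sketch; the loss and cost figures are entirely invented for illustration:

```python
# Toy moral-hazard arithmetic (every number here is invented).
# Agent can prevent a future loss cheaply; Insurer can only prevent it
# at higher cost. Insurance shifts the incentive from Agent to Insurer.

LOSS = 100          # damage if the event occurs and nobody mitigates
AGENT_COST = 20     # Agent's cost to prevent the event
INSURER_COST = 60   # Insurer's cost to prevent the event

# Without insurance Agent bears the loss, so Agent compares preventing
# the event (20) with suffering it (100), and prevents it.
spend_without_insurance = min(AGENT_COST, LOSS)   # -> 20

# With insurance Agent is covered and does nothing. Insurer compares
# preventing the event (60) with paying out (100), and prevents it.
spend_with_insurance = min(INSURER_COST, LOSS)    # -> 60

print(spend_without_insurance)  # 20
print(spend_with_insurance)     # 60
# The event is avoided either way, but total expenditure on mitigation
# rises from 20 to 60: the inefficiency described above.
```

If Insurer’s prevention cost exceeded the loss, it would simply pay out and total mitigation itself would fall, which is the other branch of the argument.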
But in the geoengineering case, Agent and Insurer are the same party: the researchers/governments. This doesn’t seem so much like moral hazard as a simple substitution effect, in the same way that solar and geothermal energy are (imperfect) substitutes. Given the optionality inherent in research, it seems you need a strong irrationality story to claim there will be a net-negative expected substitution effect.
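Here is a toy expected-value version of that optionality point; the probability, baseline, and payoff numbers are all invented:

```python
# Toy optionality arithmetic (all values invented). A backup option that
# is exercised only when it helps cannot lower expected welfare on its
# own; the harm has to come from cutting mitigation in response to it.

P_BAD = 0.2         # chance that mitigation alone proves insufficient
BASELINE = 100.0    # expected welfare with full mitigation and no backup
BACKUP_GAIN = 30.0  # welfare recovered by deploying the backup in the bad state

# Rational response: keep mitigating, exercise the option only if needed.
rational = BASELINE + P_BAD * BACKUP_GAIN  # 106.0 >= 100.0

# Irrational response: treat the backup as a substitute and cut
# mitigation, losing `cut` units of expected welfare across all states.
def with_substitution(cut):
    return BASELINE - cut + P_BAD * BACKUP_GAIN

print(rational)               # 106.0
print(with_substitution(2))   # 104.0, still a net gain
print(with_substitution(10))  # 96.0, net-negative only with a large cut
```

The backup is net-negative only when the mitigation cut exceeds the option’s expected value, which is exactly the irrationality story in question.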
I agree the weaponisation risks make sense as a reason not to do it, but they seem separate from the moral hazard idea.
I agree it’s not technically the right name, but people generally know what it means, which was important for a blog post. In the paper I actually call it the mitigation obstruction argument, and I explicitly discuss the irrationality assumption it requires. I think the question of how irrationally people and governments will respond to research is an open one.