Working on climate with an interest in AI, I found this a fascinating read.
But I am left a bit wanting as to what lessons the AI community can learn that would help it act better than the climate community did.
Could you articulate this?
(I think a lot of the parallels you cite are true, but I don’t think they offer many actionable implications; they feel more like negative updates on the difficulty of acting wisely on fast-moving coordination problems with deep uncertainty and heavy politicization.)
Hi. Apologies for the late response, I have not been well.
I agree that the article is more diagnostic than prescriptive. One reason is that it was published in two places: it found a home in the AI safety ecosystem, but also in the climate governance ecosystem. There are different lessons for each side, and had I attempted recommendations for both, the article would have been too long. I have, however, considered publishing a “part 2” for each community.
That said, there are three lessons I think the AI safety community can viably act on:
Establish capability thresholds as deployment gates. The Montréal Protocol worked because scientists gave policymakers specific, measurable indicators to act on. METR’s pre-deployment evaluations are a foundation, so the actionable step could be making specific capability benchmarks legally binding. That said, setting such benchmarks has proven difficult because frontier systems develop so rapidly.
Design any international framework with exit costs. Paris failed partly because exit from the Agreement was costless, whereas the Montréal Protocol included trade measures against non-signatories. AI governance treaty design should build in analogous mechanisms – market access conditions, or liability linkages, that make opting out costly. This will only be achievable through serious issue linkage and a broader semantic shift from “impact” to “safety”.
Route campaigns through existing political infrastructure. Trust and parliamentary access are scarce resources that take time to build, and climate communities (among others) have spent decades accumulating both. Routing AI governance proposals through these existing channels will be faster than starting from scratch, but our current coalition-building falls short of what this requires.
These are a few actions I believe we can viably pursue in both strategising and framework design. Thanks!