Not a current major risk, and it also turned out to be trivially easy to solve with minimal societal resources (a technological substitute was already available when regulation arrived, and it only required regulating a couple of hundred factories in select countries), so it doesn't feel like it belongs in the class of major risks.
I disagree: I think major risks should be defined in terms of their potential impact absent intervention, rather than discounting them for tractability.
Incidentally, there was some earlier speculation about what might have happened counterfactually if we had invented CFCs a century earlier, which you might find interesting.
While I also disagree that we should ignore tractability for the purpose you describe, the main point here is that choosing the ozone layer as an analogy suggests the problem is trivially easy. That framing doesn't help with solving the problem, and it already seems extremely likely that AI risk is much trickier than ozone layer depletion.
I think we’re talking past each other.