Huge question, which I’ll absolutely fail to do proper justice to in this reply! Very briefly, however:
I think that AI itself (e.g. language models) will help a lot with AI safety.
In general, my perception of society is that it’s very risk-averse about new technologies, has very high safety standards, and governments are happy to slow down the introduction of new tech.
I’m comparatively sceptical of ultra-fast takeoff scenarios and of very near-term AGI (though I think both are possible, and that’s where much of the risk lies). Combined with society’s risk-aversion, that means I expect a major endogenous societal response as we get closer to AGI.
I haven’t been convinced by the arguments for thinking that AI alignment is extremely hard. I thought that Ben Garfinkel’s review of Joe Carlsmith’s report was good.
That’s not to say that “it’s all fine”. But I’m certainly not on the “death with dignity” train.