I’m obviously heavily biased here because I think AI does pose a relevant risk.
I think the arguments people made were usually along the lines of: “AI will stay controllable; it’s just a tool”, “We have fixed big problems in the past, so we’ll fix this one too”, “AI just won’t be capable enough; it’s just hype at the moment, and transformer-based systems still have many failure modes”, or “Improvements in AI are not that fast, so we have enough time to fix problems as they arise”.
However, I think that most of the dismissive answers are based on vibes rather than sophisticated responses to the arguments made by AI safety folks.