I’d be interested to hear the best case you’ve encountered in your conversations for why the AI safety folks are wrong and AGI is not, in principle, such a risk. I’m looking for the strongest argument against AGI x-risk, since many professional AI researchers seem to hold this view, mostly without writing down their reasons, which might be really relevant to the discussion.
I’m obviously heavily biased here because I think AI does pose a relevant risk.
I think the arguments that people made were usually along the lines of “AI will stay controllable; it’s just a tool”, “We have fixed big problems in the past, we’ll fix this one too”, “AI just won’t be capable enough; it’s just hype at the moment and transformer-based systems still have many failure modes”, “Improvements in AI are not that fast, so we have enough time to fix them”.
However, I think that most of the dismissive answers are based on vibes rather than on sophisticated engagement with the arguments made by AI safety folks.