I also think that AGI is altogether still quite unlikely in the next decade, but I don’t need AGI happening in the next decade to be worried about AI’s current ability to destabilise our world in a meaningful and potentially catastrophic way.
My main concern is that the pre-AI world was, IMO, not even as prepared as it could have been on “traditional risks” like cyber attacks, geopolitical instability, military escalation, democratic erosion, and so on. I see AI as a complicating factor and a multiplier of those risks, and my cautious nature makes me think we should hurry even more on disaster preparedness in general.
Even without AGI in the picture, I think we are underprepared to deal with the risks of misuse of current AI capabilities, which really just make it cheaper and easier to run cyber attacks and disinformation campaigns at scale (and to do other things too, like building biological weapons). I’m also very concerned about models being used by militaries to launch missiles and eliminate targets without human oversight. These things are already happening, and I think we are still not devoting enough attention to them.
In summary, because I feel we are not prepared enough TODAY, I see efforts to 1) limit the growth of AI capabilities and 2) have better safeguards against misuse of current capabilities as still important and valuable.
It’s very possible that the growth of AI capabilities will be halted or massively slowed anyway due to a number of factors you have already discussed (such as the AI bubble popping, or bottlenecks in hardware and materials), and I would cautiously welcome that as a net positive (for the reasons I mentioned). But I would also welcome voluntary efforts to curtail the growth of future AI capabilities, along with increased safety work, international cooperation, and regulation of current capabilities, as a way to buy us time to become better prepared.