I also think that AGI is altogether still quite unlikely in the next decade, but I don't need AGI happening in the next decade to be worried about AI's current ability to destabilise our world in a meaningful and potentially catastrophic way.
My main concern is that the pre-AI world was, IMO, not even adequately prepared for "traditional risks": old risks like cyber attacks, geopolitical instability, military escalation, democracy erosion, and so on. I see AI as a complicating factor and a multiplier of those risks, and my cautious nature makes me think we should invest even more urgently in disaster preparedness in general.
Even without AGI in the picture, I think we are underprepared to deal with the risks associated with misuse of current AI capabilities, which mainly make it cheaper and easier to do things like cyber attacks and disinformation campaigns at scale (and other things too, like building biological weapons). I'm also very concerned about models being used in the military to launch missiles and eliminate targets without human oversight. These things are already happening, and I think we are still not devoting enough attention to them.
In summary, because I feel we are not prepared enough TODAY, I see efforts to 1) limit the growth of AI capabilities and 2) have better safeguards against misuse of current capabilities as still important and valuable.
It's entirely possible that AI capabilities' growth will be halted or massively slowed down anyway due to a number of factors that you have already discussed (such as the AI bubble popping, or bottlenecks in hardware materials, and so on), and I would cautiously welcome those as net positive developments, for the reasons I mentioned. But I would also welcome any voluntary efforts to curtail the growth of future AI capabilities / increase safety, international cooperation, and regulation around current capabilities, as a way to buy us time to become better prepared.
Much could be said in response to this comment. Probably the most direct and succinct response is my post "Unsolved research problems on the road to AGI".
Largely for the reasons explained in that post, I think AGI is much less than 0.01% likely in the next decade.