6. Why I’m more concerned about human alignment than AI alignment; why rapidly accelerating technology will make terrorism an insurmountable existential threat within a relatively short timeframe
I was thinking about the human alignment portion of this earlier today—how bad actors with future powerful (non-AGI) AI systems at their disposal could cause a tremendous amount of damage. I haven’t thought through just how severe this damage might get and would be interested in reading your thoughts on this. What are the most significant risks from unaligned humans empowered by future technology?
Yes! I think the main threats are hard to predict, but they mostly involve terrorism with advanced technology: for example, weaponized black holes, intentional grey goo, highly coordinated nuclear attacks, and probably many, many other hyper-advanced technologies we can't even conceive of yet. If technology continues to accelerate, I think things could get pretty bad pretty fast, and even if we're somehow wrong about AI, human malevolence will be a massive challenge.