While “superintelligent AI would be dangerous” makes sense if you believe superintelligence is possible, I feel it would also be good to look at other risk scenarios from current and future AI systems.
I agree, and I think there’s a gap for thoughtful and creative folks with technical understanding to contribute to filling out the map here!
One person I think has made really interesting contributions here is Andrew Critch, for example on Multipolar Failure and Robust Agent-Agnostic Processes (I realise this is literally me sharing a link without much context, which was a conversation-failure-mode discussed in the OP, so feel free to pass on this). He has also made some attempts to discuss more breadth, e.g. here. Critch isn’t the only one.