Even setting aside instrumental convergence (to be clear, I don’t think we can safely do this), there is still misuse risk and multi-agent coordination that needs solving to avoid doom (or at least global catastrophe).
I agree, but that implies pretty different priorities from what is currently being done, and it still implies that the danger from AI is overestimated, which bleeds into other things.