“* Will something less than superhuman AI pose similar extreme risks? If yes: How much less, how far in advance will we see it coming, when will it come, how easy is it to solve?”
I don’t think there is any disagreement that such risks exist. The key disagreement is whether there will be sufficient warning, and how easy they will be to solve or prevent.
Not to speak on their behalf, but my understanding of MIRI’s view is that such issues are likely, but they aren’t as fundamentally hard as ASI alignment; while there should be people working on pre-ASI risks, we need all the time we can get for solving the really hard parts of the eventual risk from ASI.
Maybe we should add: Does working on pre-ASI risks improve our prospects of solving ASI alignment, or does it worsen them? (I think that’s the core of reconciling near-term and long-term concerns about AI… but up to what point?)