I may try to write something on that in the future. I'm personally more worried about accidents, and I think that solving accidents also solves misuse pre-AGI. Post aligned AGI, misuse once again becomes a major worry.
I guess it’s possible that, post-AutoGPT, we are in a world where warning shots are much more likely, because there will be a lot more misuse than was previously expected.