I think you mean to say ‘existential risk’ rather than ‘extinction risk’ in this comment?
I think reaching existential security is really hard even with totalitarianism: the world would need to be permanently locked into a totalitarian state.
Something I didn’t say in my other comment: I do think the future could be very, very long under a misaligned AI scenario. Such an AI would have some goals, and a very long time horizon would probably be useful for achieving them. This wouldn’t really matter if there were no sentient life around for the AI to exploit, but we can’t be sure of that, as the AI may find sentient life instrumentally useful.
Overall, I am interested to hear your view on the importance of AI alignment, as from what I’ve heard it sounds like it could still be important even taking your various views into account.