Indeed, the specifics of killing all humans don’t receive that much attention. I think this is partially because the concrete method of killing (or disempowering) all humans does not matter much for practical purposes: once we have an AI that is smarter than all of humanity combined, wants to kill all humans, and is widely deployed and used, we are in an extremely bad situation, and clearly we should not build such a thing (for example, if you solve alignment, then you can build the AI without it wanting to kill all humans).
Since the AI is smarter than humanity, it can come up with plans that humans do not consider. And I think there are multiple ways for a superintelligent AI to kill all humans. Jakub Kraus mentions some ingredients in his answer.
As for public communication, a downside of telling a story about one concrete scenario is that it might give people a false sense of security. For example, if the story involves the AI hacking into a lot of servers, then people might think the solution is as easy as replacing all software in the world with formally verified, secure software. While such a defense might buy us some time, a superintelligent AI would probably find another way (e.g., earning money and buying servers instead of hacking into them).