I don’t think that developing sentience (the ability to experience positive and negative qualia) is necessary for an AI to pursue goals. I’m also not sure what it would even look like for an AI to select its own interests. That said, this may reflect my own lack of knowledge rather than a genuine lack of necessity or possibility.
To answer your main question, some have theorized that self-preservation is a useful instrumental goal for all sufficiently intelligent agents. I recommend reading about instrumental convergence. Hope this helps!