A couple of comments:
1. Evolution does not imply that every organism has an “intrinsic desire for survivability and reproduction.” Rather, it implies that organisms will tend to act in ways that would have led to survival and reproduction in their ancestral environment; those actions need not be motivated by any drive to survive and reproduce. In slogan form: We are adaptation executors, not fitness maximizers.
For example, the reason people eat Twinkies nowadays is not that they want to survive or reproduce; it's that they like the taste! This preference for sugary foods would have been fitness-enhancing in our ancestral environment, but it is maladaptive in our modern one. Yet people keep eating Twinkies anyway.
2. The “core” of your argument doesn't seem sound to me. You say that hyper-sentient (V1) aliens wouldn't eat humans because super-duper-sentient (V2) aliens might eat them, V3 aliens might eat the V2 aliens, and so on. But… the mere possibility of other aliens is not a strong reason to do anything. After all, it's hypothetically possible that the V2 aliens would be positively elated that the V1 aliens are eating humans and would reward them for it. What matters is not what's possible but what the expected effects of one's actions are.
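To make the expected-value point concrete, here's a toy calculation. Every number in it is invented purely for illustration; the point is just the structure of the reasoning:

```python
# Toy expected-value sketch: should the V1 aliens refrain from eating humans?
# All probabilities and payoffs below are made up for illustration only.

p_v2 = 0.01             # V1's credence that punishing V2 aliens exist
u_eat_unpunished = 10   # payoff to V1 from eating humans if no V2 show up
u_eat_punished = -1000  # payoff to V1 if V2 aliens exist and retaliate
u_refrain = 0           # payoff from leaving humans alone

ev_eat = p_v2 * u_eat_punished + (1 - p_v2) * u_eat_unpunished
ev_refrain = u_refrain

print(f"EV(eat) = {ev_eat:.2f}, EV(refrain) = {ev_refrain:.2f}")
# With p_v2 = 0.01: EV(eat) = 0.01*(-1000) + 0.99*10 = -0.10, so refraining wins.
# With p_v2 = 0:    EV(eat) = 10, and eating wins.
```

The mere logical possibility of V2 aliens matters only insofar as it moves p_v2 away from zero; once p_v2 = 0, the whole hierarchy of hypothetical eaters does no work at all.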
If V2 aliens don’t actually exist and the V1 aliens know this, what other prudential reason would the V1 aliens have for refraining from eating humans? I don’t see any.
Scott Alexander discusses this in his post here. I'm skeptical that humans will be able to align AI with morality anytime soon. Humans have been disagreeing about what morality consists of for a few thousand years; it's unlikely we'll solve the issue in the next 10.