[Question] Why can't AGIs' utility outweigh humans' utility?

This is probably a Utilitarianism 101 question. Many/​most people in EA seem to accept the following as given:

1) Non-human agents' welfare can count toward utilitarian calculations (hence the concern for animal welfare)

2) AGI welfare cannot count toward utility calculations (otherwise an alternative to alignment would be building an AGI whose goal is to maximize copies of itself experiencing maximum utility, likely a much easier task)

This means there should be a compelling argument, or Schelling point, that includes animals but not AGIs in the category of moral patients. But I haven't seen one and can't easily think of a good one myself. What's the deal here? Am I missing some important basic idea about utilitarianism?

[To be clear, this is not an argument against alignment work. I'm mostly just trying to improve my understanding of the matter, but insofar as there has to be an argument, it's one against whatever branches of utilitarianism say that yielding the world to AIs is an acceptable choice.]