Longtermism, aliens, AI

The language of longtermism focuses on future generations of humans. Should it explicitly include the flourishing of moral agents in general, rather than just future generations of people?

Imagine we find intelligent life on another planet whose lives are so long, rich, fulfilling, and peaceful that human lives look impoverished by comparison. Would a longtermist want to ensure that alien life of that enviable kind flourishes far into the future, even if this comes at the expense of human life?

The same thought experiment can be run with AI. If super-happy, super-moral artificial intelligences emerge, would a longtermist clear the way for their long-term proliferation in preference to humanity’s?