I think there’s room for divergence here (e.g., I can imagine longtermists who focus only on the human race), but generally I expect that longtermism aligns with “the flourishing of moral agents in general, rather than just future generations of people.” My view draws largely on one of Michael Aird’s posts.
This is because many longtermists are worried about existential risk (x-risk), which refers specifically to the curtailing of humanity’s potential. That potential includes both our values (which could extend to protecting alien life, if we consider aliens moral patients and so factor them into our moral calculations) and our potential super- or non-human descendants.
However, I’m less certain that longtermists worried about x-risk would be happy to let AI ‘take over’ and for humans to go extinct. That seems to get into more transhumanist territory. Cf. the disagreement over Max Tegmark’s various AI aftermath scenarios, which span a spectrum of human/AI coexistence.