This argument appears very similar to the one I addressed in the essay about how delaying or accelerating AI will impact the well-being of currently existing humans. My claim is not that it isn’t bad if humanity goes extinct; I am certainly not saying that it would be good if everyone died.
I’m not supposing you are. Of course most people have a strong preference not to die. But beyond that, there is also a widespread preference for humanity not to go extinct. This is why it would be so depressing if, as in the movie Children of Men, a global virus made all humans infertile. Ending humanity is very different from, and much worse than, people merely dying at the end of their lives, which by itself doesn’t imply extinction. Many people would likely even sacrifice their own lives to save the future of humanity. We don’t have a similar preference for having AI descendants. That’s not speciesist; it’s just what our preferences are.
The economic behavior analysis falls short. People usually do not expect to have a significant impact on the survival of humanity. If people in past centuries had saved a large part of their income for “future generations” (including for us), this would likely have had almost no impact on the survival of humanity, and probably not even a significant impact on our present quality of life. The expected utility of saving money for future generations is simply too low compared to spending it on themselves in the present. This just means that people (reasonably) expect to have little influence on the survival of humanity, not that they are relatively okay with humanity going extinct. If people could somehow directly influence, perhaps via voting, whether to trade a few extra years of life for a significant increase in the likelihood of humanity going extinct, I think the outcome would be predictable.
Though indeed I’m not commenting here specifically on what delaying AI could realistically achieve. My main point was only that the preference for humanity not going extinct is significant and easily outweighs any preference for future AIs coming into existence, without relying on immoral speciesism.