...assuming that particular example is a concern about such an impact primarily on humans, could that be articulated as anthropocentric technopessimism?
Why would you want to describe it that way?
On reflection, I don’t think it can be called anthropocentric, no. There are four big groups of beings involved here: Humanity, Animals, Transhumanist post-humanity (hopefully without value-drift), and Unaligned AI. Three of those groups are non-human. Those concerned with AI alignment tend to be fighting in favor of more of those non-human groups than they are fighting against.
(It’s a bit hard to tell whether we would actually like animals once they could speak, wield guns, occupy vast portions of the accessible universe etc. Might turn out there are fundamental irreconcilable conflicts. None apparent yet, though.)