This is a very difficult question to answer, as it depends heavily on the specifics of each scenario, which groups of animals you consider sentient, and your default estimates of how worthwhile their lives are. For AI, I think the standard paper-clipper / misaligned-superintelligence scenario probably doesn't go so far as to kill all complex biological life immediately, since, unlike humans, most animals would not pose a threat to its goals or compete with it for resources. However, in the long run, I assume much of that life would die off as the AI develops industry without regard for environmental effects (robots do not need much clean air, water, or low-acidity oceans). In the very long run, I do not see why an AI system would not construct a Dyson sphere.
Ultimately, however, I do not think this really changes the utility of these scenarios, as human civilization is also mostly indifferent to animals. The existence of factory farming (which will last longer with humans, since humans enjoy meat while an AI probably will not care about it) will probably outweigh any pro-wild-animal-welfare efforts pursued by humanity.
For non-AI extinction risks (nuclear war, asteroids, supervolcanoes), sentient animal populations will sharply decline and then gradually recover, just as they have after previous mass extinction events.
TLDR:
For essentially all extinction scenarios, the utility calculation comes down to the difference between long-term and short-term human flourishing weighed against short-term factory farming of animals raised for humans. Wild animals have similar expected utility in all scenarios, especially if you think their lives are roughly net-neutral on average, as they will either persist unaffected or die (perhaps at some point humanity will intervene to help wild animals have net-positive lives, but this is highly uncertain).
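To make the comparison above concrete, here is a minimal sketch of that utility calculation. All the numbers and the function name are illustrative placeholders of my own invention, not estimates from the original reasoning; the only structural claims are that wild-animal welfare is assumed roughly net-neutral in every scenario and that extinction zeroes out both the human-flourishing and factory-farming terms.

```python
# Hypothetical sketch of the expected-utility comparison described above.
# All magnitudes are arbitrary placeholders, NOT real welfare estimates.

def scenario_utility(human_flourishing, factory_farming_disutility,
                     wild_animal_utility=0.0):
    """Net utility of a scenario: human welfare, minus farmed-animal
    suffering, plus wild-animal welfare (assumed ~net-neutral)."""
    return human_flourishing - factory_farming_disutility + wild_animal_utility

# Survival: long-term human flourishing, but factory farming persists a while.
survival = scenario_utility(human_flourishing=100.0,
                            factory_farming_disutility=30.0)

# Extinction: both the human and farmed-animal terms go to ~zero.
extinction = scenario_utility(human_flourishing=0.0,
                              factory_farming_disutility=0.0)

# Since wild animals contribute ~0 in both branches, the comparison
# reduces to human flourishing vs. factory-farming suffering.
print(survival, extinction)  # 70.0 0.0
```

Because the wild-animal term cancels out under the net-neutral assumption, the whole disagreement between scenarios hinges on how large you think the two remaining placeholder terms are.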
Thank you very much for answering both questions! This was clear and helpful.