In fact, x-risks that eliminate human life but leave animal life unaffected would generally be of almost negligible value to prevent compared to preventing x-risks to animals and improving their welfare.
Eliminating human life would lock in a very narrow set of futures for animals—something similar to the status quo (minus factory farming) until the Earth becomes uninhabitable. What reason is there to think the difference between these futures, and those we could expect if humanity continues to exist, would be negligible?
As far as we know, humans are the only beings capable of moral reasoning, systematically pushing the world toward more valuable states, embarking on multi-generational plans, and so on. This gives very strong reasons for thinking the extinction of humanity would be of profound significance to the value of the future for non-humans.
Yes, this was noted in the sentence following the one you quote and in the paragraphs after this one. Note that if humans implemented extremely resilient interventions, human-focused x-risks might be of less value to address, but I broadly agree that humanity's moral personhood is a good reason to think x-risks impacting humans are valuable to work on. Reading through my conclusions again, I could have been a bit clearer on this.