Great post, Abraham!
You mention “preventing x-risks that pose specific threats to animals over those that only pose threats to humans”. Which examples of this did you have in mind? It’s hard for me to imagine a risk factor for the extinction of all nonhuman wildlife that wouldn’t also apply to humans, aside from perhaps an asteroid that humans could escape by relocating to another planet without choosing to bring wild animals along. That said, I haven’t spent much time thinking about non-AI x-risks, so the failure may well be in my imagination.
I think it’s also worth noting that the takeaway from this essay could be that x-risk to humans is primarily bad not because of effects on us/our descendants, but because of the wild animal suffering that would not be relieved in our absence. I’m not sure this would make much difference to the priorities of classical utilitarians, but it’s an important consideration if reducing suffering is one’s priority.
Hey!
Admittedly, I haven’t thought about this extensively. I think there are a variety of x-risks that might cause humans to go extinct but not animals, such as certain bio-risks. And there are x-risks that might threaten both humans and animals (a big enough asteroid?), which would fall into the group I describe. Another animal-specific risk might simply be continued human development massively decreasing animal populations, if animals have net-positive lives, though I think that is unlikely.
I haven’t given enough thought to the second question, but I’d guess that if you thought most of the value of the future was in animal lives rather than human lives, it should change something? Especially given how focused the longtermist community has been on preserving only human welfare.
Got it. So if I’m understanding correctly, the claim is not that many longtermists are necessarily neglecting x-risks that uniquely affect wild animals, just that they are disproportionately prioritizing risks that uniquely affect humans? That sounds fair, though like other commenters here, the crux that keeps me from fully endorsing this conclusion is that I think the total amount of artificial sentience could, in expectation, be larger than that of organic humans and wild animals combined. I agree with your assessment that this isn’t something many (non-suffering-focused) longtermists emphasize in common arguments, though; the focus is still on humans.
Yeah, I think it probably depends on your specific credence that artificial minds will dominate the future. I assume most people don’t place 100% credence on that (especially if they think x-risks are possible prior to the invention of self-replicating digital minds, since that possibility necessarily lowers your credence that artificial minds will dominate). If your credence in this claim is relatively low, which seems reasonable, it is really unclear to me that the expected value of working on human-focused x-risks is higher than that of working on animal-focused ones. I’m not aware of any attempt to compare the two, so I can’t say this with confidence. But it is clear that saying “there might be tons of digital minds” isn’t a strong enough claim on its own, without specific credences in specific numbers of digital minds.
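To make the dependence on credences concrete, here is a minimal toy sketch of that expected-value comparison. This is my own illustration rather than anything from the post: the two-scenario model, the function name, and all the numbers are hypothetical placeholders chosen only to show how the ranking can flip with one’s credence in digital-mind dominance.

```python
# Toy model (illustrative only, all magnitudes are arbitrary placeholders):
# with probability p_digital, digital minds dominate the future's welfare;
# otherwise organic humans and wild animals carry most of it.

def expected_value_at_stake(p_digital, v_digital, v_human, v_animal):
    """Expected welfare protected by each focus area under the crude
    two-scenario model described above."""
    human_focused = p_digital * v_digital + (1 - p_digital) * v_human
    animal_focused = (1 - p_digital) * v_animal
    return human_focused, animal_focused

# Arbitrary placeholder magnitudes, chosen only to show that the ranking
# flips as the credence in digital-mind dominance changes.
for p in (0.05, 0.5, 0.95):
    human_focused, animal_focused = expected_value_at_stake(p, 1000, 1, 100)
    print(f"p={p:.2f}: human-focused={human_focused:.1f}, "
          f"animal-focused={animal_focused:.1f}")
```

Under these made-up numbers, the animal-focused term dominates at low credences and the human-focused term dominates at high ones, which is just the point that the bare claim “there might be tons of digital minds” doesn’t settle the comparison without a specific credence attached.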