Hey, everyone. I don’t post here often and I’m not particularly knowledgeable about strong longtermism, but I’ve been thinking about it lately and wanted to share a thought I haven’t seen addressed. I’m not sure whether it’s reasonable or whether it has already been discussed elsewhere, and I’m not sure this is the right place, but here goes.
It seems to me that strong longtermism is extremely biased towards human beings.
In most catastrophic risks I can imagine (climate change, AI misalignment, and maybe even nuclear war* or pandemics**), it seems unlikely that Earth would become uninhabitable for a long period or that all life on Earth would be permanently disrupted.
Some of these events (e.g. climate change) could have significant short- to medium-term effects on all life on Earth, but in the long run (after several million years?), I’d argue the impact on non-human animals would likely be negligible, since evolution would eventually find its way. So if this is right, and you consider the very long term and value all lives (human and non-human) equally, wouldn’t strong longtermism imply not doing anything?
Although I’m definitely somewhat biased towards human beings myself and think existential risk is a very important cause, I wonder whether this critique makes sense.
*Regarding nuclear war, I guess it would depend on the duration and intensity of the radioactive fallout, which is not a subject I’m familiar with.
**From what I’ve learned over the last year and a half, it wouldn’t be easy for viruses (I’m not sure about bacteria) to infect many different species (COVID-19, for example, doesn’t seem to be a problem for other species).
If humanity survives, we have a decent shot at reducing suffering in nature and spreading utopia throughout the stars.
If humanity dies but not all life, and some other species eventually evolves intelligence and builds a civilization, I think they might also have a shot at doing the same thing. But this is more speculative and uncertain, and it seems to me a much worse bet than betting on humanity (flawed as we are).
Thanks for the comment. I really hadn’t considered colonizing the stars and bringing animals.
To be clear, I think it’s more likely that utopia would not look like having animals in the stars. Digital minds seem more likely, but I also suspect the future will simply be really weird, even weirder than digital minds.
Great points! I agree that the longtermist community needs to better internalize the anti-speciesist beliefs we claim to hold, and to explicitly include non-humans in our considerations.
On your specific argument that longtermist work doesn’t affect non-humans:
X-risks aren’t the sole focus of longtermism. IMO work in the S-risk space takes non-humans (including digital minds) much more seriously, to the extent that human welfare is mentioned much less often than non-human welfare.
I think X-risk work does affect non-humans. Linch’s comment mentions one possible way, though I think we need to weigh the upsides and downsides more carefully. I’d also add that a misaligned AI could be a much more powerful actor than any other Earth-originating intelligent species, and may have a large influence on non-humans even after human extinction.
I think we need to thoroughly investigate the influence of our longtermist interventions on non-humans. This topic is highly neglected relative to its importance.
I agree with Linch’s comment, but I want to mention a further point. Let us suppose that the well-being of all non-human animals between now and the death of the sun is the most important consideration. This idea can be justified on the grounds that there are many more animals than humans.
Let us suppose, furthermore, that the future of human civilization has no impact on the lives of animals in the far future. [I disagree with this assumption, since future humans might abolish wild animal suffering, or, in the bad case, take wild animals with them when they colonize the stars and thus extend wild animal suffering.] Nevertheless, let us assume that we cannot have any impact on animals in the far future.
In my opinion, the most logical response would be to focus on the things we can change (x-risks, animal suffering today, etc.) and to develop a stoic attitude towards the things we cannot change.