Great points! I agree that the longtermist community needs to better internalize the anti-speciesist beliefs we claim to hold, and to explicitly include non-humans in our considerations.
On your specific argument that longtermist work doesn’t affect non-humans:
X-risks aren’t the sole focus of longtermism. IMO work in the S-risk space takes non-humans (including digital minds) much more seriously, to the extent that human welfare is mentioned much less often than non-human welfare.
I think X-risk work does affect non-humans. Linch's comment mentions one possible way, though I think we need to weigh the upsides and downsides more carefully. I'd also add that a misaligned AI could be a much more powerful actor than other Earth-originating intelligent species, and may have a large influence on non-humans even after human extinction.
I think we need to thoroughly investigate the influence of our longtermist interventions on non-humans. This topic is highly neglected relative to its importance.