I agree with the argument. If you buy into the idea of evidential cooperation in large worlds (ECL, formerly multiverse-wide superrationality), then this argument may go through even if you don’t think alien values are particularly aligned with human values. Roughly, ECL is the idea that you should be nice to other value systems, because doing so (acausally, via evidential/timeless/functional decision theory) makes it more likely that agents with different values will also be nice toward ours.

Applied to the present argument: if we focus more on preventing the existential risks that would take resources away from other (potentially unaligned) value systems, this makes it more likely that, elsewhere in the universe, other agents will focus on preventing the existential risks that would take resources away from civilizations that happen to share our values.
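To make the acausal step concrete, here is a toy expected-value sketch in the spirit of EDT; the variables and payoffs are purely illustrative assumptions on my part, not part of the original argument. Let $C$ denote our choice to "cooperate" (prioritize the risks that would strip resources from other value systems) and $C'$ the analogous choice of a correlated agent elsewhere. Suppose cooperating costs us $c > 0$ in forgone focus on our own narrow priorities, while a cooperating distant agent yields us a benefit $b > 0$. Under EDT, our choice is evidence about theirs, so

$$
\mathrm{EU}(C) = P(C' \mid C)\, b - c, \qquad
\mathrm{EU}(\neg C) = P(C' \mid \neg C)\, b,
$$

and cooperating is favored whenever

$$
\bigl(P(C' \mid C) - P(C' \mid \neg C)\bigr)\, b > c,
$$

i.e. whenever the evidential correlation between our decision and theirs is strong enough relative to the cost–benefit ratio. On this toy picture, the argument only requires that such a correlation exists, not that alien values resemble ours.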