The aliens (including alien-descended AI) could also themselves be moral patients, and there are some other possibilities worth considering if this is true:
- We could help them.
- We could harm them.
- They could limit our expansion and take the space we would otherwise have taken, with or without catastrophic conflict. In that case, the future could still be very morally valuable (or disvaluable), but our future could be much smaller in value. Or we could be replaceable.
- We could limit their expansion. This could help or harm them, depending on the value of their existences, and could help or harm other aliens they would otherwise have encountered. It could also make us replaceable.
(By “we” and “our”, I mean to include our descendants, our technology, and technology descended from us, including autonomous AI or whatever results from it.)
It also seems worth mentioning grabby alien models, which, from my understanding, are consistent with a high probability of eventually encountering aliens if we survive.
Thanks for highlighting this, Michael, and for spelling out the different possibilities. In particular, it seems that if aliens are present and would expand into the same space we would have expanded into had we not gone extinct, then, for the totalist, the value of x-risk mitigation is reduced to the extent that the aliens have values similar to ours. If we are replaceable by aliens, then not much is lost if we do go extinct, since the aliens would still produce the large, valuable future that we would otherwise have produced.
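As a very rough sketch of this point (treating p, V, and V_alien as illustrative placeholders rather than anything estimated here): suppose our future would have value V, and that if we go extinct, aliens fill the space we would have taken with probability 1 − p, producing a future of value V_alien. Then, for the totalist, the value of avoiding extinction is roughly

$$\Delta \;=\; V - (1-p)\,V_{\text{alien}} \;=\; p\,V + (1-p)\,(V - V_{\text{alien}}),$$

which collapses to about p·V when V_alien ≈ V. In other words, on this toy model, x-risk mitigation matters roughly in proportion to the chance that the space would otherwise go unused, or be used much less valuably.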
I have to admit, though, that it is personally uncomfortable for my valuation of x-risk mitigation efforts and cause prioritisation to depend partly on something as abstract and unknowable as the existence of aliens.