I think literal extinction is unlikely even conditional on misaligned AI takeover due to:
- The potential for the AI to be at least a tiny bit "kind" (just as humans probably wouldn't kill all aliens).[1]
- Decision theory/trade reasons
This is discussed in more detail here and here.
Insofar as humans and/or aliens care about nature, similar arguments apply there too, though this is mostly beside the point: if humans survive and have (even a tiny bit of) resources, they can easily preserve some nature.
I find it annoying how confident this article is without really bothering to engage with the relevant arguments here.
(Same goes for many other posts asserting that AIs will disassemble humans for their atoms.)
(This comment echoes Owen's to some extent.)
[1] This includes the potential for the AI to have preferences that are morally valuable from a typical human perspective.
(cross-posting my reply to your cross-posted comment)
I'm not arguing about p(total human extinction | superintelligence), but about p(nature survives | total human extinction from superintelligence); this is a conditional probability I see people getting very wrong sometimes.
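For concreteness, here is the distinction spelled out in probability notation (a minimal sketch; the symbols S, E, and N are my labels, not the commenter's):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Illustrative notation (not from the original thread):
% S = superintelligence takeover, E = total human extinction, N = nature wiped out

% The quantity the parent comment debates vs. the quantity this post is about:
\[
  \underbrace{P(E \mid S)}_{\text{not argued here}}
  \qquad \text{vs.} \qquad
  \underbrace{P(N \mid E,\, S)}_{\text{the post's claim}}
\]

% Chain rule: the joint outcome factors into the two separate questions,
% so one can be confident about the second factor without taking a stand on the first.
\[
  P(E \cap N \mid S) \;=\; P(E \mid S)\, P(N \mid E,\, S)
\]
\end{document}
```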
It's not implausible to me that we survive for decision-theoretic reasons; this seems possible, though it is not my default expectation (I mostly expect "Decision theory does not imply we get nice things", unless we manually win a decent chunk more timelines than I expect).
My confidence is in the claim "if AI wipes out humans, it will wipe out nature". I don't engage with counterarguments to that separate claim, as it is beyond the scope of this post and I don't have much to add over existing literature like the other posts you linked.