I have two thoughts here.
First, I’m not sure I like Bostrom’s definition of x-risk. It seems to dismiss the notion of aliens: you could imagine a scenario where a ton of alien civilizations pop up independently, making the value of the universe look pretty uniform regardless of what we do. Second, I think how binary our universe turns out to be will depend on the AI we make and/or our expansion philosophy.
AI 1: Flies around the universe dropping single-celled organisms on every livable planet.
AI 2: Flies around the universe setting up colonies that suck up all the energy in the area and convert it into simulations/digital people.
If AI 2 expands through the universe, then the valence of sentience in our lightcone would seemingly be much more correlated than if AI 1 expands. So the AI 1 scenario would look more uniform and the AI 2 scenario would look more binary.