I'm not sure if you consider LessWrong serious literature, but cryonically preserving all humans was mentioned here. I think nearly everyone would consider this doom, but there are people defending extinction (which I think is even worse) as not doom, so I included them all for completeness.
Yes, one could spend many hours thinking through these questions (as I have), but even if one doesn't have that time, I think it's useful to get an idea of how people are defining doom, because a lot of people use the term, and I suspect there is a wide variety of definitions (and indeed, preliminary results do show a large range).
I'm happy to respond to specific feedback about which questions are confused and why.
=Confusion in What mildest scenario do you consider doom?=
My probability distribution looks like what you call the MIRI Torch, and what I call the MIRI Logo: Scenarios 3 to 9 aren't well described in the literature because they are not in a stable equilibrium. In the real world, once you are powerless, worthless, and an obstacle to those in power, you just end up dead.
This question was not about probability, but about what one considers doom. But let's talk probability. I think Yudkowsky and Soares believe that one or more of scenarios 3-5 has decent likelihood because of acausal trade, though I'm not finding the source now. As someone else said, "Killing all humans is defecting. Preserving humans is a relatively cheap signal to any other ASI that you will cooperate." Christiano believes a stronger version: that most humans will survive a takeover (unfrozen) because AGI has pico-pseudo kindness. Though humans did cause the extinction of close competitors, they do exhibit pico-pseudo kindness toward many other species, despite those species being a (small) obstacle.
=Confusion in Minimum P(doom) that is unacceptable to develop AGI?=
For non-extreme values, the concrete estimate and most of the considerations you mention are irrelevant. The question is morally isomorphic to "What percentage of the world's population am I willing to kill in expectation?" Answers such as "10^6 humans" and "10^9 humans" are both monstrous, even though your poll would rate them very differently.
These possible answers don't become moral even if you think that it's really positive that humans don't have to work any longer. You aren't allowed to do something worse than the Holocaust in expectation, even if you really, really like space travel, or immortality, or ending factory farming, or whatever. You aren't allowed to unilaterally decide to roll the dice on omnicide even if you personally believe that global warming is an existential risk, or that it would be good to fill the universe with machines of your creation.
Since your definition of doom equates to extinction, a probability of doom of 0.01% gives ~10^6 expected deaths, which you call monstrous. Solving factory farming does not sway you, but what about saving the billions of human lives that would be lost without AGI in the next century (even without creating immortality, just solving poverty)? Or what about AGI preventing other existential risks, like an engineered pandemic? Do you think that non-AI x-risk is <0.01% in the next century? Ever? Or maybe you are just objecting to the unilateral part; in that case, is it ok if the UN votes to create AGI even if it has a 33% chance of doom, as one paper said could be justified by economic growth?
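To make the arithmetic behind these figures explicit, here is a quick sketch, assuming a world population of roughly 8 billion (the population figure is my assumption, not part of the poll):

$$
\mathbb{E}[\text{deaths}] = P(\text{doom}) \times N_{\text{world}}, \qquad
10^{-4} \times 8\times10^{9} \approx 10^{6}, \qquad
0.33 \times 8\times10^{9} \approx 2.6\times10^{9}.
$$

So the 0.01% threshold corresponds to roughly a million expected deaths, and the 33% figure to over two billion.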