Meta: I count 25 question marks in this “quick” poll, and a lot of the questions appear to be seriously confused. A proper response here would take many hours.
Take your scenario number 5, for instance. Is there any serious literature examining this? Are there any reasons why anyone would assign that scenario >epsilon probability? Do any decisions hinge on this?
I’m not sure if you consider LessWrong serious literature, but cryonically preserving all humans was mentioned here. I think nearly everyone would consider this doom, but there are people defending extinction (which I think is even worse) as not doom, so I included them all for completeness.
Yes, one could take many hours thinking through these questions (as I have), but even if one doesn’t have that time, I think it’s useful to get an idea of how people are defining doom, because a lot of people use the term, and I suspect there is a wide variety of definitions (and indeed, preliminary results do show a large range).
I’m happy to respond to specific feedback about which questions are confused and why.
=Confusion in What mildest scenario do you consider doom?=
My probability distribution looks like what you call the MIRI Torch, and what I call the MIRI Logo: Scenarios 3 to 9 aren’t well described in the literature because they are not in a stable equilibrium. In the real world, once you are powerless, worthless and an obstacle to those in power, you just end up dead.
This question was not about probability, but about what one considers doom. But let’s talk probability. I think Yudkowsky and Soares believe that one or more of scenarios 3-5 has decent likelihood because of acausal trade, though I can’t find the reference now. As someone else said, “Killing all humans is defecting. Preserving humans is a relatively cheap signal to any other ASI that you will cooperate.” Christiano believes a stronger version: that most humans will survive a takeover (unfrozen) because AGI has pico-pseudo kindness. Though humans did cause the extinction of close competitors, we exhibit pico-pseudo kindness toward many other species, despite their being a (small) obstacle.
=Confusion in Minimum P(doom) that is unacceptable to develop AGI?=
For non-extreme values, the concrete estimate and most of the considerations you mention are irrelevant. The question is morally isomorphic to “What percentage of the world’s population am I willing to kill in expectation?”. Answers such as “10^6 humans” and “10^9 humans” are both monstrous, even though your poll would rate them very differently.
These possible answers don’t become moral even if you think it’s really positive that humans no longer have to work. You aren’t allowed to do something worse than the Holocaust in expectation, even if you really, really like space travel, or immortality, or ending factory farming, or whatever. You aren’t allowed to unilaterally decide to roll the dice on omnicide even if you personally believe that global warming is an existential risk, or that it would be good to fill the universe with machines of your creation.
Since for you doom equates to extinction, a probability of doom of 0.01% gives ~10^6 expected deaths, which you call monstrous. Solving factory farming does not sway you, but what about saving the billions of human lives that would be lost without AGI over the next century (even without creating immortality, just by solving poverty)? Or what about AGI preventing other existential risks, like an engineered pandemic? Do you think that non-AI X-risk is <0.01% in the next century? Ever? Or maybe you are just objecting to the unilateral part: is it then OK if the UN votes to create AGI even when it has a 33% chance of doom, as one paper argued could be justified by economic growth?
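For concreteness, here is the expected-death arithmetic behind the 10^6 and 10^9 figures above. It is a rough sketch; the ~8 billion world population is an assumption added here for illustration:

$$\mathbb{E}[\text{deaths}] = P(\text{doom}) \times N_{\text{pop}}$$
$$P(\text{doom}) = 0.01\% \;\Rightarrow\; 10^{-4} \times 8\times10^{9} = 8\times10^{5} \approx 10^{6}$$
$$P(\text{doom}) = 12.5\% \;\Rightarrow\; 0.125 \times 8\times10^{9} = 10^{9}$$

So the two answers you call equally monstrous correspond to P(doom) thresholds of roughly 0.01% and 12.5%, which is why the poll rates them very differently.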