=Confusion in "What mildest scenario do you consider doom?"=
My probability distribution looks like what you call the MIRI Torch and what I call the MIRI Logo: Scenarios 3 to 9 aren't well described in the literature because they are not stable equilibria. In the real world, once you are powerless, worthless, and an obstacle to those in power, you just end up dead.
This question was not about probability but about what one considers doom. Still, let's talk probability. I think Yudkowsky and Soares believe that one or more of scenarios 3-5 has decent likelihood because of acausal trade, though I can't find the reference now. As someone else put it, "Killing all humans is defecting. Preserving humans is a relatively cheap signal to any other ASI that you will cooperate." Christiano believes a stronger version: most humans will survive (unfrozen) a takeover because AGI has pico-pseudo kindness. Though humans did drive close competitors extinct, they exhibit pico-pseudo kindness toward many other species, even when those species are a (small) obstacle.
=Confusion in "Minimum P(doom) that is unacceptable to develop AGI?"=
For non-extreme values, the concrete estimate and most of the considerations you mention are irrelevant. The question is morally isomorphic to "What percentage of the world's population am I willing to kill in expectation?". Answers such as "10^6 humans" and "10^9 humans" are both monstrous, even though your poll would rate them very differently.
Since your doom equates to extinction, a probability of doom of 0.01% gives ~10^6 expected deaths, which you call monstrous. Solving factory farming does not sway you, but what about saving the billions of human lives who would die without AGI in the next century (even without creating immortality, just by solving poverty)? Or what about AGI preventing other existential risks, like an engineered pandemic? Do you think that non-AI X-risk is <0.01% in the next century? Ever? Or maybe you are only objecting to the unilateral part: in that case, is it ok if the UN votes to create AGI even if it has a 33% chance of doom, as one paper argued could be justified by economic growth?
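As a rough sanity check on that figure (assuming a world population of about 8 billion, which is my assumption, not something stated above):

\[
0.01\% \times 8 \times 10^{9} \;=\; 10^{-4} \times 8 \times 10^{9} \;=\; 8 \times 10^{5} \;\approx\; 10^{6} \text{ expected deaths.}
\]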