Minimum P(doom) at which it is unacceptable to develop AGI
80%: Even if humanity were disempowered, I think we would likely get help from the AGI to quickly solve problems like poverty, factory farming, and aging, and I do think that is valuable. If humanity were disempowered, there would still be some value in expectation from the AGI settling the universe. I am worried that a pause before AGI could become permanent (until population and economic collapse due to fertility collapse, after which it likely doesn’t matter), which could prevent the settlement of the universe with sentient beings. However, if we can pause at AGI, then even if that pause becomes permanent, we could either make human brain emulations or make the AGI sentient, so the universe could still be settled with sentient beings even if biological humans could not do it (though the value might be much lower than with artificial superintelligence). I am worried about background existential risk, but once we are at the point of AGI, the AI risk per year becomes large, so pausing is worth it despite the possibility that unpausing later is riskier, depending on how we do it. I am somewhat optimistic that a pause would reduce the risk, but I am still compelled by the outside view that a more intelligent species would eventually take control. So overall, I think it is acceptable to create AGI at a relatively high P(doom) (mostly non-Draconian disempowerment) if we were otherwise going to continue to superintelligence, but we should then pause at AGI to try to reduce P(doom) (and we should also be more cautious in the run-up to AGI). Taking this pause into account, P(doom) would be lower, but I am not sure how to reflect that in my answer.