Option A: 10% chance utopia within our lifetime (if alignment turns out to be easy) and 90% human extinction
Are you simplifying here, or do you actually believe that “utopia in our lifetime” or “extinction” are the only two possible outcomes given AGI? Do you assign a 0% chance that we survive AGI, but don’t have a utopia in the next 80 years?
What if AGI stalls out at human level, or is incredibly expensive, or is buggy and unreliable like humans are? What if the technology required for utopia turns out to be ridiculously hard even for AGI, or substantially bottlenecked by available resources? What if technology alone can’t create a utopia, and the extra tech just exacerbates existing conflicts? What if AGI access is restricted to world leaders, who use it for their own purposes?
What if we build an unaligned AGI, but catch it early and manage to defeat it in battle? What if early, shitty AGI screws up in a way that causes a worldwide ban on further AGI development? What if we build an AGI, but we keep it confined to a box and can only get limited functionality out of it? What if we build an aligned AGI, but people hate it so much that it voluntarily shuts off? What if the AGI that gets built is aligned to the values of people with awful views, like religious fundamentalists? What if AGI wants nothing to do with us and flees the galaxy? What if [insert X thing I didn’t think of here]?
IMO, extinction and utopia are both unlikely outcomes. The bulk of the probability lies somewhere in the middle.
I was indeed simplifying, and e.g. probably should have said “global catastrophe” instead of “human extinction” to cover cases like permanent totalitarian regimes. I think some of the scenarios you mention could happen, but a bunch of them seem pretty unlikely to me, and I disagree with your conclusion that “The bulk of the probability lies somewhere in the middle”. I might be up for discussing more specifics, but I don’t get the sense that disagreement here is a crux for either of us, so I’m not sure how much value there would be in continuing down this thread.
I would agree that “utopia in our lifetime” or “extinction” seems like a false dichotomy. What makes you say that you predict the bulk of the probability lies somewhere in the middle?