Hey Greg! I personally appreciate that you and others are thinking hard about the viability of giving us more time to solve the challenges that I expect we’ll encounter as we transition to a world with powerful AI systems. Due to capacity constraints, I won’t be able to discuss the pros and cons of pausing right now. But as a brief sketch of my current personal view: I agree it’d be really useful to have more time to solve the challenges associated with navigating the transition to a world with AGI, all else equal. However, I’m relatively more excited than you about other strategies to reduce the risks of AGI, because I’m worried about the tractability of a (really effective) pause. I’d also guess my P(doom) is lower than yours.
Hi Niel, what I’d like to see is an argument for the tractability of successfully “navigating the transition to a world with AGI” without a global catastrophe (or extinction) (i.e. an explanation for why your p(doom|AGI) is lower). I think this is much less tractable than getting a (really effective) Pause! (Even if a Pause itself is somewhat unlikely at this point.)
I think most people in EA have relatively low (but still macroscopic) p(doom)s (e.g. 1-20%), and hold the view that "by default, everything turns out fine". And I don't think this has ever been sufficiently justified. The common view is that alignment will just somehow be solved well enough to keep us alive, and maybe even to thrive (if we just keep directing more talent and funding to research). But the extrapolation from such imperfect alignment to its ultimate implications (e.g. gradual disempowerment → existential catastrophe) never happens.