What are the key cruxes between people who think AGI is about to kill us all, and those who don’t? I’m at the stage where I can read something like this and think “ok so we’re all going to die”, then follow it up with this and be like “ah great we’re all fine then”. I don’t yet have the expertise to critically evaluate the arguments in any depth. Has anyone written something that explains where people begin to diverge, and why, in a reasonably accessible way?
80k’s AI risk article has a section titled “What do we think are the best arguments against this problem being pressing?”