I realised that there was a missing step in the reasoning of the first paragraph (relating to the title), so I've edited it (and split it into two). Previously it read:
Artificial General Intelligence (AGI) poses an existential risk (x-risk) to all known sentient life. Given the stakes involved (the whole world/future light cone), we should regard timelines of ≥10% probability of AGI in ≤10 years as crunch time and, given that there is already an increasingly broad consensus around this [1], be treating AGI x-risk as an urgent immediate priority (not something to mull over leisurely as part of a longtermist agenda).