Interesting analysis, thanks. I’m a bit wary of it leading to a false sense of security around AGI though. Your “reasons not to believe”, such as -
- Biological causes of extinction may differ from AGI-related causes.
- Intelligent species, including humans, may be qualitatively different from superhuman AGI.
- Since AGI is agential, it is likely more damaging than accidental risks.
- are overpowering imo. AGI would be unprecedented in that it would threaten the entirety of carbon-based life (e.g. a superintelligent AI might remove the oxygen from the atmosphere to prevent the corrosion of its machinery, or star-lift the Sun for energy).
[Separating out this paragraph into a new comment as I’m guessing it’s what led to the downvotes, and I’d quite like the point of the parent paragraph to stand alone. Not sure if anyone will see this now though.]
I think it’s imperative to get the leaders of AGI companies to realise that they are in a suicide race (and that AGI will likely kill them too). The default outcome of AGI is doom. At an extinction risk of 1%, it can seem reasonable to pull the trigger on AGI for a 99% chance of utopia, even though that is still ~80M lives lost in expectation. This reasoning is totally wrong-headed and is arguably contributing massively to current x-risk.
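To make the "80M lives in expectation" figure above concrete, here is a back-of-the-envelope sketch; the 8 billion world population is an assumption on my part, and the 1% risk level is the hypothetical from the paragraph above, not anyone's actual estimate.

```python
# Expected-value arithmetic for the hypothetical 1% extinction risk.
world_population = 8_000_000_000  # assumed round figure for world population
p_extinction = 0.01               # the hypothetical 1% risk level

# Expected deaths = probability of extinction × number of people at stake.
expected_deaths = p_extinction * world_population
print(f"{expected_deaths:,.0f} lives lost in expectation")  # 80,000,000
```

Even a "small" probability, multiplied by everyone alive, yields an enormous expected loss, which is the point of the objection.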