This indeed matches Ord's views. He says on 80k:

Basically, you can look at my 10% [estimate of the existential risk from AI this century] as, there’s about a 50% chance that we create something that’s more intelligent than humanity this century. And then there’s only an 80% chance that we manage to survive that transition, being in charge of our future.
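To spell out the arithmetic implied by the quote (a quick sketch; the decomposition labels are mine, not Ord's): a roughly 50% chance of smarter-than-human AI this century, combined with an 80% chance of surviving that transition given that it happens, gives

$$P(\text{AI existential catastrophe}) \approx 0.5 \times (1 - 0.8) = 0.1,$$

which matches the 10% headline figure.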
I think he also gives that 50% estimate in The Precipice, but I can’t remember for sure.