Of course, if we do somehow survive all this, people will accuse me and others like me of crying wolf. But 1-in-10 outcomes aren't that uncommon! I'm willing to take that reputation hit, justified or not.
I think a big problem with AI x-risk discourse in general is that there are a lot of innumerate people around who just don't understand what probability means (or at least act like they don't, treating every statement as a confident prediction even when it's appropriately hedged).