We can quibble over the numbers, but I think the point here is basically right, and if not right for AI, then probably right for biorisk or some other risks.
This can be taken further: if your main priority is people alive today (or yourself), then near-term catastrophic risks that aren't x-risks become just as important. For example, while it may be improbable for a pandemic to kill everyone, I think it's much more probable that one kills, say, at least 90% of people. On the other hand, I'm not sure the jump in probability from AI killing everyone to AI killing at least 90% of people is that big.
Then again, AI can be misused in much worse ways than other technologies. So maybe the chance that it doesn't kill me but still, for example, lets a totalitarian government enslave me, is pretty big?