Your arguments seem to basically be “I can’t think of how this could kill everyone, therefore it’s extremely unlikely.” You should assign more probability to the hypothesis that it could happen in a way you didn’t think of. For example, did you know about mirror biology?
Currently, I think that the risk of extinction over the next hundred years is quite low (less than 0.01%) if AGI is not developed.
Your thesis seems to be “non-AI x-risks are very unlikely”, but your title says all x-risks are very unlikely. Those are two very different things, since AI is the biggest x-risk.
You’re right. I should have mentioned mirror biology. That’s definitely the greatest biorisk I know of. That said, it still seems to be a low risk, considering that a significant number of people are actively working on the problem and their concerns are being taken seriously.
Also, I am accounting for ways that extinction could occur that I haven’t thought of: that’s what I meant by unknown risks, which I said could plausibly be as high as 10%. So my thesis perhaps should be adjusted to “the known non-AI risks that are commonly discussed seem really unlikely, although the unknown risks could plausibly be somewhat high.” That said, among the known existential risks, I don’t think I should weight a risk particularly heavily if every scenario I hear for it almost certainly wouldn’t kill everyone on Earth.
I have also changed the title to account for the fact that I’m not referring to AI.
That said, it still seems to be a low risk, considering that a significant number of people are actively working on the problem and their concerns are being taken seriously.
Have they solved the problem yet? How likely are they to solve the problem? How confident are you about that? I don’t think you can answer those questions in a way that justifies a 0.01% probability of extinction.