Good job on putting this together
If I could make one suggestion, I think the questions about "how" a catastrophe would occur (i.e., nanotech, viruses, etc.) deserve their own section, rather than being lumped in under "miscellaneous". This is a key part of the argument for AI being an x-risk, and imo one of the most underdeveloped parts.
I agree that this would be interesting to explore, but heavily disagree that having a detailed answer to that influences the prediction of x-risk substantially.
Why do you disagree?
Fair point. I personally agree that this part has tended to be underdeveloped.