I’m focused, here, on a very specific type of worry. There are lots of other ways to be worried about AI—and even, about existential catastrophes resulting from AI.
Can you talk about your estimate of the overall AI-related x-risk (see here for an attempt at a comprehensive list), as well as total x-risk from all sources? (If your overall AI-related x-risk is significantly higher than 5%, what do you think are the other main sources?) I think it would be a good idea for anyone discussing a specific type of x-risk to also give their more general estimates, for a few reasons:
It’s useful for prioritizing between different types of x-risk.
Quantification of specific risks can be sensitive to how one defines categories. For example, one might push some kinds of risks out of “existential risk from misaligned AI” and into “AI-related x-risk in general” by defining the former narrowly, thereby reducing one’s estimate of it. This would be less problematic (e.g., less likely to give the reader a false sense of security) if one also discussed more general risk estimates.
Different people may be more or less optimistic in general, making it hard to compare absolute risk estimates between individuals. Relative risk levels suffer less from this problem.