Executive summary: The term “p(doom)” is ambiguous, rhetorically ineffective for communicating AI risks, and has become a polarizing ingroup signal that impedes thoughtful discussion of mitigating existential threats from advanced AI.
Key points:
1. “p(doom)” conflates multiple distinct probabilities, such as short-term vs. long-term AI catastrophe, and risk conditional on AGI vs. conditional on superintelligence. This ambiguity fosters miscommunication.
2. Explicit probabilities run into motivated skepticism and innumeracy; framing AI risk discussions around numbers backfires rhetorically.
3. “p(doom)” has become an ingroup shibboleth that outsiders can easily ridicule, entrenching polarization around AI risk.
4. People should stop using the term and instead discuss specific risks, citing probabilities when warranted but framing their rhetoric in plain language.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.