Richard—this all sounds quite reasonable and prudent, and clearly argued.
I guess a key psychological issue here is that we have a few decades of research showing that people tend to either exaggerate or entirely discount quite low-probability events; we’re quite bad at thinking rationally about probabilities in the range of 0.1%–5% (your best guess for the likelihood of AI extinction). So, if we want people to take AI X-risks seriously, there may be public relations incentives to push our guesses slightly higher. Depending on one’s model of public outreach, that could be seen as deceptively manipulative, or as a helpful and honorable ‘nudge’ to overcome a common cognitive bias.