In the successful cases, events occurred that made the technology risk seem plausible. [...]
This might suggest that we should be pessimistic about catalyzing self-governance efforts via abstract arguments and faraway failure modes alone. We could do more to connect “near-term” issues like data privacy and algorithmic bias with “long-term” concerns. We could try to preemptively identify “fire alarms” for TAI, and be ready to take advantage of those warning signals if they occur.
This seems like a reasonable takeaway from these case studies. But the case studies seem mainly to demonstrate that the salience and perceived plausibility of a technological risk matter, not that the best method for increasing that salience and perceived plausibility is to connect long-term/extreme risks to near-term/smaller-scale concerns or to identify “fire alarms”.
Perhaps making people think and care more about the long-term/extreme risks themselves (or the long-term future more broadly, or extinction, or suffering risks, or whatever) could also do a decent job of increasing the salience and perceived plausibility of those risks. I’d guess that’s harder, but that disadvantage might be made up for by the fact that any self-governance efforts that do occur would then be better targeted at the long-term/extreme risks specifically.
Also, the nuclear and bio case studies involved technological risks that are more extreme, or at least more violent and dramatic, than issues like data privacy or algorithmic bias. So the case studies might actually push against a focus on those sorts of issues, and in favour of a focus on the more extreme/violent/dramatic aspects of AI risk.
Perhaps some parts of what you read that didn’t make it into this post push more clearly in favour of connecting “long-term” AI concerns to “near-term” AI concerns?