In the successful cases, events occurred that made the technology risk seem plausible. [...]
This might suggest that we should be pessimistic about catalyzing self-governance efforts via abstract arguments and faraway failure modes only. We could do more to connect “near-term” issues like data privacy and algorithmic bias with “long-term” concerns. We could try to preemptively identify “fire alarms” for TAI, and be ready to take advantage of these warning signals if they occur.
This seems like a reasonable takeaway from these case studies. But the case studies seem mainly to demonstrate that the salience and perceived plausibility of a technological risk matter, rather than that the best method for increasing that salience and perceived plausibility is to connect long-term/extreme risks to near-term/smaller-scale concerns or to identify “fire alarms”?
Perhaps making people think and care more about the long-term/extreme risks themselves (or about the long-term future more broadly, or extinction, or suffering risks, or whatever) could also do a decent job of increasing the salience and perceived plausibility of the long-term/extreme risks? I’d guess that’s harder, but maybe that disadvantage is made up for by the fact that any self-governance efforts that do occur would then be better targeted at the long-term/extreme risks specifically?
Also, the nuclear and bio case studies involved technological risks that are more extreme, or at least more violent and dramatic, than issues like data privacy or algorithmic bias. So it actually seems like the case studies might push against a focus on those sorts of issues, and in favour of a focus on the more extreme/violent/dramatic aspects of AI risk?
Maybe some parts of what you read that didn’t make it into this post seem to more clearly push in favour of connecting “long-term” AI concerns to “near-term” AI concerns?