Thanks for writing this, Gideon.

I think the risks around securitisation are real and underappreciated, so I'm grateful you've written about them. As I've written about, I think the securitisation of the internet after 9/11 impeded proper privacy regulation in the US and prompted Google towards an explicitly pro-profit business model (although this was not a case of macrosecuritisation failure). This is argued for at greater length here: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4641526
Some smaller points:
You write:

"Secondly, its clear that epistemic expert communities, which the AI Safety community could clearly be considered [...] but ultimately the social construction of the issue as one of security is what is decisive, and this is done by the securitising speech act."
I feel like this point was not fully justified. It seems likely to me that, whilst rhetoric around AGI could contribute to securitisation, other military and economic incentives could be as (or more) influential.
What do you think?