I don’t think the technical context is the only, or even the most important, context where AI risk mitigation can happen. My interpretation of Yudkowsky’s gloom view is that it is mainly a sociological problem (i.e., someone else will do the cool, super-profitable thing if the first company or research group hesitates) rather than a fundamentally technical one (i.e., it would be impossible to figure out how to do it safely even if everyone involved moved very slowly).
Thanks, that’s a really good point. Hmm, I might still believe that on the AI governance side, too, you’ll want high-bandwidth discussions tailored to somewhat niche audiences, such as specific government departments, think tanks, international organizations like the EU and the UN, and academic groups. I imagine each will find different framings convincing and others very off-putting, and that you find this out more quickly by working with them directly than by doing A/B testing on a more generic audience.