I’m not that aware of what the non-technical AI Safety interventions are, aside from semi-related things like working on AI strategy and policy (e.g., FHI’s GovAI, the Partnership on AI) and advocating against shorter-term AI risks (e.g., the Future of Life Institute’s work on Lethal Autonomous Weapons Systems).
Just wanted to quickly flag: I think the more popular interpretation of the term AI safety points to a wide landscape that includes AI policy/strategy as well as technical AI safety (which is also often referred to by the term AI alignment).
I thought the term AI safety was shorthand for technical AI safety and didn’t really include AI policy/strategy. I personally use the term AI risk (or sometimes AI x-risk) to group together work on AI safety and AI strategy/policy/governance, i.e., work on AI risk = work on AI safety + work on AI strategy/policy.
I was aware though of AI safety being referred to as AI alignment.
Thanks for clarifying! I wasn’t aware.