Sorry for the miscommunication. We are not intending to do technical AI safety work. We are going to focus on non-technical work for the time being.
I am in the process of learning ML but am very far from being able to make contributions in that area. This is mostly so that I have a better understanding of the area and can better communicate with people with more technical expertise.
Thanks for the reply, Kat!
However, I’m still a bit confused. When you say “We are not intending to do technical AI safety work. We are going to focus on non-technical work for the time being”, do you mean you will only be researching high-leverage, non-technical AI safety interventions? Or do you mean that the research work you’re doing is non-technical?
I understand that the research work you’re doing is non-technical (in that you probably aren’t going to directly use any ML to do your research), but I’m not that aware of what the non-technical AI safety interventions are, aside from semi-related things like working on AI strategy and policy (e.g. FHI’s GovAI, The Partnership on AI) and advocating against shorter-term AI risks (e.g. the Future of Life Institute’s work on Lethal Autonomous Weapons Systems). Could you elaborate on what you mean when you say you will focus on non-technical AI safety work for the time being? Maybe you could give some examples of possible non-technical AI safety interventions? Thanks!
For sure. One example we’ll be researching is scaling up the provision of personal assistants (PAs) for high-impact people in AI safety. It seems like one of the things bottlenecking the movement is talent. Getting more talent is one solution, and one we should definitely be working on. Another is helping the talent we already have be more productive. Setting up an organization that specializes in hiring PAs and pairing them with top AI safety experts seems like a potentially great way to boost the impact of already high-impact people.
Great, I think that’s actually a good idea! I’m looking forward to seeing other good ideas like that come out of Nonlinear’s research.
“I’m not that aware of what the non-technical AI safety interventions are, aside from semi-related things like working on AI strategy and policy (e.g. FHI’s GovAI, The Partnership on AI) and advocating against shorter-term AI risks (e.g. the Future of Life Institute’s work on Lethal Autonomous Weapons Systems).”

Just wanted to quickly flag: I think the more popular interpretation of the term “AI safety” points to a wide landscape that includes AI policy/strategy as well as technical AI safety (which is also often referred to as “AI alignment”).
Thanks for clarifying! I wasn’t aware.
I thought the term AI safety was shorthand for technical AI safety, and didn’t really include AI policy/strategy. I personally use the term AI risk (or sometimes AI x-risk) to group together work on AI safety and AI strategy/policy/governance, i.e. work on AI risk = work on AI safety or AI strategy/policy.
I was aware, though, of AI safety being referred to as AI alignment.

Good and important points!