Hi @JBPDavies, thank you for your questions and happy to comment on examples from my home country.
Our current focus in Europe is on two priorities: i) strengthening the AI Act, and ii) building support among European countries for a treaty on autonomous weapons systems. For the first priority, we work mainly with the EU institutions. For the second, our focus is on Member State capitals (due to the limited EU influence over security issues, as you rightly point out).
We regularly evaluate our choice of projects, and are currently conducting an evaluation of our ‘policy platform’, which we hope to share on our website later this year. Nevertheless, we currently focus on the AI Act because it is the first piece of legislation on AI by a major regulator anywhere, and because it could set regulators around the world on a path that shapes how we deal with increasingly powerful AI systems.
Our focus on autonomous weapons is partly driven by the Asilomar principles, which FLI helped coordinate and in which the principle of avoiding an AI arms race (#18) received the most support from attending experts working on beneficial AI. This line of effort also helps us understand global coordination problems, because autonomous weapons may be an early example of AI that we would want to regulate.
In reply to your clarifying questions, and as I mentioned earlier, our advocacy on autonomous weapons is mainly targeted at the Member State level (please do note that this will not be the main focus of the job for which we are currently advertising!). We are a small team, but we do build coalitions with other civil society organisations, businesses and academics where we can. A recent example of this is the open letter we coordinated among German AI researchers in which they called for a more progressive stance on this issue in the German coalition agreement (https://autonomewaffen.org/FAZ-Anzeige/).
Hope this is helpful, but please let me know if you have further questions or if I am unclear.