Amazing post. Really good clear write-up for the lay reader new to AI. I feel confident to share this.
One point where I worry that readers could take away the wrong impression is with the line that “we’re not yet at the point of knowing what policies would be useful to implement”.
I agree with you that “we are in the early stages of figuring out the shape of this problem [AI governance] and the most effective ways to tackle it”, but I worry that saying we don’t yet know what policies to advocate for (a fairly common trope among non-policy AI people in EA) gives a number of misleading impressions. It implies that AI policy advocacy work has no value at present, and that people working on AI policy don’t know what they are doing and shouldn’t currently be working in this area. I think this is wrong. Governments are putting AI policies in place right now, and if we refuse to engage we risk missing opportunities to make things better. There are also clear cases where we know what better policy and worse policy look like.
Let’s take one example directly from your own post. Your article says: “If we could successfully ‘sandbox’ an advanced AI — that is, contain it to a training environment with no access to the real world until we were very confident it wouldn’t do harm — that would help our efforts to mitigate AI risks tremendously.” That is a policy! Right now the US government is producing non-binding guidance for AI companies on how to manage the risks from AI. I am involved in some ongoing work to encourage this guidance to say that AI systems that (a) can self-improve and (b) present risks if they go wrong should be sandboxed and tested. I don’t at all think it is your intention to imply that EA should miss a policy opportunity to get AI companies to consider sandboxing (a thing you strongly agree with). But I worry that some non-policy people I talk to in the EA community seem to hold views that approximate this level of dismissal for all current AI policy advocacy work (e.g. see views of funders here and here).
A note on other AI policies: I suggest a few things to focus on at point 3 here. There is the x-risk database of 250+ policy proposals here. There is work on policy ideas in Future Proof here. Etc.