After talking and working for some time with non-EA organisations in the AI policy space, I believe we need to give more credence to the here-and-now of AI safety policy as well, both to get the attention of policymakers and to get our foot in the door. That also gives us space to collaborate with think tanks and organisations outside the x-risk space that are proactive and committed to AI policy. Right now, a lot of those people see x-risks as fringe and radical (and these are people who are supposed to be on our side).
Governments tend to move slowly, with due process, and in small increments (think, “We are going to first maybe do some risk monitoring, and only then auditing”). Policymakers are visionaries only up to the end of their terms (hmm, no surprise). Usually, broad strokes in policy require precedents of a similar size to be feasible within a policymaker’s agenda and the Overton window.
Every group that comes to a policy meeting thinks its agenda item is the most pressing, almost by definition: most of the time, getting a meeting with a policymaker at all means you are proactive and have done your homework.
I want to see more EAs respond to Public Voice Opportunities, for instance, something I rarely hear about on the EA Forum or via EA channels and materials.
I agree. I suspect that responses to calls for evidence over the years played a big role in introducing and normalising x-risk research ideas in the UK context, before the big moves we’ve seen in the last year.
E.g. a few representative examples:
(2016) https://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/science-and-technology-committee/robotics-and-artificial-intelligence/written/32690.pdf
(2017) https://data.parliament.uk/writtenevidence/committeeevidence.svc/evidencedocument/artificial-intelligence-committee/artificial-intelligence/written/69587.html
(2022) https://www.longtermresilience.org/post/future-of-compute-review-submission-of-evidence
And many more.
This is a really good point, and perhaps the number one mistake I see in this area. People also forget that policy changes have colossal impacts on very complex human systems: the bigger the change, the bigger the impact. A small step is a lot easier for end users to stomach than a large one.
I often advise people to think about the difference in cost and effort between “I have to re-wallpaper one wall” and “I need to tear my house down to the foundations and rebuild it”.
That said, I think a lot of this is because actual policy work is really hard to break into and gain experience in. There needs to be more training and support available, particularly for early-career researchers.