I have some sympathy for the second view, although I’m skeptical that sane advisors have significant real impact. I’d love a way to test it as decisively as we’ve tested the “government (in its current form) responds appropriately to warning shots” hypothesis.
On my own models, the “don’t worry, people will wake up as the cliff-edge comes more clearly into view” hypothesis has quite a lot of work to do. In particular, I don’t think it’s a very defensible position in isolation anymore. If you want to argue that we do need government support but (fortunately) governments will start behaving more reasonably after a warning shot, it seems to me like these days you have to pair that with an argument about why you expect the voices of reason to be so much louder and more effectual in 2041 than they were in 2021.
(Which is then subject to a bunch of the usual skepticism that applies to arguments of the form “surely my political party will become popular, claim power, and implement policies I like”.)
I think the second view is basically correct for policy in general, although I don’t have a strong view yet of how it applies to AI governance specifically. One thing that’s become clear to me as I’ve gotten more involved in institution-focused work and research is that large governments and other similarly impactful organizations are huge, sprawling social organisms, such that I think EAs simultaneously underestimate and overestimate the amount of influence that’s possible in those settings. The more optimistic among us tend to get too excited about isolated interventions (e.g., electing a committed EA to Congress, getting a voting reform passed in one jurisdiction) that, even if successful, would only address a small part of the problem. On the other hand, skeptics see the inherent complexity and failures of past efforts and conclude that policy/advocacy/improving institutions is fundamentally hopeless, neglecting to appreciate that critical decisions by governments are, at the end of the day, made by real people with friends and colleagues and reading habits just like anyone else.
Viewed through that lens, my opinion, which I think is shared by people with experience in this domain, is that the reason we have not seen more success influencing large-scale bureaucratic systems is that we have been under-resourcing it as a community. By “under-resourcing it” I don’t just mean in terms of money, because, as the Flynn campaign showed us, it’s easy to throw millions of dollars at a solution that hits rapidly diminishing returns. I mean that we have not been investing enough in strategic clarity, in a broad diversity of approaches that complement one another and collectively increase the chances of success, and in the patience to see those approaches through. In the policy world outside of EA, activists consider it normal to have a 6-10 year timeline to get significant legislation or reforms enacted, with the full expectation that there will be many failed efforts along the way. But reforms do happen: just look at the success of the YIMBY movement, which Matt Yglesias wrote about today, or the recent legislation allowing Medicare to negotiate prescription drug prices, which was in no small part the result of an 8-year, $100M campaign by Arnold Ventures.
Progress in the institutional sphere is not linear. It is indeed disappointing that the United States was not able to get a pandemic preparedness bill passed in the wake of COVID, or that the NIH is still funding ill-advised research. But we should not mistake this for the claim that we’ve been able to do “approximately nothing.” The overall trend over the past couple of years of EA and longtermist ideas being taken seriously at increasingly senior levels is strongly positive. Some of the diverse contributing factors include the launch of the Future Fund and the emergence of SBF as a key political donor; the publication of Will’s book and the resulting book tour; the networking among high-placed government officials by EA-focused or -influenced organizations such as Open Philanthropy, CSET, CLTR, the Simon Institute, Metaculus, fp21, Schmidt Futures, and more; and the natural progression of the initial cohort of EA leaders into the middle third of their careers. Just recently, I had one senior person tell me that Longview Philanthropy’s hiring of Carl Robichaud, a nuclear security grantmaker with 20 years of experience, is what got them to pay attention to EA for the first time. Each of these, by itself, is not enough to make a difference, and judged on its own terms will look like a failure. But all of them combined are what create the possibility that more can be accomplished the next time around, and in all the time in between.
“I think the second view is basically correct for policy in general, although I don’t have a strong view yet of how it applies to AI governance specifically. One thing that’s become clear to me as I’ve gotten more involved in institution-focused work and research is that large governments and other similarly impactful organizations are huge, sprawling social organisms, such that I think EAs simultaneously underestimate and overestimate the amount of influence that’s possible in those settings.”
This is a problem I’ve spoken about often, and one I’m currently writing an essay about for this forum, based on some research I co-authored.
People wildly underestimate how hard it is not only to pass governance measures, but to make sure they are abided by, and to balance the various stakeholders that are required. The AI Governance field has a massive sociological, socio-legal, and even ops-experience gap, which means a lot of very good policy and governance ideas die in their infancy because no one who wrote them has any idea how to enact them feasibly. My PhD is on the governance end of this and I do a bunch of work within government AI policy, and I see a lot of very good governance pitches go splat against the complex, ever-shifting beast that is the human organisation, purely because the researchers never thought to consult a sociologist or to incorporate any socio-legal research methods.