Cool arguments on the impact of policy work for AI safety. I find myself agreeing with Richard Ngo’s support of AI policy, given the scale of government influence and the uncertain nature of AI risk. Here are a few quotes from the piece.
How AI could be influenced by policy experts:
in a few decades (assuming long timelines and slow takeoff) AIs that are less generally intelligent than humans will be causing political and economic shockwaves, whether that’s via mass unemployment, enabling large-scale security breaches, designing more destructive weapons, psychological manipulation, or something even less predictable. At this point, governments will panic and AI policy advisors will have real influence. If competent and aligned people were the obvious choice for those positions, that’d be fantastic. If those people had spent several decades researching what interventions would be most valuable, that’d be even better.
This perspective is inspired by Milton Friedman, who argued that the way to create large-scale change is by nurturing ideas which will be seized upon in a crisis.
Why EA specifically could succeed:
… From the outside view, our chances are pretty good. We’re a movement comprising many very competent, clever and committed people. We’ve got the sort of backing that makes policymakers take people seriously: we’re affiliated with leading universities, tech companies, and public figures. It’s likely that a number of EAs at the best universities already have friends who will end up in top government positions. We have enough money to do extensive lobbying, if that’s judged a good idea.
These opposing opinions are driven by different views on timelines, takeoff speeds, and sources of risk:
More generally, Ben and I disagree on where the bottleneck to AI safety is. I think that finding a technical solution is probable, but that most solutions would still require careful oversight, which may or may not happen (maybe 50-50). Ben thinks that finding a technical solution is improbable, but that if it’s found it’ll probably be implemented well. I also have more credence on long timelines and slow takeoffs than he does. I think that these disagreements affect our views on the importance of influencing governments in particular.
Thanks for sharing LW4EA! Particularly the AI safety stuff. It’s an act of community service.