The Bay Area rationalist scene is a hive of techno-optimistic libertarians.[1] These people have a philosophically and ideologically negative view of state/government effectiveness, so their default perspective is that the government doesn’t know what it’s doing and won’t do anything.
The attitude of expecting very few regulations made little sense to me, because, as someone who broadly shares these background biases, my prior is that governments will, by default, regulate any scary new technology that comes out. I just don’t expect those regulations to always be thoughtful, or to weigh the risks and rewards of new technologies appropriately.
There’s an old adage that describes how government sometimes operates in response to a crisis: “We must do something; this is something; therefore, we must do this.” Eliezer Yudkowsky himself once said,
So there really is a reason to be allergic to people who go around saying, “Ah, but technology has risks as well as benefits”. There’s a historical record showing over-conservativeness, the many silent deaths of regulation being outweighed by a few visible deaths of nonregulation. If you’re really playing the middle, why not say, “Ah, but technology has benefits as well as risks”?
Thanks for the reply, Matthew. I’m going to try to tease out some slight nuances here:
Your prior that governments will gradually ‘wake up’ to the increasing power and risk of AI and get involved is, I think, more realistic than others I’ve come across.
I do think that a lot of projections of AI risk/doom have, either explicitly or implicitly, no way of incorporating a negative societal feedback loop that slows or pauses AI progress, for example. My original point 1 was that this prior may be linked to the strong Libertarian beliefs of many working on AI risk in or close to the Bay Area.
This may be an argument that’s downstream of views on alignment difficulty and timelines. If you have short timelines and high difficulty, bad regulation doesn’t help with the impending disaster. If you have medium/longer timelines but think alignment will be easy-ish (which is my model of what the Eleuther team believes, for example), then backfiring regulations like the DMCA actually become the potential risk, rather than the alignment problem itself.
I’m well aware of Sir Humphrey’s wisdom. I think we may have different priors on that, but I don’t think it’s really much of a crux here; I definitely agree we want regulations to be targeted and helpful.
I think my issue with this is probably downstream of my scepticism about short timelines and fast takeoff. I think there will be ‘warning shots’, and I think that societies and governments will take notice (they already are!). To hold that combination of beliefs, you have to think that either governments won’t or can’t act even when things start getting ‘crazy’, or that you get a sudden, deceptive sharp left turn.
So basically I agree that AI x-risk modelling should be re-evaluated in a world where AI Safety is no longer a particularly neglected area. At the very least, models that have no socio-political levers (off the top of my head, Open Phil’s ‘Bio Anchors’ and ‘A Compute-Centric Framework’) should have that qualification up-front and in glowing neon letters.
tl;dr: Writing that all out, I don’t think we disagree much at all; I think your prior that government would get involved is accurate. The ‘vibe’ I got from a lot of early AI Safety work that’s MIRI-adjacent/Bay Area-focused/Libertarian-ish was different, though. It seemed to assume this technology would develop, have great consequences, and yet provoke no socio-political reaction at all, which seems very false to me.
(side note: I really appreciate your AI takes, btw. I find them very useful and informative. pls keep sharing)