Thanks for the reply Matthew, I’m going to try to tease out some slight nuances here:
Your prior that governments will gradually 'wake up' to the increasing power of AI and get involved as the risk grows is, I think, more realistic than others I've come across.
I do think that a lot of projections of AI risk/doom, either explicitly or implicitly, have no way of incorporating a negative societal feedback loop that slows or pauses AI progress, for example. My original point 1 was to say that this prior may be linked to the strong Libertarian beliefs of many working on AI risk in or close to the Bay Area.
This may be an argument that's downstream of views on alignment difficulty and timelines. If you have short timelines and high alignment difficulty, bad regulation doesn't help the impending disaster. If you have medium/longer timelines but think alignment will be easy-ish (which is my model of what the Eleuther team believes, for example), then backfiring regulations like the DMCA become the bigger potential risk, rather than the alignment problem itself.
I'm well aware of Sir Humphrey's wisdom. I think we may have different priors on that, but I don't think it's really much of a crux here; I definitely agree we want regulations to be targeted and helpful.
I think my issue with this is probably downstream of my scepticism about short timelines and fast takeoff. I think there will be 'warning shots', and I think that societies and governments will take notice; they already are! To hold that combination of beliefs you have to think that either governments won't/can't act even when things start getting 'crazy', or that you get a sudden, deceptive sharp left turn.
So basically I agree that AI x-risk modelling should be re-evaluated in a world where AI Safety is no longer a particularly neglected area. At the very least, models that have no socio-political levers (off the top of my head Open Phil’s ‘Bio Anchors’ and ‘A Compute Centric Framework’ come to mind) should have that qualification up-front and in glowing neon letters.
tl;dr: Writing that all out, I don't think we disagree much at all, and I think your prior that governments would get involved is accurate. The 'vibe' I got from a lot of early AI Safety work that's MIRI-adjacent/Bay Area focused/Libertarian-ish was different, though. It seemed to assume this technology would develop, have great consequences, and provoke no socio-political reaction at all, which seems very false to me.
(side note: I really appreciate your AI takes btw. I find them very useful and informative. pls keep sharing)