I directionally very strongly agree with this, Matthew. Some reasons why I think this oversight occurred in the AI x-risk community:
The Bay Area rationalist scene is a hive of techno-optimistic libertarians.[1] These people have a negative view of state/government effectiveness at a philosophical and ideological level, so their default perspective is that the government doesn't know what it's doing and won't do anything. [edit: Re-reading this paragraph it comes off as perhaps mean as well as harsh, which I apologise for]
Similarly, "Politics is the Mind-Killer" might be the rationalist idea that has aged worst, especially in its influence on EA. EA is a political project: for example, the conclusions of Famine, Affluence, and Morality are fundamentally political.
Overly short timelines and FOOM. If you think takeoff is going to be so fast that we get no fire alarms, then what governments do doesn't matter. I think that's quite a load-bearing assumption that isn't holding up too well.
Thinking of AI x-risk as only a technical problem to solve, and undervaluing AI Governance. Some of that might be comparative advantage ("I'll do the coding and leave political co-ordination to those better suited"). But it'd be interesting to see x-risk estimates include the effectiveness of governance and the attention of politicians/the public as input parameters (a toy sketch of what I mean follows below).
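To make that last point concrete, here is a minimal toy sketch of what "governance as an input parameter" could look like. This is purely illustrative: the function, parameter names, and numbers are my own inventions and don't correspond to any published x-risk model; the point is just that governance effectiveness and public attention appear explicitly rather than being omitted.

```python
# Toy illustration only: the functional form and numbers are made up.
# The point is that governance effectiveness and public/political attention
# enter as explicit input parameters instead of being ignored.

def p_doom(p_misalignment: float,
           governance_effectiveness: float,
           public_attention: float) -> float:
    """Crude toy estimate of existential risk from AI.

    p_misalignment: chance a transformative system is badly misaligned
        absent any societal response.
    governance_effectiveness: 0 = regulation accomplishes nothing,
        1 = regulation reliably catches and stops dangerous deployments.
    public_attention: 0 = nobody is paying attention, 1 = sustained
        political salience (which scales how much of that governance
        capacity actually gets applied).
    """
    # Only the attended-to fraction of governance capacity is applied.
    effective_oversight = governance_effectiveness * public_attention
    return p_misalignment * (1 - effective_oversight)

# With no socio-political lever at all, the estimate reduces to the
# "pure technical problem" number:
print(p_doom(0.3, governance_effectiveness=0.0, public_attention=0.0))  # 0.3
# With moderately effective, moderately salient governance it drops:
print(p_doom(0.3, governance_effectiveness=0.6, public_attention=0.5))  # 0.21
```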
I feel like this year has shown pretty credible evidence that these assumptions are flawed, and in any case it's a semi-mainstream political issue now and the genie can't be put back in the bottle. The AI x-risk community will have to meet reality where it is.
Similarly, "Politics is the Mind-Killer" might be the rationalist idea that has aged worst, especially in its influence on EA.
What influence are you thinking about? The position argued in the essay seems pretty measured.
Politics is an important domain to which we should individually apply our rationality, but it's a terrible domain in which to learn rationality, or discuss rationality, unless all the discussants are already rational. [...]
I'm not saying that I think we should be apolitical, or even that we should adopt Wikipedia's ideal of the Neutral Point of View. But try to resist getting in those good, solid digs if you can possibly avoid it. If your topic legitimately relates to attempts to ban evolution in school curricula, then go ahead and talk about it, but don't blame it explicitly on the whole Republican Party; some of your readers may be Republicans, and they may feel that the problem is a few rogues, not the entire party.
I'm relying on my social experience and intuition here, so I don't expect I've got it 100% right, and others may indeed have different interpretations of the community's history with engaging with politics.
But people over-extrapolating from Eliezer's initial post (many such cases) and treating it as a norm to ignore politics full-stop seems to have been an established concern many years ago (related discussion here). I think that there's probably an interaction effect with the "latent libertarianism" of early LessWrong/Rationalist spaces as well.
The Bay Area rationalist scene is a hive of techno-optimistic libertarians.[1] These people have a negative view of state/government effectiveness at a philosophical and ideological level, so their default perspective is that the government doesn't know what it's doing and won't do anything.
The attitude of expecting very few regulations made little sense to me, because, as someone who broadly shares these background biases, my prior is that governments will by default generally regulate a scary new technology that comes out. I just don't expect that regulations will always be thoughtful, or that they will weigh the risks and rewards of new technologies appropriately.
There's an old adage that describes how government sometimes operates in response to a crisis: "We must do something; this is something; therefore, we must do this." Eliezer Yudkowsky himself once said:
So there really is a reason to be allergic to people who go around saying, "Ah, but technology has risks as well as benefits". There's a historical record showing over-conservativeness, the many silent deaths of regulation being outweighed by a few visible deaths of nonregulation. If you're really playing the middle, why not say, "Ah, but technology has benefits as well as risks"?
Thanks for the reply, Matthew. I'm going to try to tease out some slight nuances here:
Your prior that governments will gradually "wake up" and get involved in response to the increasing power and potential of AI is, I think, more realistic than others I've come across.
I do think that a lot of projections of AI risk/doom, either explicitly or implicitly, have no way of incorporating a negative societal feedback loop that, for example, slows or pauses AI progress. My original point 1 was to say that I think this prior may be linked to the strong Libertarian beliefs of many working on AI risk in or close to the Bay Area.
This may be an argument that's downstream of views on alignment difficulty and timelines. If you have short timelines and high difficulty, bad regulation doesn't help the impending disaster. If you have medium/longer timelines but think alignment will be easy-ish (which is my model of what the Eleuther team believes, for example), then backfiring regulations like the DMCA actually become the bigger potential risk, rather than the alignment problem itself.
I'm well aware of Sir Humphrey's wisdom. I think we may have different priors on that, but I don't think that's really much of a crux here; I definitely agree we want regulations to be targeted and helpful.
I think my issue with this is probably downstream of my scepticism about short timelines and fast takeoff. I think there will be "warning shots", and I think that societies and governments will take notice; they already are! To hold that combination of beliefs you have to think that either governments won't/can't act even when things start getting "crazy", or that you get a sudden deceptive sharp left turn.
So basically I agree that AI x-risk modelling should be re-evaluated in a world where AI Safety is no longer a particularly neglected area. At the very least, models that have no socio-political levers (off the top of my head, Open Phil's "Bio Anchors" and "A Compute-Centric Framework" come to mind) should have that qualification up-front and in glowing neon letters.
tl;dr: Writing that all out, I don't think we disagree much at all; I think your prior that government would get involved is accurate. The "vibe" I got from a lot of early AI Safety work that's MIRI-adjacent/Bay Area focused/Libertarian-ish was different, though. It seemed to assume this technology would develop, have great consequences, and yet there would be no socio-political reaction at all, which seems very false to me.
(side note: I really appreciate your AI takes btw. I find them very useful and informative. Please keep sharing.)
The Bay Area rationalist scene is a hive of techno-optimistic libertarians.[1] These people have a negative view of state/government effectiveness at a philosophical and ideological level, so their default perspective is that the government doesn't know what it's doing and won't do anything. [edit: Re-reading this paragraph it comes off as perhaps mean as well as harsh, which I apologise for]
Yeah, I kind of have to agree with this. I think the Bay Area rationalist scene underrates government competence, though even I was surprised at how little politicking happened, and how little it ended up being politicized.
Similarly, "Politics is the Mind-Killer" might be the rationalist idea that has aged worst, especially in its influence on EA. EA is a political project: for example, the conclusions of Famine, Affluence, and Morality are fundamentally political.
I think that AI was a surprisingly good exception to the rule that politicizing something makes it harder to get, and I think this is mostly due to the popularity of AI regulations. I will say, though, that there's clear evidence that, at least for now, AI safety is in a privileged position and the heuristic no longer applies.
Overly short timelines and FOOM. If you think takeoff is going to be so fast that we get no fire alarms, then what governments do doesn't matter. I think that's quite a load-bearing assumption that isn't holding up too well.
Not just that, though: I also think being overly pessimistic about AI safety contributed, as a lot of people's mental health was, at best, not great, leading them to catastrophize the situation and become ineffective.
This is a real issue in the climate change movement, and I expect that AI safety's embrace of pessimism was not good at all for thinking clearly.
Thinking of AI x-risk as only a technical problem to solve, and undervaluing AI Governance. Some of that might be comparative advantage ("I'll do the coding and leave political co-ordination to those better suited"). But it'd be interesting to see x-risk estimates include the effectiveness of governance and the attention of politicians/the public as input parameters.
I agree with this for the general problem of AI governance, though I disagree when it comes to AI alignment specifically; I do agree that rationalists underestimate the governance work required to achieve a flourishing future.
[1] Yes, an overly broad stereotype, but one that I hope most people can grok and go "yeah, that's kinda on point".