The Bay Area rationalist scene is a hive of techno-optimistic libertarians.[1] These people have a negative view of state/government effectiveness at a philosophical and ideological level, so their default perspective is that the government doesn’t know what it’s doing and won’t do anything. [edit: Re-reading this paragraph it comes off as perhaps mean as well as harsh, which I apologise for]
Yeah, I kind of have to agree with this: I think the Bay Area rationalist scene underrates government competence, though even I was surprised at how little politicking happened and how little it ended up being politicized.
Similarly, ‘Politics is the Mind-Killer’ might be the rationalist idea that has aged worst—especially for its influence on EA. EA is a political project—for example, the conclusions of Famine, Affluence, and Morality are fundamentally political.
I think that AI was a surprisingly good exception to the rule that politicizing an issue makes it harder to make progress on, and I think this is mostly due to the popularity of AI regulation. I will say, though, that there’s clear evidence that, at least for now, AI safety is in a privileged position and the heuristic no longer applies.
Overly-short timelines and FOOM. If you think takeoff is going to be so fast that we get no fire alarms, then what governments do doesn’t matter. I think that’s quite a load-bearing assumption that isn’t holding up too well.
Not just that: I also think excessive pessimism around AI safety contributed, since a lot of people’s mental health was, at best, not great, leading them to catastrophize the situation and become ineffective.
This is a real issue in the climate change movement as well, and I expect AI safety’s embrace of pessimism was not good for thinking clearly.
Thinking of AI x-risk as only a technical problem to solve, and undervaluing AI governance. Some of that might be comparative advantage (“I’ll do the coding and leave political co-ordination to those better suited”). But it’d be interesting to see x-risk estimates that take the effectiveness of governance, and the attention politicians and the public pay to the issue, as input parameters.
I agree with this at least for the general problem of AI governance, though I disagree when it comes to AI alignment specifically. Still, rationalists do underestimate the governance work required to achieve a flourishing future; a toy sketch of what such parameterized estimates could look like is below.
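To make the quoted suggestion concrete, here is a minimal sketch of an x-risk estimate with governance effectiveness and public attention as explicit inputs. This is purely my own illustration, not anything from the original post: the function name, the multiplicative structure, and every probability are made-up placeholders.

```python
# Toy, purely illustrative model: governance effectiveness and public
# attention entering an x-risk estimate as explicit input parameters.
# All names and numbers here are hypothetical placeholders.

def p_doom(p_misaligned_by_default: float,
           p_technical_fix: float,
           governance_effectiveness: float,
           public_attention: float) -> float:
    """Crude multiplicative estimate of AI catastrophe risk.

    governance_effectiveness: chance governance stops a dangerous
        deployment, given that attention is on the problem.
    public_attention: chance politicians/the public are paying enough
        attention for governance to act at all.
    """
    # Catastrophe requires misalignment by default, no technical fix,
    # and governance failing to intervene in time.
    p_governance_fails = 1.0 - governance_effectiveness * public_attention
    return p_misaligned_by_default * (1.0 - p_technical_fix) * p_governance_fails

# Sweeping governance effectiveness shows how much the headline number moves:
for g in (0.0, 0.3, 0.6, 0.9):
    print(f"governance effectiveness {g:.1f} -> p(doom) {p_doom(0.6, 0.4, g, 0.7):.3f}")
```

Even this crude version makes the point: with the made-up numbers above, moving governance effectiveness from 0 to 0.9 cuts the estimate by more than half (0.360 down to 0.133), which is exactly the kind of sensitivity a purely technical framing leaves invisible.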