I think you probably need multiple kinds of skill and some level of cognitive-style diversity within a political campaign. You definitely need a lot of people with people skills, and I am sure that the first gut instincts of people with good social skills about what messaging will work are better than those of people with worse social skills. Those socially skilled people should undoubtedly be doing detailed messaging and networking for the campaign. But you also need people who are prepared to tell campaigns things they don't want to hear, even when there is severe social pressure not to, including things about what data (rather than gut instinct) actually shows about public opinion and messaging. (Yes, it is possible to overrate such data, which will no doubt be misleading in various ways, but it is also possible to underrate it.) My guess is that "prepared to tell people really hard truths" is at least somewhat anticorrelated with people skills and somewhat correlated with a STEM background. (There is of course a trade-off: the people most prepared to tell hard truths are probably less good at selling those truths than more socially agreeable people.) For what it's worth, Matt Yglesias seems pretty similar to the median EA in personality, and I recall reading that Biden advisors did read his blog. Ezra Klein also seems like a genuinely politically influential figure who is fairly EA-ish. There is more than one way to contribute to a political movement.
I personally don't think EA should be doing much to combat authoritarianism (other than ideally stopping its occasional minor contributions to it via the right wing of rationalism and being clear-eyed about what the 2nd Trump admin might mean for things like "we want democratic countries to beat China"), because I don't think it is particularly tractable or neglected. But I don't think it is a skill issue, unless you're talking about projects run entirely by EAs (and even then, you don't necessarily have to put the median EA in charge; presumably some EAs have above-average social skills).
(other than ideally stopping its occasional minor contributions to it via the right wing of rationalism and being clear-eyed about what the 2nd Trump admin might mean for things like "we want democratic countries to beat China")
Actually I think this is the one thing that EAs could realistically do as their comparative advantage, considering who they are socially and ideologically adjacent to, if they are afraid of AGI being reached under an illiberal, anti-secular, and anti-cosmopolitan administration: to be blunt, press Karnofsky and Amodei to shut up about "entente" and "realism" and cut ties with Thiel-aligned national security state companies like Palantir.
I don't think cutting ties with Palantir would move the date of AGI much, and I doubt it is the key point of leverage for whether the US becomes a soft dictatorship under Trump. As for the other stuff, people could certainly try, but I think it is unlikely to succeed, since it basically requires getting the people who run Anthropic to act against the very clear interests of Anthropic and of the people who run it (and I doubt Amodei, in particular, sees himself as accountable to the EA community in any way whatsoever).
For what it's worth, I also think this is complicated territory, and that there is genuinely a risk of very bad outcomes from China winning an AI race too, and that the US might recover relatively quickly from its current disaster. I expect the US to remain somewhat less dictatorial than China even in the worst outcomes, though it is also true that even the democratic US has generally been a lot more keen to intervene, often but not always to bad effect, in other countries' business.
Conditional on AGI happening under this administration, how deeply AGI companies are embedded with the national security state is a crux for the future of the lightcone, and I don't expect institutional inertia (the reason one would expect "the US might recover relatively quickly from its current disaster" and "the US to remain somewhat less dictatorial than China even in the worst outcomes") to hold if AGI dictatorship is a possibility for the powers that be to reach for.
"how deeply AGI companies are embedded with the national security state is a crux for the future of the lightcone"
What's the line of thought here?
It intensifies the AI arms race, thus shortening AGI timelines, and, after AGI, it increases the chances of the resulting singleton being either unaligned, or technically aligned but amounting to an AGI dictatorship or some other kind of dystopian outcome.