(other than ideally stopping its occasional minor contributions to it via the right wing of rationalism and being clear-eyed about what the 2nd Trump admin might mean for things like “we want democratic countries to beat China”)
Actually, I think this is the one thing that EAs could realistically do, given their comparative advantage in who they are socially and ideologically adjacent to, if they are afraid of AGI being reached under an illiberal, anti-secular, and anti-cosmopolitan administration: to be blunt, press Karnofsky and Amodei to shut up about “entente” and “realism” and cut ties with Thiel-aligned national security state companies like Palantir.
I don’t think cutting ties with Palantir would move the date of AGI much, and I doubt it is the key point of leverage for whether the US becomes a soft dictatorship under Trump. As for the other stuff, people could certainly try, but I think it is unlikely to succeed, since it basically requires getting the people who run Anthropic to act against the very clear interests of Anthropic and of the people who run it. (And I doubt that Amodei, in particular, sees himself as accountable to the EA community in any way whatsoever.)
For what it’s worth, I also think this is complicated territory: there is genuinely a risk of very bad outcomes from China winning an AI race too, and the US might recover relatively quickly from its current disaster. I expect the US to remain somewhat less dictatorial than China even in the worst outcomes, though it is also true that even the democratic US has generally been far keener to intervene, often but not always to bad effect, in other countries’ business.
Conditional on AGI happening under this administration, how deeply AGI companies are embedded with the national security state is a crux for the future of the lightcone, and I don’t expect institutional inertia (the reason one would expect that “the US might recover relatively quickly from its current disaster” and “the US to remain somewhat less dictatorial than China even in the worst outcomes”) to hold if AGI dictatorship is a possibility for the powers that be to reach for.
“how deeply AGI companies are embedded with the national security state is a crux for the future of the lightcone”
What’s the line of thought here?
It intensifies the AI arms race, thus shortening AGI timelines, and, after AGI, it increases the chances that the singleton is either unaligned or technically aligned but in service of an AGI dictatorship or some other kind of dystopian outcome.