Thank you!
1. It’s a great question. My biggest uncertainty here is whether tech’s alignment on this rhetoric is purely reactive to Trump’s foreign policy goals, or whether tech’s rhetoric is actively shaping the frame (e.g. through mouthpieces like Thiel/Musk). I don’t have a good sense of which is true, but my gut take is that it’s more the former. In which case, while I think the EA community could work towards challenging this narrative within tech, I wouldn’t expect that to overpower the strong incentives to align with the administration as politics increasingly shifts from programmatic to personalist. At the very least, though, I think more should be done on the margins to mitigate the potential harms of AGI realist rhetoric: hedging strategies like maintaining attempts at collaboration (even if success is unlikely); more work to reduce the odds that conflict goes nuclear; more work on mitigating the harms of war if it does occur; and certainly a lot more work to mitigate autonomous AI risk. But yeah, it seems totally plausible the ship has sailed here.
2. “AGI Realism” is the title Leopold Aschenbrenner gives to his views in the parting thoughts of his blog series, though I entirely agree with you here. If Aschenbrenner et al. truly held realist assumptions, they’d be sceptical of the idea that unipolarity through AI dominance could lead to the stable imposition of a safe AI order over China. If anything, realists who genuinely buy that superintelligence could empower a hegemon like nothing ever seen before should also expect unprecedented levels of counterbalancing (i.e. from far more states than China alone). If I’m a realist, I’m really looking for interstate checks and balances to constrain conflict. The danger with “AGI Realist” rhetoric is precisely that it’s actually built on the kind of flawed liberal institutionalism that underpins the failures of liberal interventionism.