This neglects a considerable portion of my probability mass on "ASI is dangerous": it doesn't consider the possibility of an Oracle ASI, or of other bad outcomes that would be made worse by China's AI effort plausibly reaching ASI before ours.
For a further reason "But China!" does matter, consider the greatly reduced bargaining position under that scenario. I suspect (with admittedly no grasp of global-power bargaining dynamics) that building international agreements is much easier when the costs don't fall to the competitive disadvantage of the opposing side.
I'm not convinced that alignment isn't ~90% capabilities. That OpenAI and Anthropic are at least somewhat dedicated to explicitly pursuing alignment also shouldn't be taken for granted.