I’ve seen this a few times but I’m skeptical about taking this rhetorical approach.
I think a large fraction of AI risk comes from worlds where the ex ante probability of catastrophe is more like 50% than 100%. And in many of those worlds, the counterfactual impact of an individual developer moving faster is several times smaller (since someone else is likely to kill us all in the bad 50% of worlds anyway). On top of that, reasonable people might disagree about probabilities and think 10% in a case where I think 50%.
So putting that together they may conclude that racing faster increases the risk of doom by 0.03% for every 1% that it increases your share of the future (whether measured in profit, or reduced opportunity for misuse of frontier systems). And that’s just not going to be compelling.
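To make the arithmetic behind that 0.03% figure explicit, here is a minimal sketch. The specific numbers (a 10% doom estimate and a ~3x counterfactual discount) are illustrative assumptions consistent with the paragraph above, not precise claims:

```python
# Back-of-envelope arithmetic: how much does racing 1% harder raise doom risk,
# for a developer who estimates catastrophe risk at 10% and discounts their
# counterfactual impact ~3x (someone else would likely race anyway)?
p_doom = 0.10                    # assumed ex ante probability of catastrophe
counterfactual_discount = 1 / 3  # assumed shrinkage of individual impact
share_gain = 0.01                # racing harder buys 1% more of the future

marginal_doom = p_doom * counterfactual_discount * share_gain
print(f"{marginal_doom:.4%}")    # roughly the 0.03% figure
```

Under those assumptions the marginal increase in doom risk per 1% of additional share comes out to about 0.033%, which is why the trade looks so cheap to someone with that worldview.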
I think you will have an extremely hard time convincing people that the race is obviously suicidal. I know some folks are confident about this, but I don’t really find that position credible today and I’ve spent a very long time thinking about the problem and engaging with pessimistic people. Maybe it will become obvious tomorrow, and maybe it’s OK for some people to be betting their chips on that, but I don’t want to get lumped in with them (because I think their political position is going to become increasingly untenable over time).
On the flip side, I don’t think it’s controversial to say: “If the probability of AI takeover is 10%, AI developers need to stop racing.”
It’s a tiny bit unclear what that means, so to be more precise: “If people didn’t stop AI development until things looked significantly more dangerous than they do today, then the probability of takeover would be more than 10%.” I don’t think that’s true today, but it will likely become true.
I’m pushing back against the framing: “this is a suicide race with no benefit from winning.”
If there is a 10% chance of AI takeover, then there is a real and potentially huge benefit from winning the race. But we still should not be OK with someone unilaterally taking that risk.
I agree that AI developers should have to prove that the systems they build are reasonably safe. I don’t think proving 100% safety is a reasonable ask, but 90% or 99% seem pretty safe (i.e. robustly reasonable asks).
(Edited to complete cutoff sentence and clarify “safe.”)