> if delaying AI by 5 years reduced existential risk by something like 10 percentage points—then I think the case for PauseAI would be much stronger
This is the crux. I think it would reduce existential risk by at least 10 percentage points (probably a lot more). And 5 years would just be a start—obviously any Pause should (and in practice will) only be lifted conditionally. I take it your AGI timelines are relatively short? And I don't think your reasons for expecting the default outcome from AGI to be good are sound (as you yourself allude to).