You’re discussing catastrophes that are big enough to set the world back by at least 100 years. But I’m wondering if a smaller threshold might be appropriate. Setting the world back by even 10 years could be enough to mean re-running a lot of the time of perils; and we might think that catastrophes of that magnitude are more likely. (This is my current view.)
With the smaller setbacks you probably have to get more granular in terms of asking “in precisely which ways is this setting us back?”, rather than just analysing it in the abstract. But that can just be faced.
Yes, I think the ‘100 years’ criterion isn’t quite what we want. E.g. if there is a catastrophic setback more than 100 years after we build an aligned ASI, then we don’t need to rerun the alignment problem. (In practice, 100 years should perhaps be ample time to build good global governance and reduce catastrophic setback risk to near zero, but conceptually we want to be clear about this.)
And I agree with Owen that shorter setbacks also seem important. In fact, in a simple binary model we could just define a catastrophic setback as one that takes you from a society that has built aligned ASI to one where all aligned ASIs are destroyed. I.e. the key thing is not how many years back you go, but whether you regress back beneath the critical ‘crunch time’ period.