"All leading labs coordinate to slow during crunch time: great. This delays dangerous AI and lengthens crunch time. Ideally the leading labs slow until risk of inaction is as great as risk of action on the margin, then deploy critical systems.
All leading labs coordinate to slow now: bad. This delays dangerous AI. But it burns leading labs' lead time, making them less able to slow progress later (because further slowing would cause them to fall behind, such that other labs would drive AI progress and the slowed labs' safety practices would be irrelevant)."
I would be more inclined to agree with this if we had a set of criteria indicating that we are in "crunch time": criteria we are very likely to meet before dangerous systems arrive, and that we have not met yet. Has anyone generated such a set? Without one, how do we know when "crunch time" is, or, for that matter, whether we're already in it?