While I’m very uncertain, on balance I think it provides more serial time to do alignment research. As model capabilities improve and we get more legible evidence of AI risk, the will to pause should increase, and so the expected length of a pause should also increase. [Footnote explaining that the mechanism here is that the dangers of GPT-5 galvanize more support for a pause than the dangers of GPT-4 did.]
I appreciate flagging the uncertainty; this argument doesn’t seem right to me.
One factor affecting the length of a pause is the ratio of (opportunity cost from pausing) to (risk of catastrophe from not pausing) for marginal pause days; in other words, how the costs of each additional pause day compare to its benefits. I expect both the costs and the benefits of marginal pause days to go up in the future, because the risks of misalignment and misuse will be greater, and because AIs will be deployed in ways that add a lot of value to society (though it's unclear how large the marginal improvements will be; GPT-6 might add tons of value, but it's hard to tell how much more GPT-6.5 adds on top of that). I don't know how the ratio will change, which is probably what actually matters, but I wouldn't be surprised if the numerator (opportunity cost) shot up a ton.
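To make the ratio explicit (just my own notation, as a rough sketch, where t indexes time):

$$R(t) = \frac{C(t)}{B(t)}, \qquad C(t) = \text{perceived opportunity cost of one more pause day at time } t, \qquad B(t) = \text{expected catastrophe risk averted by that day.}$$

Both C(t) and B(t) plausibly grow over time; the open question is what happens to R(t), and my guess is that the growth in C(t) is the part people are underweighting.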
I think it's reasonable to expect that marginal improvements to AI systems in the future (e.g., scaling up 5x) could map onto automating an additional 1-7% of a nation's economy. Delaying that by a month would be a huge loss (or a benefit, depending on how the transition is going).
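To give a rough sense of the magnitudes (illustrative back-of-the-envelope numbers, using roughly $27 trillion as a ballpark for US GDP):

$$0.01 \times \$27\text{T} \approx \$270\text{B/yr} \approx \$22\text{B per month}, \qquad 0.07 \times \$27\text{T} \approx \$1.9\text{T/yr} \approx \$157\text{B per month.}$$

So even at the low end, each marginal month of pause would look like tens of billions of dollars of forgone output to decision makers, which is the kind of number that moves policy.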
What relevant decision makers think the costs and benefits are is what actually matters, not the true values. So even if I can look ahead right now and see that an immediate pause pushes back tremendous future economic growth, this may not become apparent to others until later.
To put what I'm getting at a different way: you're suggesting that we get a longer pause if we pause later than if we pause now. I think that "races" around AI are going to get ~monotonically worse and that the perceived cost of pausing will shoot up a lot. If we're early on the exponential of AI creating value in the world, it just seems much easier to pause for longer now than it will be later on. If this doesn't make sense I can try to explain more.
I agree it’s important to think about the perceived opportunity cost as well, and that’s a large part of why I’m uncertain. I probably should have said that in the post.
I’d still guess that overall the increased clarity on risks will be the bigger factor—it seems to me that risk aversion is a much larger driver of policy than worries about economic opportunity cost (see e.g. COVID lockdowns). I would be more worried about powerful AI systems being seen as integral to national security; my understanding is that national security concerns drive a lot of policy. (But this could potentially be overcome with international agreements.)