It’s not obvious to me what alignment optimism has to do with the pause debate.
Sorry, I thought it would be fairly obvious how it’s related. If you’re optimistic about alignment then the expected benefits you might hope to get out of a pause (whether or not you actually do get those benefits) are commensurately smaller, so the unintended consequences should have more relative weight in your EV calculation.
To be clear, I think slowing down AI in general, as opposed to the moratorium proposal in particular, is a more reasonable position that’s a bit harder to argue against. I do still think the overhang concerns apply in non-pause slowdowns but in a less acute manner.
Given alignment optimism, the benefits of pause are smaller—but the unintended consequences for alignment are smaller too. I guess alignment optimism only suggests pause-is-bad if, e.g., your optimism is heavily conditional on smooth capability progress...
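The EV framing in this exchange can be made concrete with a toy model. This is a minimal sketch with purely hypothetical numbers, just to show how optimism shifts the relative weights of the two terms, and how the answer depends on whether the unintended-consequence cost shrinks along with the benefit:

```python
def ev_of_pause(p_doom: float, benefit_if_pause_helps: float, overhang_cost: float) -> float:
    """Toy EV of pausing: expected safety benefit minus unintended-consequence cost.

    All inputs are hypothetical illustrative quantities, not real estimates.
    """
    expected_benefit = p_doom * benefit_if_pause_helps
    return expected_benefit - overhang_cost

# Pessimist: high p(doom), so the expected-benefit term dominates a given overhang cost.
pessimist_ev = ev_of_pause(p_doom=0.5, benefit_if_pause_helps=1.0, overhang_cost=0.1)

# Optimist with a FIXED overhang cost (e.g. optimism conditional on smooth progress):
# the benefit term shrinks tenfold but the cost does not, so the sign can flip.
optimist_fixed_cost_ev = ev_of_pause(p_doom=0.05, benefit_if_pause_helps=1.0, overhang_cost=0.1)

# Optimist whose overhang worries ALSO shrink proportionally with p(doom):
# both terms scale together, so the sign of the EV is unchanged.
optimist_scaled_cost_ev = ev_of_pause(p_doom=0.05, benefit_if_pause_helps=1.0, overhang_cost=0.01)

print(pessimist_ev > 0, optimist_fixed_cost_ev > 0, optimist_scaled_cost_ev > 0)
```

The point of the sketch is only the sign structure: optimism argues against a pause exactly when it shrinks the expected benefit without proportionally shrinking the expected unintended consequences.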
Could you say more about what you see as the practical distinction between a “slow down AI in general” proposal vs. a “pause” proposal?