Great post. I want to focus on one particular claim because I think it’s important.
> Though when it comes to the communities of professionals already working on helping the AI transition go well, I think they are already hedging strongly against early transformative AI.
There is (relatively) too much work on trying to solve alignment, and not enough work on trying to lengthen timelines. The shorter timelines are, the less likely it is that we’ll be able to solve alignment, which makes alignment work look less compelling, although that’s not the only factor in the tradeoff (there are also tractability concerns about lengthening timelines).
My argument in “We won’t solve non-alignment problems by doing research” could be rephrased as: a lot of people are doing research on non-alignment problems on the assumption that timelines are long, but this strategy won’t work if timelines are short. I’ve seen approximately zero research reports on non-alignment problems ask how they’ll get solved under short timelines (AFAICT the best answer is to pause AI*, but at the moment I’m actively thinking about whether there are other answers). So here too, I don’t think people are doing a good job of hedging against short timelines.
So I agree with the OP’s thesis (that we should have “broad timelines”), but I disagree that people are actually doing a good job of hedging against early transformative AI.
*That is, it’s the best answer conditional on short timelines. I lean toward it being the best answer unconditionally, but it’s less clear—some research agendas may have sufficiently high EV under long(er) timelines that they’re the best-EV choice overall.