This post was an excellent read, and I think you should publish it on LessWrong too.
I have the intuition that, at the moment, answering “how fast will AI takeoff be?” has the most strategic leverage, and that this question, together with timelines, most influences the probability that we go extinct due to AI (although it seems to me that we’re less uncertain about timelines than about takeoff speeds). I also think that a big part of why the other AI forecasting questions matter is that they inform takeoff speeds (and timelines). Do you agree with these intuitions?
Relatedly: if you had to rank AI-forecasting questions by their strategic importance and influence on P(doom), what would that ranking look like?