Fair! Sorry for the slow reply, I missed the comment notification earlier.
I could have been clearer in what I was trying to point at with my comment. I didn’t mean to fault you for not meeting an (unmade) challenge to list all your assumptions—I agree that would be unreasonable.
Instead, I meant to make an object-level point: the argument you mentioned seems to rely pretty heavily on a controversial discontinuity assumption, enough that the argument (combined only with other, largely uncontroversial assumptions) doesn't make it "quite easy to reach extremely dire forecasts about AGI." (Though, admittedly, I was thinking more about 90%+ forecasts.)
(That assumption, i.e. the main claims in the 3rd paragraph of your response, seems much more controversial and non-obvious among people in AI safety than the other assumptions you mention, as evidenced by researchers explicitly criticizing it and by researchers doing prosaic AI safety work.)