One quick response, since it was easy (might respond more later):
> Overall, then, I do think it’s fair to consider a fast takeoff to be a core premise of the classic arguments. It wasn’t incidental or a secondary consideration.
I do think takeoff speeds between 1 week and 10 years are a core premise of the classic arguments. I do think the situation looks very different if we spend 5+ years in the human domain, but I don’t think many people believe that will happen.
I don’t think the distinction between 1 week and 1 year is that relevant to the core argument for AI Risk, since either case seems like more than enough cause for likely doom, and that premise seems very likely to be true to me. I do think Eliezer believes in something more on the order of 1 week than 1 year, but I don’t think the basic argument structure is very different in either case (though I agree that the 1-year case opens us up to some more potential mitigating strategies).