A few points.
First, market predictions for the intermediate term (roughly 3-5 years) are mostly garbage, and the market gets much less predictive beyond that. Its ability to predict is constrained by the investment time-frame of most investors, by fundamental limits on how noisy prices are, and by other factors. Even granting all of that, the ridiculous valuations of tech firms like Uber and Twitter, not to mention the crazy P/E ratios for Google, Amazon, and the rest, seem to imply that the market thinks something important will happen there.
Second, I don’t think you’re defining the timelines question clearly. (Neither is anyone else.) One version is: “conditional on a moderately fast takeoff, by when would we need to have solved the alignment problem in order to prevent runaway value misalignment?” Another is: “regardless of takeoff speed, when will a single AI surpass the best human performance across every domain?” A third is: “when will AI systems collectively be able to do more than any single person?” A fourth is: “when will a specific AI be able to do more than one average person?” And lastly: “when will people stop finding strange edge cases to argue that AI isn’t yet more capable than humans, despite it outperforming them on nearly every task?”
I could see good arguments for a 10% probability of 10 years on questions one and three. I think most AI experts have something akin to questions two and four in mind when they say 50 years. And I see good arguments that question five has epsilon probability of resolving within the next century.