Predictions about when we will achieve human-level AI have been wildly inaccurate in the past[1]. I don’t think the predictions of current AI researchers are a particularly useful data point.
I agree that they’re not reliable, but there isn’t much better to go on. We’re basically citing the same body of surveys. On a compromise reading, I suppose they suggest that human-level AI will likely arrive anywhere between this decade and a few centuries from now, with most of the probability mass in this century, which sounds right to me.
a) AI is already superhuman in a number of increasingly complex domains, from backgammon and chess through Jeopardy, driving, and image recognition. Computing power is still increasing, though less quickly than before and in a more parallel direction, and algorithms are also improving. There’s also a parallel path to superintelligence through whole brain emulation. So AI becoming superhuman in some domains is already science fact.
b) Once AI becomes more intelligent than any one human in certain important domains, such as i) information security, ii) trading, iii) manipulating military hardware, or iv) persuasion, it will have significant power. See Scott’s colourful descriptions. So p < 10^-2 cannot be right here.
c) This is harder. The best achievements of the AI safety community so far are making some interesting theoretical discoveries regarding cooperation and decision theory (MIRI), attracting millions in donations from eccentric billionaires (FLI), convening dozens of supportive AI experts (FLI), writing a popular book (Bostrom), and meeting with high levels of government in the UK (FHI) and Germany (CSER). This is great, though none of it yet shows the path to friendly AI. There are suggestions for how to make friendly AI, and even if there weren’t, there’d be a nontrivial chance that this emerging x-risk-aware apparatus would find them, given that it is young and quickly gaining momentum. MIRI’s approach would require a lot more technical exploration, while a brain emulation approach would require far more resources, as well as progress in hardware and brain-scanning technology. I think this is the substance that has to be engaged with to push this discussion forward, and potentially also to improve AI safety efforts.