I would say the median AI expert in 2023 thought the median date of full automation was 2073, 48 years (= 2073 − 2025) away, with a 20 % chance of it coming before 2048 and a 20 % chance of it coming after 2103.
Or, as Stuart Russell says: if there were a fleet of alien spacecraft, visible in our telescopes and approaching closer each year, with an estimated arrival date of 2060, would you respond with an attitude of dismissal? Would you write “I am skeptical of alien risk” in your profile? I hope not! That would just be a crazy way to describe the situation vis-à-vis aliens!
Automation would increase economic output, and this has historically increased human welfare. I would say one needs strong evidence to overcome that prior. In contrast, it is hard to tell whether aliens would be friendly to humans, and there is no past evidence on which to base a strong pessimistic or optimistic prior.
I can imagine someone in 2000 making an argument: “Take some future date where we have AIs solving FrontierMath problems, getting superhuman scores on every professional-level test in every field, autonomously doing most SWE-bench problems, etc. Then travel back in time 10 years. Surely there would already be AI doing much much more basic things like solving Winograd schemas, passing 8th-grade science tests, etc., at least in the hands of enthusiastic experts who are eager to work with bleeding-edge technology.” That would have sounded like a very reasonable prediction, at the time, right? But it would have been wrong!
I could also easily imagine the same person predicting large-scale unemployment and a high chance of AI catastrophes once AI could do all the tasks you mentioned, but such risks have not materialised. I think the median person in the general population has historically underestimated the rate of future progress, but vastly overestimated future risk.
Why use automation as your reference class for AI, and not various other (human) groups of intelligent agents? And if you use the latter, co-operation is common historically, but so are war and imperialism.
Thanks, David. I estimate that annual conflict deaths as a fraction of the global population decreased by 0.121 OOM/century from 1400 to 2000 (R^2 of 8.45 %). In other words, I got a slight downward trend despite lots of technological progress since 1400.
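For concreteness, here is a minimal Python sketch of the kind of log-linear regression described above. The numbers are made up for illustration; the real estimate would use the historical series of conflict deaths from 1400 to 2000.

```python
# Minimal sketch of a log-linear trend estimate, assuming a hypothetical
# series of annual conflict deaths as a fraction of the global population.
import numpy as np
from scipy import stats

# Hypothetical illustrative data: (year, conflict deaths / global population).
years = np.array([1400, 1500, 1600, 1700, 1800, 1900, 2000])
death_rate = np.array([3e-4, 2.5e-4, 4e-4, 2e-4, 1.5e-4, 3e-4, 1e-4])

# Regress log10(death rate) on year; the slope is in OOM per year.
slope, intercept, r, p, se = stats.linregress(years, np.log10(death_rate))

print(f"Trend: {slope * 100:.3f} OOM/century")  # negative => downward trend
print(f"R^2: {r**2:.2%}")                        # low R^2 => weak fit
```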
Even if historical data clearly pointed towards an increasing risk of conflict, the benefits could be worth it. Life expectancy at birth accounts for all sources of death, and it increases with real GDP per capita across countries.
The historical tail distribution of annual conflict deaths also suggests a very low chance of conflicts killing more than 1 % of the human population in 1 year.
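As a rough illustration of how such a tail estimate could be produced (not necessarily the method used for the claim above), the sketch below fits a Pareto tail to a hypothetical sample of annual conflict death rates and extrapolates the chance of exceeding 1 % of the population in a year.

```python
# Hedged sketch: estimate the annual probability of conflict deaths exceeding
# 1 % of the global population by fitting a Pareto tail to hypothetical data.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical annual death rates (deaths as a fraction of global population);
# a real analysis would use the historical series instead.
annual_rates = rng.pareto(a=1.5, size=600) * 1e-5

threshold = np.quantile(annual_rates, 0.9)      # tail starts at the top 10 %
tail = annual_rates[annual_rates > threshold]

# Hill estimator of the Pareto tail index alpha.
alpha = len(tail) / np.sum(np.log(tail / threshold))

# P(rate > 1 %) = P(rate > threshold) * (0.01 / threshold) ** (-alpha)
p_exceed_threshold = len(tail) / len(annual_rates)
p_over_1pct = p_exceed_threshold * (0.01 / threshold) ** (-alpha)

print(f"Estimated annual probability of >1 % of humanity dying: {p_over_1pct:.2e}")
```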
I think you misunderstood David’s point. See my post “Artificial General Intelligence”: an extremely brief FAQ. It’s not that technology increases conflict between humans, but rather that the arrival of AGI amounts to the arrival of a new intelligent species on our planet. There is no direct precedent for the arrival of a new intelligent species on our planet, apart from humans themselves, which did in fact turn out very badly for many existing species. The arrival of Europeans in North America is not quite “a new species”, but it’s at least “a new lineage”, and it also turned out very badly for the Native Americans.
Of course, there are also many disanalogies between the arrival of Europeans in the Americas and the arrival of AGIs on Earth. But I think it’s a less bad starting point than talking about shoe factories and whatnot!
(Like, if the Native Americans had said to each other, “well, when we invented such-and-such basket weaving technology, that turned out really good for us, so if Europeans arrive on our continent, that’s probably going to turn out good as well” … then that would be a staggering non sequitur, right? Likewise if they said “well, basket weaving and other technologies have not increased conflict between our Native American tribes so far, so if Europeans arrive, that will also probably not increase conflict, because Europeans are kinda like a new technology”. …That’s how weird your comparison feels, from my perspective :) .)
Thanks for clarifying, Steven! I am happy to think about advanced AI agents as a new species too. However, in this case, I would model them as mind children of humanity, evolved through intelligent design rather than the Darwinian natural selection that would lead to a very adversarial relationship with humans.