This is not just a general public / “uninformed masses” phenomenon. It’s worth noting that even among AI/ML researchers, AGI concerns and AI safety are niche. Among the people with literally the most exposure to and expertise in AI/ML systems, only a small fraction focus on AGI and AI safety. A comparatively larger fraction work on “near-term AI ethics” (i.e., fairness, discrimination, and privacy concerns for current ML systems): there is a pretty large conference on this topic area (FAccT), and I do not know if AI safety has a comparably sized conference.
Why is this? My anecdotal experience with ML researcher friends who don’t work on AI safety is that they basically see AGI as very unlikely. I am in no position to adjudicate the plausibility of these arguments, but that’s the little I have seen.
Certainly some people you talk to in the fairness/bias crowd think AGI is very unlikely, but that’s definitely not a consensus view among AI researchers. E.g. see this survey of AI researchers (at top conferences in 2015, not selecting for AI safety folk), which finds that:
Researchers believe there is a 50% chance of AI outperforming humans in all tasks in 45 years and of automating all human jobs in 120 years
That’s fascinating! But I don’t know if that is the same notion of AGI and AI risk that we talk about in EA. It’s very possible to believe that AI will automate most jobs and still not believe that AI will become agentic/misaligned. That’s the notion of AGI that I was referring to.
Right, I just wanted to point out that the average AI researcher who dismisses AI x-risk doesn’t do so because they think AGI is very unlikely. But I admit to often being confused about why they do dismiss AI x-risk.
The same survey asked AI researchers about the outcome they expect from AGI:
The median probability was 25% for a “good” outcome and 20% for an “extremely good” outcome. By contrast, the probability was 10% for a bad outcome and 5% for an outcome described as “Extremely Bad (e.g., human extinction).”
If I learned that there was some scientific field where the median researcher assigned a 5% probability that we all die due to advances in their field, I’d be incredibly worried. Going off this data alone, it seems hard to make a case that x-risk from AI is some niche thing that almost no AI researchers think is real.
The median researcher does think it’s somewhat unlikely, but 5% extinction risk is more than enough to take it very seriously and motivate a huge research and policy effort.
I don’t think the answers are illuminating if the question is “conditional on AGI happening, would it be good or bad?”: that doesn’t yield super meaningful answers from people who believe that AGI in the agentic sense is vanishingly unlikely. Or rather, it is a meaningful question, but for those people AGI occurs with near-zero probability, so even if it were very bad it might not be a priority.
The question was:

Assume for the purpose of this question that HLMI* will at some point exist. How positive or negative do you expect the overall impact of this to be on humanity, in the long run?
So it doesn’t presuppose some agentic form of AGI; rather, it asks about the same type of technology that the median respondent gave a 50% chance of arriving within 45 years.
*HLMI was defined in the survey as:
“High-level machine intelligence” (HLMI) is achieved when unaided machines can accomplish every task better and more cheaply than human workers.
This is a really useful (and kind of scary) perspective. Thanks for sharing.