I don’t think the answers are illuminating if the question is “conditional on AGI happening, would it be good or bad”—that doesn’t yield very meaningful answers from people who believe that AGI in the agentic sense is vanishingly unlikely. Or rather, it is a meaningful question, but to those people AGI occurs with near-zero probability, so even if it were very bad it might not be a priority.
The question was:
Assume for the purpose of this question that HLMI* will at some point exist. How positive or negative do you expect the overall impact of this to be on humanity, in the long run?
So it doesn’t presuppose some agentic form of AGI; rather, it asks about the same type of technology that the median respondent gave a 50% chance of arriving within 45 years.
*HLMI was defined in the survey as:
“High-level machine intelligence” (HLMI) is achieved when unaided machines can accomplish every task better and more cheaply than human workers.