Nobody I have ever met outside of the EA sphere seriously believes that superintelligent computer systems could take over the world within decades.
Yep, this roughly matches my impressions. I think very, very few people really believe that superintelligent systems will be that influential.
One notable exception, of course, is the AGI companies themselves. I’m fairly confident that people in these groups really do think they have a good shot at making AGI, and that it will be transformative.
This would be an example of Response 1 that I listed.
As to the question, “Since everyone besides the AGI companies and select longtermists doesn’t seem to think this is an issue, maybe it isn’t an issue?”: I’m specifically not that interested in discussing it here. That question is very different and gets discussed in depth elsewhere.
But I think the discrepancy is worth examining, to better understand why society at large is doing what it’s doing.
Agreed, and I don’t have any specific explanation for why government is unconcerned with dramatic progress in AI. As usual, government just seems a bit slow to catch up to the cutting edge of technological development and academic thought. Charles_Guthmann’s point about the ages of people in government seems relevant. I appreciate your response, though; I wasn’t sure if others had the same perceptions.
I think very, very few people really believe that superintelligent systems will be that influential.
A lot of prominent scientists, technologists, and intellectuals outside of EA have warned about advanced artificial intelligence too: Stephen Hawking, Elon Musk, Bill Gates, Sam Harris, everyone who signed this open letter back in 2015, etc.
I agree that the number of people really concerned about this is strikingly small given the emphasis longtermist EAs put on it. But I think these counterexamples suggest it’s not just EAs and the AGI labs being overconfident or out of left field.