This is quite surprising to me. For the record, I don’t believe that the authors believe that “carry out as much productive activity as one of today’s largest corporations” is a good—or even reasonable—description of superintelligence or of what’s “conceivable . . . within the next ten years.”
And I don’t follow Sam’s or OpenAI’s communications closely, but I’ve recently noticed what seems like them declining to talk about AI as if it’s as big a deal as I think they think it is. (Context for those reading this in the future: Sam Altman recently gave congressional testimony which [I thought, after briefly engaging with it] was mostly good, but notable in that Sam focused on weak AI and sometimes actively avoided talking about how big a deal AI will be and about x-risk, in a way that felt dishonest.)
(Thanks for engaging.)
(meta note: I don’t check the forum super consistently, so I may miss replies)
I think there’s probably some subtle subtext I’m missing in your surprise, or some other way in which we’re coming at this from different angles (besides institutional affiliations, or maybe it’s just that), since this doesn’t feel out of distribution to me: large corporations are super powerful/capable, and saying that “computers” could soon be similarly capable sounds pretty crazy to most people. (I think so, anyway; I’m pretty immersed in the AI world, of course, which is part of the issue I’m pointing at re: iteration/uncertainty on optimal comms.) And loudly likening something you’re building to nuclear weapons does not feel particularly downplay-y to me. In any case, to be clear, I don’t think it’s unreasonable for you or others to be skeptical about industry folks’ motivations; it seems good to critically analyze stuff like this since it’s important to get right. I’m just sharing my two cents.