Some parts are good. I’m confused about why OpenAI uses euphemisms like
it’s conceivable that within the next ten years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations.
Maybe they’re concerned that if they instead said things like “and quickly carry out as much scientific and economic advancement as thousands of years of progress at today’s rate” then people would just not take it seriously?
(disclosure: gave feedback on the post/work at OAI)
I don’t personally love the corporation analogy/don’t really lean on it myself but would just note that IMO there is nothing euphemistic going on here—the authors are just trying one among many possible ways of conveying the gravity of the stakes, which they individually and OAI as a company have done in various ways at different times. It’s not 100% clear which are the “correct” ones both accuracy wise and effective communication wise. I mix things up myself depending on the audience/context/my current thinking on the issue at the time, and don’t think euphemism is the right way to think about that or this.
This is quite surprising to me. For the record, I don’t believe that the authors believe that “carry out as much productive activity as one of today’s largest corporations” is a good—or even reasonable—description of superintelligence or of what’s “conceivable . . . within the next ten years.”
And I don’t follow Sam’s or OpenAI’s communications closely, but I’ve recently noticed what seems like a pattern of them declining to talk about AI as if it’s as big a deal as I think they think it is. (Context for those reading this in the future: Sam Altman recently gave congressional testimony which, from my brief engagement with it, was mostly good but notable in that Sam focused on weak AI and sometimes actively avoided talking about how big a deal AI will be and about x-risk, in a way that felt dishonest.)

(Thanks for engaging.)
(meta note: I don’t check the forum super consistently so may miss any replies)
I think there’s probably some subtle subtext that I’m missing in your surprise or some other way in which we are coming at this from diff. angles (besides institutional affiliations, or maybe just that), since this doesn’t feel out of distribution to me—like, large corporations are super powerful/capable. Saying that “computers” could soon be similarly capable is pretty crazy to most people (I think—I am pretty immersed in AI world, ofc, which is part of the issue I am pointing at re: iteration/uncertainty on optimal comms) and loudly likening something you’re building to nuclear weapons does not feel particularly downplay-y to me. In any case, I don’t think it’s unreasonable for you/others to be skeptical re: industry folks’ motivations etc., to be clear—seems good to critically analyze stuff like this since it’s important to get right—but just sharing my 2c.
IMHO this is quite an accurate and helpful statement, not a euphemism. I offer this perspective as someone who has worked many years in a corporate research environment—actually, in one of the best corporate research environments out there.
There are three threads to the comment:
Even before we reach AGI, it is very realistic to expect AI to become stronger than humans in many specific domains. Today, we have that in very narrow domains, like chess / Go / protein folding. These domains will broaden. For example, a lot of chemistry research these days is done with simulations, which are then just confirmed by experiment. An AI managing such a system could develop better chemicals, and eventually better drugs, more efficiently than humans. This will happen, if it hasn’t happened already.
One domain which is particularly susceptible to this kind of advance is IT, so it’s reasonable to assume that AI systems will get very good at IT very quickly. That can lead to a point where AI is working on improving AI, leading to exponential progress (in the literal sense of the word “exponential”) relative to what humans can do.
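(To make the “literally exponential” point concrete, here is a toy numerical sketch. The feedback rate and starting level are made-up assumptions of mine, not figures from OpenAI’s post or this comment: if AI research output feeds back into AI capability at a roughly constant relative rate, capability compounds the way interest does.)

```python
# Toy sketch of "AI improving AI": assume each year's capability gains feed back
# at a constant relative rate, so capability(t) = capability(0) * (1 + r) ** t.
# The 50% yearly rate below is a made-up illustration, not a forecast.

def capability_after(years: int, feedback_rate: float = 0.5, start: float = 1.0) -> float:
    """Compound an assumed yearly self-improvement rate over `years` years."""
    level = start
    for _ in range(years):
        level *= 1.0 + feedback_rate  # each year's gains build on the last
    return level

print(capability_after(10))  # ~57.7x the starting level after ten years
```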
Once AI has a given capability, its capacity is far less limited than a human’s. Humans need to be educated, to go through four-year university courses and PhDs just to become competent researchers in their chosen field. With AI, you just copy the software 100 times and you have 100 researchers who can work 24/7, who never forget any data, who can instantaneously keep abreast of all the progress in their field, and who will never fall victim to internal politics or a “not invented here” mentality, but will collaborate perfectly and flawlessly.
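(As a rough back-of-the-envelope illustration of the copying point; the specific numbers below, 100 copies and a 40-hour human week, are my own assumptions, not figures from the comment.)

```python
# Back-of-the-envelope sketch with assumed numbers: how many full-time human
# researchers do 100 around-the-clock copies roughly correspond to?

copies = 100                 # assumed number of copies of the trained system
ai_hours_per_week = 24 * 7   # each copy can run continuously
human_hours_per_week = 40    # rough full-time human working week

effective_researchers = copies * ai_hours_per_week / human_hours_per_week
print(effective_researchers)  # 420.0, i.e. roughly 420 full-time equivalents
```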
Put all that together, and it’s logical that once we have an AI that can do a specific domain task as well as a human (e.g. design and interpret simulated research into potentially interesting candidate molecules for drugs to fight a given disease), it is almost a no-brainer for a corporation to use that AI to massively accelerate its progress.
As AI gets closer to AGI, the domains in which AI can work independently will expand, the need for human involvement will decrease, and the pace of innovation will accelerate. Yes, there will be some limits, like physical testing, where AI will still need humans, but even there robots already do much of the work, so human involvement is decreasing every day.
It’s also important to consider who was saying this: OpenAI. Their message was NOT that AI is bad. What they wanted us to take away is that AI has huge potential for good (for example, accelerating the development of medical cures), BUT that it is moving forward so fast, and most people do not realise how fast this can happen, that we (in the know) need to keep pushing the regulators (mostly not experts) to regulate this while we still can.
This was hard to read, emotionally.