IMHO this is quite an accurate and helpful statement, not a euphemism. I offer this perspective as someone who has worked many years in a corporate research environment—actually, in one of the best corporate research environments out there.
There are three threads to the comment:
Even before we reach AGI, it is very realistic to expect AI to become stronger than humans in many specific domains. Today, we have that in very narrow domains, like chess, Go, and protein folding. These domains will broaden. For example, a lot of chemistry research these days is done with simulations, which are then merely confirmed by experiment. An AI managing such a system could develop better chemicals, and eventually better drugs, more efficiently than humans. This will happen, if it hasn’t happened already.
One domain which is particularly susceptible to this kind of advance is IT, and so it’s reasonable to assume that AI systems will get very good at IT very quickly—which can quickly lead to a point where AI is working on improving AI, leading to exponential progress (in the literal sense of the word “exponential”) relative to what humans can do.
Once AI has a given capability, its capacity is far less limited than a human’s. Humans need to be educated, to undergo 4-year university courses and PhDs just to become competent researchers in their chosen field. With AI, you just copy the software 100 times and you have 100 researchers who can work 24/7, who never forget any data, who can instantaneously keep abreast of all the progress in their field, and who will never fall victim to internal politics or a “not invented here” mentality, but will collaborate perfectly and flawlessly.
Put all that together, and it’s logical that once we have an AI that can do a specific domain task as well as a human (e.g. design and interpret simulated research into potentially interesting candidate molecules for drugs to fight a given disease), it is almost a no-brainer to reach the point where a corporation could use AI to massively accelerate its progress.
As AI gets closer to AGI, the domains in which AI can work independently will grow, the need for human involvement will decrease, and the pace of innovation will accelerate. Yes, there will be some limits, like physical testing, where AI will still need humans, but even there robots already do much of the work, so human involvement is decreasing every day.
It’s also important to consider who was saying this: OpenAI. So their message was NOT that AI is bad. What they wanted us to take away was that AI has huge potential for good—like the way it can accelerate the development of medical cures, for example—BUT that it is moving forward so fast, and most people do not realise how fast this can happen, so we (in the know) need to keep pushing the regulators (mostly not experts) to regulate this while we still can.