Bengio and Hinton are the two most-cited researchers alive. Ilya Sutskever is the 3rd most cited AI researcher, and though he’s not on that paper, the superalignment intro blog post from OpenAI says this, “Currently, we don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue.” LeCun is probably the top AI researcher who’s not worried about controlling a superintelligence (4th in total citations after Sutskever).
This is obviously a semantics disagreement, but I stand by the original claim. Note that I’m not saying that all the top AI researchers are worried about x-risk.
Regarding your overall point, it does not rebut the idea that some people have been cynically exploiting AI fears for their own gain. Remember that OpenAI was founded as an AI safety organisation. Sam Altman’s actions seem entirely consistent with someone hyping x-risk in order to get funding and support for OpenAI, then pivoting to downplaying risk as soon as ditching safety became more profitable. I doubt this applies to all people, or even the majority, but it does seem to have happened at least once.
I largely agree with this and alluded to this possibility here:
If AI companies ever needed to rely on doomsday fears to lure investors and engineers, they definitely don’t anymore.
I might write a separate piece on the best evidence for the hype argument, of which I think OpenAI has been the biggest winner. My guess is that Altman actually did believe what he was saying about AI risk back in 2015. Superintelligence had come out the year before, and it’s not a surprising view for him to have held given what else we know about him.
I’d also guess that Altman and Elon are two of the people most associated with the x-risk story, and that association has been the biggest driver of skepticism about it.
There’s also been more recent evidence of him ditching x-risk fears now that it seems convenient. From a recent Fox News interview:
Interviewer: “A lot of people who don’t understand AI, and I would put myself in that category, have got a basic understanding, but they worry about AI becoming sentient, about it making autonomous decisions, about it telling humans you’re no longer in charge?”
Altman: “It doesn’t seem to me to be where things are heading…is it conscious or not will not be the right question, it will be how complex of a task can it do on its own?”
Interviewer: “What about when the tool gets smarter than we are? Or the tool decides to take over?”
Altman: “I think tools in many senses are already smarter than we are. I think that the internet is smarter than you or I, the internet knows a lot of things. In fact, society itself is vastly smarter and more capable than any one person. I think we’re already good at working with tools, institutions, structures, whatever you want to call it, that are vastly more capable than one person and as long as we have a reasonably level playing field where one person or one company has vastly more power than anybody else, I think we know how to deal with that.”