Most Leading AI Experts Believe That Advanced AI Could Be Extremely Dangerous to Humanity


This is written primarily for a non-EA audience. Posting here mostly for reference/visibility.

In 2018, the ACM Turing Award was given to three pioneers of the deep learning revolution: Yoshua Bengio, Geoffrey Hinton, and Yann LeCun.

Last month, Yoshua Bengio endorsed a pause on advanced AI capabilities research, saying “Our ability to understand what could go wrong with very powerful A.I. systems is very weak.”

Three days ago, Geoffrey Hinton left Google so that he could speak openly about the dangers of advanced AI, agreeing that “it could figure out how to kill humans” and saying “it’s not clear to me that we can solve this problem.”

Yann LeCun continues to refer to anyone suggesting that we’re facing severe and imminent risk as “professional scaremongers” and says it’s a “simple fact” that “the people who are terrified of AGI are rarely the people who actually build AI models.”

LeCun’s beliefs are his own, but at this point it’s fair to say that he’s misrepresenting the field. There is no consensus among professional researchers that AI research is safe. Rather, there is considerable and growing concern that advanced AI could pose extreme risk, and this concern is shared not only by both of LeCun’s award co-recipients but also by the leaders of all three leading AI labs (OpenAI, Anthropic, and Google DeepMind):

When it comes to very powerful technologies—and obviously AI is going to be one of the most powerful ever—we need to be careful. Not everybody is thinking about those things. It’s like experimentalists, many of whom don’t realize they’re holding dangerous material.

- Demis Hassabis, CEO of DeepMind, in an interview with Time magazine, Jan 2023

One particularly important dimension of uncertainty is how difficult it will be to develop advanced AI systems that are broadly safe and pose little risk to humans. Developing such systems could lie anywhere on the spectrum from very easy to impossible.

- Anthropic, Core Views on AI Safety, Mar 2023

Some people in the AI field think the risks of AGI (and successor systems) are fictitious; we would be delighted if they turn out to be right, but we are going to operate as if these risks are existential.

- OpenAI, Planning for AGI and Beyond, Feb 2023

There are objections one could raise to the idea that advanced AI poses significant risk to humanity, but “it’s a fringe idea that actual AI experts don’t take seriously” is no longer among them. To a first approximation, “we have no idea how dangerous this is and we think there’s a decent chance it’s actually extremely dangerous” appears to be the dominant perspective among experts.