Podcast with Yoshua Bengio on Why AI Labs are “Playing Dice with Humanity’s Future”

This is exactly what I’m afraid of. That some human will build machines that are going to be—not just superior to us—but not attached to what we want, but what they want. And I think it’s playing dice with humanity’s future. I personally think this should be criminalized, like we criminalize cloning of humans.
- Yoshua Bengio
My next guest is about as responsible as anybody for the state of AI capabilities today. But he’s recently begun to wonder whether the field he spent his life helping build might lead to the end of the world.
Following in the tradition of the Manhattan Project physicists who later opposed the hydrogen bomb, Dr. Yoshua Bengio started warning last year that advanced AI systems could drive humanity extinct.
Dr. Bengio is the second-most-cited living scientist and one of the so-called “Godfathers of deep learning.” He and the other “Godfathers,” Geoffrey Hinton and Yann LeCun, shared the 2018 Turing Award, computing’s Nobel Prize.
In November, Dr. Bengio was commissioned to lead production of the first “State of the Science” report on the “capabilities and risks of frontier AI” — the first significant attempt to create something like the Intergovernmental Panel on Climate Change (IPCC) for AI.
I spoke with him last fall while reporting my cover story for Jacobin’s winter issue, “Can Humanity Survive AI?” He had made waves the previous May, when he and Geoffrey Hinton first went public with that warning.
You can find The Most Interesting People I Know wherever you find podcasts and a full transcript here. If you’d like to support the show, sharing it with friends and reviewing it on Apple Podcasts is the most helpful! You can also subscribe to my Substack for updates on all my work.
We discuss:
His background and what motivated him to work on AI
Whether there’s evidence for existential risk (x-risk) from AI
How he initially thought about x-risk
Why he started worrying
How the machine learning community’s thoughts on x-risk have changed over time
Why reading more on the topic made him more concerned
Why he thinks Google co-founder Larry Page’s AI aspirations should be criminalized
Why labs are trying to build artificial general intelligence (AGI)
The technical and social components of aligning AI systems
The why and how of universal, international regulations on AI
Why good regulations will help with all kinds of risks
Why loss of control doesn’t need to be existential to be worth worrying about
How AI enables power concentration
Why he thinks the choice between AI ethics and safety is a false one
Capitalism and AI risk
The “dangerous race” between companies
Leading indicators of AGI
Why the way we train AI models creates risks
Links
How We Can Have AI Progress Without Sacrificing Safety or Democracy by Yoshua Bengio and Daniel Privitera in TIME Magazine
AI extinction open letter
AI and Catastrophic Risk by Yoshua Bengio in the Journal of Democracy
Regulating advanced artificial agents by Michael K. Cohen, Noam Kolt, Yoshua Bengio, Gillian K. Hadfield, and Stuart Russell in Science
How Rogue AIs may Arise by Yoshua Bengio
FAQ on Catastrophic AI Risks by Yoshua Bengio