Technical note: I think we need to be careful to note the difference in meaning between extinction and existential catastrophe. When Joseph Carlsmith talks about existential catastrophe, he doesn’t necessarily mean all humans dying; in this report, he’s mainly concerned about the disempowerment of humanity. Following Toby Ord in The Precipice, Carlsmith defines an existential catastrophe as “an event that drastically reduces the value of the trajectories along which human civilization could realistically develop”. It’s not straightforward to translate his estimates of existential risk to estimates of extinction risk.
Of course, you don’t need to rely on Joseph Carlsmith’s report to believe that there’s a ≥7.9% chance of human extinction conditional on AGI being developed.
I face enormous challenges convincing people of this. Many people don’t see, for example, widespread AI-empowered human rights infringements as an ‘existential catastrophe’ because it doesn’t directly kill people, and as a result it falls through the cracks of AI safety definitions—despite being a far more plausible threat than AGI, given that it’s already happening. In my view, severe curtailments of humanity’s potential still firmly count as an existential risk.
I have suggested that we stop conflating positive and negative longtermism. I found The Precipice hard to read, for instance, because of the way Ord flips back and forth between the two.