I face enormous challenges convincing people of this. Many people don’t see, for example, widespread AI-empowered human rights infringements as an ‘existential catastrophe’ because they don’t directly kill people, and as a result they fall through the cracks of AI safety definitions—despite being a far more plausible threat than AGI, given that they are already happening. Severe curtailments to humanity’s potential still firmly count as an existential risk in my opinion.