I’m willing to discuss this over Zoom, or face to face once I return to Israel in November.
I think my main points are these:
We don’t seem to be anywhere near AGI. The amount of compute might very soon be enough, but we also need major theoretical breakthroughs.
Most extinction scenarios I’ve read about or thought about require some amount of bad luck, at least if AGI is born out of the ML paradigm.
AGI is poorly defined, so it’s hard to reason about what it would do once it comes into existence, if you could even describe that as a binary event.
It seems unlikely that a malignant AI would succeed in deceiving us until it is capable of preventing us from shutting it off.
I’m not entirely convinced of any of these points; I haven’t thought about this carefully.
Edit: there’s a doom scenario I’m more worried about, and it doesn’t require AGI: global domination by a tyrannical government.