Replying to a DM:
My current priors are roughly represented by "AGI ruin scenarios are likely (and disjunctive)".
I also expect/guess my disagreement with many people would be around our priors, not the specifics. I think many people have a prior of “I’m not sure, and so let’s assume we won’t all die”, which seems wrong, but I’m open to talk.
I think most of the work with changing each other’s mind will be locating the crux (as I suggested FTX would help us do with them).
I’m willing to discuss this over Zoom, or face to face once I return to Israel in November.
What I think my main points are:
We don’t seem to be anywhere near AGI. The amount of compute might very soon be enough, but we also need major theoretical breakthroughs.
Most extinction scenarios that I’ve read about or thought about require some amount of bad luck, at least if AGI is born out of the ML paradigm.
AGI is poorly defined, so it’s hard to reason about what it would do once it comes into existence, if you could even describe that as a binary event.
It seems unlikely that a malignant AI would succeed in deceiving us until it is capable of preventing us from shutting it off.
I’m not entirely convinced of any of these points; I haven’t thought about this carefully.
Edit: there’s a doom scenario that I’m more worried about, and it doesn’t require AGI: global domination by a tyrannical government.