In any case, both that quoted statement of yours and my tweaked version of it seem very different from the claim “if we don’t currently know how to align/control AIs, it’s inevitable there’ll eventually be significantly non-aligned AIs someday”?
Yes, I agree that there’s a difference.
I wrote up a longer reply to your first comment (the one marked “Answer”), but then I looked up your AI safety doc and realized I should probably work through the readings in it first.