I think that’s a third issue, not a matter of timeline opinions either.
Seems relevant in that if you surveyed timeline opinions of AI dev researchers 20 years ago, you’d probably get responses ranging from “200 years out” to “AGI? That’s apocalyptic hogwash. Now, if you’d excuse me...”
I don’t know which premise here is more greatly at odds with the real beliefs of AI researchers—that they didn’t worry about AI safety because they didn’t think that AGI would be built, or that there has ever been a time when they thought it would take >200 years to do it.