Hey!
I’d search the “list of lethalities” post for “Facebook AI Research” (especially point 5)
TL;DR: Even if one group makes an AI that doesn’t have such strong capabilities, Facebook AI Research can still build a dangerous AI 6 months later.
Yudkowsky also points out that the AGI might suggest a plan that is too complicated for us to understand, and that if we could understand it, we could have come up with it ourselves. This seems wrong to me (because “understanding” a plan is sometimes easier than “coming up with” it), but I’m guessing it’s part of what he’d reply, if that helps.