If the AGI is so intelligent and powerful that it represents an existential risk to humanity, surely it is definitionally impossible for us to rein it in? And therefore surely the best approach would be … to prevent work on developing AI?
I’m starting to think that this intuition may be right (further thoughts in linked comment thread).
Cheers! Here’s to being first against the wall when the basilisk comes.