Is symbol grounding necessary for (dangerous) AGI?
The way I think of risk from AGI is in terms of a giant unconscious pile of linear algebra that manipulates the external world, via an unstoppable optimisation process, into a configuration that is incompatible with biological life. A blind, unfeeling, unknowing “idiot god” that can destroy worlds. Of course you could argue that this is not true “AGI” (on the grounds that there is no true “understanding” on the part of the AGI, and it’s all just statistical learning at base level), but that’s missing the point.
I think current AI is already dangerous. But that is not so much my concern. I am answering the question of whether AGI is possible at all in the foreseeable future.
Ok, that’s more of a semantic issue with the definition of AGI then. FTX Future Fund care about AI that poses an existential threat, not about whether such AI is AGI, or strong AI or true AI or whatever. Perhaps Transformative AI or TAI (as per OpenPhil’s definition) would be the better term to use in this case.
I’m not sure what the Future Fund care about, but they do go to some length defining what they mean by AGI, and they do care about when this AGI will be achieved. This is what I am responding to.