Is GPT-4 an AGI?
One thing I have noticed is goalpost shifting on what AGI is: it used to be the Turing test, until that was passed. Then a bunch of other criteria were developed and passed, and the definition of ‘AGI’ now seems to default to what would previously have been called ‘strong AI’.
GPT-4 seems to be able to solve problems it wasn’t trained on and to reason and argue as well as many professionals, and we are only just starting to learn its capabilities.
Of course, it also isn’t a conscious entity; its style of intelligence is strange and foreign to us! Does this mean that the goalposts will continue to shift as long as human intelligence differs in any way from the artificial version?
I think GPT-4 is an early AGI. I don’t think it makes sense to use a binary threshold, because various intelligences (from bacteria to ants to humans to superintelligences) have varying degrees of generality.
The goalpost shifting seems like the AI effect to me: “AI is anything that has not been done yet.”
I don’t think it’s obvious that GPT-4 isn’t conscious (even for non-panpsychists), nor is it obvious that its style of intelligence is that different from what happens in our brains.
It seems to me that consciousness is a different concept from intelligence, and one that isn’t well understood or communicated, because it’s tough for us to differentiate the two from inside our little meat-boxes!
We need better definitions of intelligence and consciousness; I’m sure someone is working on it, so perhaps just finding those people and communicating their findings is an easy way to help?
I 100% agree that these things aren’t obvious—which is a great indicator that we should talk about them more!
I actually like the Turing Test a lot (and wrote about it in my ‘Mating Mind’ book as a metaphor for human courtship & sexual selection).
But it’s not a very high bar to pass. The early chatbot Eliza passed the Turing Test (sort of, arguably) in 1966, when many people interacting with it really thought it was human.
I think the mistake a lot of people from Turing onwards made was assuming that a few minutes of interaction makes a good Turing Test. I’d argue that a few months of sustained interaction is a more reliable and valid way to assess intelligence—the kind of thing that humans do when courting, choosing mates, and falling in love.
Wait when was the Turing test passed?
I’m referring to the 2014 event, which was a ‘weak’ version of the Turing test; since then, the people who were running the yearly events have lost interest, and there are now claims that the Turing test is a ‘poor test of intelligence’, highlighting the way the goalposts seem to have shifted.
https://gizmodo.com/why-the-turing-test-is-bullshit-1588051412