I suspect this might be two distinct uses of “AI” as a term. While GPT-type chatbots can be helpful (such as in the educational examples you refer to), they are very different from the artificial general intelligence that most AI alignment/safety work anticipates.
To paraphrase AI Snake Oil,[1] it is like one person discussing how improved spacecraft will open up new possibilities for humanity, while a second person mentions that vehicles are also helping his area because cars are becoming more energy efficient. While both do fall under the category of “vehicles,” they are quite different concepts. So I’m wondering if this conversation might be verging on talking-past-each-other territory.
The full quote is this: “Imagine an alternate universe in which people don’t have words for different forms of transportation—only the collective noun “vehicle.” They use that word to refer to cars, buses, bikes, spacecraft, and all other ways of getting from place A to place B. Conversations in this world are confusing. There are furious debates about whether or not vehicles are environmentally friendly, even though no one realizes that one side of the debate is talking about bikes and the other side is talking about trucks. There is a breakthrough in rocketry, but the media focuses on how vehicles have gotten faster—so people call their car dealer (oops, vehicle dealer) to ask when faster models will be available. Meanwhile, fraudsters have capitalized on the fact that consumers don’t know what to believe when it comes to vehicle technology, so scams are rampant in the vehicle sector. Now replace the word “vehicle” with “artificial intelligence,” and we have a pretty good description of the world we live in.”
Thanks for the comment! I might be missing something, but GPT-type chatbots are based on large language models, which play a key role in scaling toward AGI. I do think that extrapolating progress from them is valuable but also agree that tying discussions about future AI systems too closely to current models’ capabilities can be misleading.
That said, my post intentionally assumes a more limited claim: that AI will transform the world in significant ways relatively soon. This assumption seems both more likely and more foreseeable. In contrast, assumptions about a world ‘incredibly radically’ transformed by superintelligence are less likely and less foreseeable. There are many arguments for why you should work on AI safety, and I agree with many of them. I’m mainly trying to reach the EAs who buy into the limited claim but currently act as if they don’t.
Regarding the example: it would likely be a mistake to focus only on current AI capabilities for education. However, it could be important to seriously evaluate scenarios like ‘AI teachers better than every human teacher, soon’.
That strikes me as very reasonable, especially considering the likelihood and foreseeability, and especially since the education examples you mentioned really are currently capable of transforming parts of the world.