It doesn’t seem too conceptually murky. You could imagine a super-advanced GPT, which when you ask it any questions like ‘how do I become world leader?’ gives in-depth practical advice, but which never itself outputs anything other than token predictions.
Hi Arepo, thanks for your idea. I don’t see how it could give advice so concrete and relevant for something like that without being a superintelligence, which makes it extremely hard to control.
You might be right, but that might also just be a failure of imagination. Twenty years ago, I suspect many people would have assumed that by the time we got AI at the level of ChatGPT it would basically be agentic—as I understand it, the Turing test was basically predicated on that idea, and ChatGPT has pretty much nailed that test while having very few characteristics that we might recognise in an agent. I'm less sure, but I have the sense that people would have believed something similar about calculators before they appeared.
I’m not asserting that this is obviously the most likely outcome, just that I don’t see convincing reasons for thinking it’s extremely unlikely.
I am extremely confused (theoretically) how we can simultaneously have:
1. An Artificial Superintelligence
2. It being controlled by humans (thereby creating misuse and concentration-of-power issues)
The argument doesn’t get off the ground for me.