Within AI risk, it seems plausible that the community is somewhat too focused on risks from misalignment rather than misuse or concentration of power.
My strong bet is that most interventions targeted toward concentration of power end up being net-negative by further proliferating dual-use technologies that can’t adequately be defended against.
Do you have any proposed interventions that don’t contain this drawback?
Further, why should this be prioritised when there are already many powerful actors dead set on proliferating these technologies as quickly as possible? Count the large open-source labs, plus all of the money governments are spending on accelerating commercialisation (which dwarfs spending on AI safety), plus all the efforts by various universities and researchers at commercial labs to publish as much as possible about how to build such systems.
I am extremely confused (theoretically) about how we can simultaneously have:
1. An Artificial Superintelligence, and
2. It being controlled by humans (thereby creating misuse or concentration-of-power issues).
The argument doesn’t get off the ground for me.
It doesn’t seem too conceptually murky. You could imagine a super-advanced GPT which, when you ask it a question like ‘how do I become world leader?’, gives in-depth practical advice, but which never itself outputs anything other than token predictions.
Hi Arepo, thanks for your idea. I don’t see how it could give such concrete and relevant advice on something like that without being a superintelligence, which would make it extremely hard to control.
You might be right, but that might also just be a failure of imagination. 20 years ago, I suspect many people would have assumed that by the time we got AI at the level of ChatGPT it would basically be agentic; as I understand it, the Turing test was basically predicated on that idea, and ChatGPT has pretty much nailed it while having very few characteristics that we might recognise in an agent. I’m less sure, but I also have the sense that people would have believed something similar about calculators before they appeared.
I’m not asserting that this is obviously the most likely outcome, just that I don’t see convincing reasons for thinking it’s extremely unlikely.