One of the reasons I am skeptical is that I struggle to see the commercial incentives to develop AI in a direction that poses X-risk-level danger.
Take the paperclip scenario: commercially, a business would use an AI to develop and present a solution to a human, like how Google Maps suggests the optimal route. But the AI would never be given free rein to both design the solution and action it with no human oversight. There's no commercial incentive for a business to operate that way.
Especially for "dumb" AI, as you put it: in commercial applications, AI is there to suggest things to humans, but rarely to implement the solution without human oversight (I can't think of a good example of the latter; maybe an automated call centre?).
In a normal workplace, management signs off on the solutions suggested by juniors, and that seems to be how AI is used in business: the AI presents a solution, a human approves it, and a human implements it.
I'd argue that implementing the solution is itself work, and a customer would be inclined to pay for that extra work.
For example, right now GPT-4 can write the code for a website, but you still need to deploy a server, buy a domain, and put the code on that server. I can very well see a company offering an "end-to-end" solution that does all of these steps for you directly.
In the same way, I can see a clear commercial incentive to offer customers an AI where they can, e.g., upload their codebase and say: based on our codebase, please write us a new feature with the following specs.
Of course, the company offering this doesn't intend for its codebase-upload tool to get used by some terrorist organisation. But that organisation could upload a ton of virus code to the model and say: please develop something similar that's new and bypasses current malware detection.
I can even see there being no oversight, because companies would be hesitant to upload their codebase if anyone could view what they were uploading. The data you upload is probably encrypted, and therefore there is no oversight.
I can see regulation coming for this, but at least currently regulators are far behind the tech. Also, this is just one example I can think of, and it's from a field I'm familiar with; there may be even more plausible or scarier examples in fields I'm less familiar with, like biology, nanotechnology, or pharmaceuticals, you name it.
Respectfully disagree with your website example.
In a commercial setting, the client would want to examine and approve the solution (website) in some sort of test environment first.
Even if the company provided end-to-end service, the implementation (buying the domain, etc.) would be done by a human or by non-AI software.
However, I do think it's possible the AI might choose to inject malicious code that is hard to review.
And I do like your example of terrorism with AI. However, police and governments can counter terrorists with AI too, just as all human-made tools are used by both good and bad actors. Generally, the government should have access to the more powerful AI and cybersecurity tools, so I expect government AI would come up with defences at least as good as, and probably better than, the attacks by terrorists.
Yeah, big companies wouldn't really use the website service; I was thinking more of non-technical one-man shops, things like restaurants and similar.
Agree that governments will definitely try to counter it, but it's a cat-and-mouse game I don't really like to explore: sometimes the government wins and catches the terrorists before any damage gets done, but sometimes the terrorists manage to get through. Right now, getting through often means several people dead, because right now a terrorist can only do so much damage; with more powerful tools, they could do a lot more.