I’d argue that the implementation of the solution is work and a customer would be inclined to pay for this extra work.
For example, right now GPT-4 can write you the code for a website, but you still need to deploy a server, buy a domain, and put the code on the server. I can very well see an “end-to-end” solution offered by a company that does all these steps for you directly.
In the same way, I can very well see a commercial incentive to provide customers with an AI where they can, e.g., upload their codebase and then say: based on our codebase, please write us a new feature with the following specs.
Of course the company offering this doesn’t intend for their codebase-upload tool to get used by some terrorist organisation. But that terrorist organisation uploads a ton of virus code to the model and says: please develop something similar that’s new and bypasses current malware detection.
I can even see there being no oversight, because of course companies would be hesitant to upload their codebase if anyone could just view what they’re uploading; the data you upload is probably encrypted, and therefore there is no oversight.
I can see there being regulation for it, but at least currently regulators are really far behind the tech. Also, this is just one example I can think of, and it’s related to a field I’m familiar with; there might be a lot of other, even more plausible or scarier examples in fields I’m not as familiar with, like biology, nanotechnology, pharmaceuticals, you name it.
Respectfully disagree with your example of a website.
In a commercial setting, the client would want to examine and approve the solution (website) in some sort of test environment first.
Even if the company provided end-to-end service, the implementation (buying a domain etc.) would be done by a human or by non-AI software.
However, I do think it’s possible the AI might choose to inject malicious code that is hard to review.
And I do like your example about terrorism with AI. However, police/govt can also counter the terrorists with AI, similar to how all tools made by humans are used by both good and bad actors. And generally, the govt should have access to the more powerful AI & cybersecurity tools. I expect the govt AI would come up with countermeasures too, at least as good as, and probably better than, the attacks by terrorists.
Yeah, big companies wouldn’t really use the website service; I was more thinking of non-technical one-man shops, things like restaurants and similar.
Agree that governments will definitely try to counter it, but it’s a cat-and-mouse game I don’t really like to explore: sometimes the government wins and catches the terrorists before any damage gets done, but sometimes the terrorists manage to get through. Right now, getting through often means several people dead, because right now a terrorist can only do so much damage, but with more powerful tools they could do a lot more damage.