Interesting, Miguel. Thanks for posting this about what’s happening in the real world. Yeah, an AI that can develop into a tool that writes assembly (or machine code?) to spec, errmm, that has worrisome applications well before AGI makes the scene....
Yes, the capacity to develop code and insert it into machines is something we should avoid; it’s like a gateway to a Skynet situation. I don’t think we’re prepared, as a species, for the power that could give us.
Until we find a solution to the AI alignment problem, we (humans) should avoid tinkering with technologies that can turn the world upside-down in an instant.
Yeah, agreed, though I’m guessing the code isn’t very good … yet. Code that writes code is not a new idea, and using it in various tools is not new either; I’ve read about such things before. These deep learning language tools are a little unpredictable, though, so training these machines on code is folly.
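To illustrate the older, boring kind of code-that-writes-code: deterministic template generation, roughly like this toy sketch (all names invented for illustration, not from any real tool):

```typescript
// Toy "code that writes code": emit TypeScript source for a record type
// and a matching runtime validator from a list of field descriptions.
type Field = { name: string; type: "string" | "number" };

function generateModel(typeName: string, fields: Field[]): string {
  const props = fields.map(f => `  ${f.name}: ${f.type};`).join("\n");
  const checks = fields
    .map(f => `typeof obj.${f.name} === "${f.type}"`)
    .join(" && ");
  return [
    `interface ${typeName} {`,
    props,
    `}`,
    ``,
    `function is${typeName}(obj: any): obj is ${typeName} {`,
    `  return obj !== null && typeof obj === "object" && ${checks};`,
    `}`,
  ].join("\n");
}

// Prints the generated interface + validator as source text.
console.log(generateModel("User", [
  { name: "id", type: "number" },
  { name: "email", type: "string" },
]));
```

The point being: this kind of generator is fully predictable, because it fills templates. The deep learning tools are predicting tokens instead, which is where the unpredictability comes from.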
I’m sure corporations want to automate code-writing: it means paying programmers less, enforcing coding standards more easily, drastically shortening coding time, and removing some types of bugs while reducing others. There are various approaches toward that end; something like ChatGPT would be a poor choice.
Which makes me wonder why the darn thing was trained to write code at all.
Yeah, why was ChatGPT given the ability to go from natural human language to a software language, and from there to hardware-level code? I find this feature very difficult to control once an AGI is implementing hardware code by itself.
I dug deeper and found a coding assistant called GitHub Copilot, which helps write cleaner code, but only developers can operate it, through developer IDEs (integrated development environments). At least Copilot is accessible only to devs, with a monthly fee after the trial period.
I hope that feature is eliminated in future ChatGPT iterations.
GitHub Copilot has been making waves among coders for a few years; it was one of those meme things on Twitter for the last year or so. It’s not AI, more code completion with crowd-sourced code samples from Stack Overflow or wherever. There’s another competitor that does something similar; I forget the name.
It’s not a real worry as far as dangerous AGI goes; it’s basically about taking advantage of existing code and making it easy to auto-complete with it.
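To make the auto-complete point concrete, here’s a hypothetical illustration of how these tools behave: the developer types a comment and a signature, and the tool fills in a body that is statistically common in public code. The example is mine, not actual Copilot output:

```typescript
// What the developer typed:
//   // check whether a string is a valid email address
//   function isValidEmail(email: string): boolean {

// The kind of body such a tool tends to suggest, because variants of this
// regex appear all over public code (it's a rough heuristic, not a full
// RFC 5322 validator):
function isValidEmail(email: string): boolean {
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email);
}

console.log(isValidEmail("a@b.co"));        // true
console.log(isValidEmail("not-an-email"));  // false
```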
it’s not AI, more code completion with crowd-sourced code
Copilot is based on GPT-3, so imho it is just as much AI (or not) as ChatGPT is. And given it’s pretty much at the forefront of currently available ML technology, I’d be very inclined to call it AI, even if it’s (superficially) limited to the use case of completing code.
Sure, I agree. Technically it’s based on OpenAI Codex, a descendant of GPT-3. But thanks for the correction, although I will add that its code is alleged to be more copied from than inspired by its training data. Here’s a link:
Butterick et al’s lawsuit lists other examples, including code that bears significant similarities to sample code from the books Mastering JS and Think JavaScript. The complaint also notes that, in regurgitating commonly-used code, Copilot reproduces common mistakes, so its suggestions are often buggy and inefficient. The plaintiffs allege that this proves Copilot is not “writing” in any meaningful way–it’s merely copying the code it has encountered most often.
and further down:
Should you choose to allow Copilot, we advise you to take the following precautions:
Disable telemetry
Block public code suggestions
Thoroughly test all Copilot code
Run projects through license checking tools that analyze code for plagiarism
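That last precaution can be approximated in spirit, even without a real license checker: flag verbatim overlap between a suggestion and a corpus of known code. A minimal sketch (every name below is invented for illustration; real plagiarism tools are far more sophisticated):

```typescript
// Break source into overlapping 3-line windows ("shingles") of
// whitespace-normalized lines, so trivial reformatting doesn't hide a match.
function shingles(source: string, size = 3): Set<string> {
  const lines = source
    .split("\n")
    .map(l => l.replace(/\s+/g, " ").trim())
    .filter(l => l.length > 0);
  const out = new Set<string>();
  for (let i = 0; i + size <= lines.length; i++) {
    out.add(lines.slice(i, i + size).join("\u0000"));
  }
  return out;
}

// Fraction of the suggestion's windows that appear verbatim in the corpus.
function overlapRatio(suggestion: string, corpus: string): number {
  const s = shingles(suggestion);
  const c = shingles(corpus);
  if (s.size === 0) return 0;
  let hits = 0;
  for (const sh of s) if (c.has(sh)) hits++;
  return hits / s.size;
}

// Usage idea: anything above some threshold gets a manual license review.
// if (overlapRatio(copilotOutput, knownGplCorpus) > 0.5) console.warn("review!");
```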
I think the point of the conversation was a take on how creative the AI could be in generating code; that is, would it create novel code suited to the task by “understanding” the task or the context. I chose to describe the AI’s code as not novel by saying that the AI is a code-completion tool. A lot of people would also hesitate to call a simple logic program an AI, or a coded decision table an AI, when technically, they are AI. The term is a moving target. But you’re right, the tool doing the interpreting of prompts and the suggesting of alternatives is an AI tool.
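For what it’s worth, a coded decision table is about as unglamorous as “AI” gets. A toy sketch of what I mean (the scenario and names are made up for illustration):

```typescript
// A decision table: a lookup from conditions to actions. Technically a
// classic symbolic-AI technique, however plain it looks next to an LLM.
type Weather = "rain" | "clear";
type Urgency = "high" | "low";

const decisionTable: Record<Weather, Record<Urgency, string>> = {
  rain:  { high: "drive", low: "stay home" },
  clear: { high: "cycle", low: "walk" },
};

function decide(weather: Weather, urgency: Urgency): string {
  return decisionTable[weather][urgency];
}

console.log(decide("rain", "high")); // "drive"
```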