GitHub Copilot has been making waves among coders for a few years; it was one of those meme things on Twitter for the last year or so. It’s not AI, more code completion with crowd-sourced code samples from Stack Overflow or wherever. There’s another competitor that does something similar, but I forget the name.
It’s not a real worry as far as dangerous AGI goes; it’s basically about taking advantage of existing code and making it easy to auto-complete with it.
it’s not AI, more code completion with crowd-sourced code
Copilot is based on GPT-3, so imho it is just as much AI (or not AI) as ChatGPT is. And given that it’s pretty much at the forefront of currently available ML technology, I’d be very inclined to call it AI, even if it’s (superficially) limited to the use case of completing code.
Sure, I agree. Technically it’s based on OpenAI Codex, a descendant of GPT-3. But thanks for the correction, although I will add that its output is alleged to be copied from, rather than merely inspired by, its training data. Here’s a link:
Butterick et al’s lawsuit lists other examples, including code that bears significant similarities to sample code from the books Mastering JS and Think JavaScript. The complaint also notes that, in regurgitating commonly used code, Copilot reproduces common mistakes, so its suggestions are often buggy and inefficient. The plaintiffs allege that this proves Copilot is not “writing” in any meaningful way; it’s merely copying the code it has encountered most often.
and further down:
Should you choose to allow Copilot, we advise you to take the following precautions:
Disable telemetry
Block public code suggestions
Thoroughly test all Copilot code
Run projects through license checking tools that analyze code for plagiarism
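On that last point, real license-auditing tools like ScanCode do proper detection; purely to illustrate the idea behind them, here’s a naive sketch of my own (the snippet corpus and threshold are made up, not from any actual tool) that flags generated code resembling known snippets:

```python
# Naive sketch of the idea behind plagiarism/license checkers:
# compare a generated suggestion against a corpus of known code
# and flag anything above a similarity threshold. Illustrative
# only; real tools do genuine license detection.
from difflib import SequenceMatcher

# Hypothetical corpus of known, licensed snippets.
KNOWN_SNIPPETS = {
    "mastering-js-sample": "function isEven(n) { return n % 2 === 0; }",
    # ... more known snippets would go here ...
}

def flag_similar(suggestion: str, threshold: float = 0.8) -> list[str]:
    """Return names of known snippets the suggestion closely matches."""
    hits = []
    for name, known in KNOWN_SNIPPETS.items():
        ratio = SequenceMatcher(None, suggestion, known).ratio()
        if ratio >= threshold:
            hits.append(f"{name} (similarity {ratio:.0%})")
    return hits

if __name__ == "__main__":
    suggestion = "function isEven(n) { return n % 2 === 0; }"
    for hit in flag_similar(suggestion):
        print("Possible match:", hit)
```

A real checker works on license texts and fingerprints rather than raw string similarity, but the workflow is the same: scan what Copilot produced before it lands in your repo.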
I think the point of the conversation was a take on how creative the AI could be in generating code, that is, whether it would create novel code suited to the task by “understanding” the task or its context. I chose to describe the AI’s code as not novel by saying that the AI is a code-completion tool. A lot of people would also hesitate to call a simple logic program or a coded decision table AI, when technically, they are AI. The term is a moving target. But you’re right, the tool doing the interpreting of prompts and suggesting of alternatives is an AI tool.
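To make “coded decision table” concrete, here’s the sort of thing I mean; a toy of my own, not taken from any formal definition:

```python
# A "coded decision table" in the classic rule-based sense: trivially
# simple, yet it fits the textbook framing of AI as a system mapping
# perceptions to actions. All names here are illustrative.
THERMOSTAT_RULES = {
    # (too_cold, too_hot) -> action; both True is impossible when low < high
    (True, False): "heat",
    (False, True): "cool",
    (False, False): "idle",
}

def decide(temp_c: float, low: float = 18.0, high: float = 24.0) -> str:
    """Map a temperature reading to an action via the decision table."""
    return THERMOSTAT_RULES[(temp_c < low, temp_c > high)]

print(decide(15.0))  # heat
print(decide(21.0))  # idle
```

Nobody would market that as AI today, which is exactly the moving-target point.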