For ethical purposes, how different is the process by which AIs “learn” to draw from the way humans learn? It seems to me that we consciously or unconsciously “scrape” art (and writing) we encounter to develop our own artistic (or writing) skills. The scraping student then competes with other artists. In other words, there’s an element of human-to-human appropriation that we have previously found unremarkable as long as it doesn’t come too close to outright copying. Moreover, this process strikes me as an important mechanism by which culture is transmitted and developed.
Of course, one could try to identify problematic ways in which AI learning from images it encounters differs from the traditional way humans learn. But for me, there needs to be something more than the use in training alone.
Most art is, I think, for “decoration”—and that way of characterizing most art is a double-edged sword for your argument, to my mind. It reduces the cost of abstaining from AI art, but it also makes me think protecting human art is less important.
I’ve seen this machine/human analogy made before, and I don’t understand why it goes through. I think people over-index on how common the “learning” terminology is. If the field of ML were instead called “automatic encoding”, I don’t think it would change the IP issues.
I think the argument fails for two reasons:
1. I assume we are operating in some type of intellectual property framework. Otherwise, what’s the issue? Artists don’t have a free-standing right to high demand for their work. The argument has to be that they have ownership rights which were violated. But in that case, the human/machine distinction makes complete sense. If you own a work, you can give permission to certain people/uses but not others (like only giving permission to people who pay you to use the work). Thus, artists may argue: however it was we made our works available, it was clear/reasonable that we were making it available for people but not for use in training AI systems. If developers had a license to use the works for training, then of course there would be no issue.
2. We could reverse the analogy. Let’s say I go watch a play. The performers have the right to perform the work, but I haven’t secured any rights to do something like copy the script. As I watch, I will surely remember some parts of the play. Have I “copied” the work within the meaning of IP laws? I think we can reject this idea on a fundamental human-freedom argument alone. Even if the neurons in my brain contain a copy of a work I don’t have the rights to, it doesn’t matter. There is a human/machine difference because, below a certain threshold of machine capabilities, we probably believe humans have these kinds of rights while machines don’t. If we get to a place where we begin to think machines do have such rights, then the argument does work (perhaps with some added non-discrimination-against-AIs idea to answer my #1).
At the same time, though, I don’t think I personally feel a strong obligation not to use AI art, simply because I don’t feel a strong obligation to respect IP rights in general. On a policy level I think they have to exist, but let’s say I’m listening to a cover of a song and I find out that the cover artist doesn’t actually have the appropriate rights secured. I’m not gonna be broken up about it.
A different consideration though is what a movement that wants to potentially be part of a coalition with people who are more concerned about AI art should do. A tough question in my view.
One difference between how human artists learn and how AI models learn is that humans have their own experiences in the real world to draw from, and they combine these with examples of other people’s art. Current AI models, by contrast, are trained exclusively on existing art and images and lack independent experiences.
It’s also well known that AI art models are frequently prompted to generate images in the style of particular artists, like Greg Rutkowski or, more recently, Studio Ghibli. Human artists tend to develop their own style, and when they choose to deliberately copy someone else’s, it is often looked down upon as forgery. AI models seem to be especially good at stylistic forgeries, and it might be argued that, given the lack of original experiences to draw from, all AI art is essentially forgery or a mixture of forgeries.
That’s stylistic pastiche, not “forgery”, and it is unambiguously protected by the First Amendment.
Can you cite a source for that? All I can find is that the First Amendment covers parody and to a lesser extent satire, which are different from pastiche.
Also, pastiche is usually an obvious homage and/or gives credit to the style’s origins. What AI art makers often do is use the name of a famous artist in the prompt to generate an image in their style, and then fail to credit the artist when distributing the resulting image as their own. To me, even if this isn’t technically forgery (which would involve pretending the artwork was actually made by the famous artist), it’s still ethically questionable.
This is more a copyright law question than a First Amendment one, at least under current law. E.g., https://www.trails.umd.edu/news/ai-imitating-artist-style-drives-call-to-rethink-copyright-law.
I believe whether the 1A requires this outcome is unclear at present. Of course, there’s a lot of activity protected by the 1A that is horrible to do.
https://law.stackexchange.com/questions/98968/are-art-styles-subject-to-ip-protection
That link has to do with copyright. I will give you that pastiche isn’t a violation of copyright. Even outright forgeries don’t violate copyright. Forgeries are a type of fraud.
Again, pastiche in common parlance describes something that credits the original, usually by being an obvious homage. I consider AI art different from pastiche because it usually doesn’t credit the original in the same way. The Studio Ghibli example is an exception because it is very obvious, but Greg Rutkowski-prompted AI art, for instance, is often much harder to identify as such.
I admit this isn’t the same thing as forgery, but it does seem unethical in the sense that you are not crediting the originator of the style. It may violate no laws, but it can still be wrong.
So I think we may have a crux—are “independent experiences” necessary for work to be transformative enough to make the use of existing art OK? If so, do the experiences of the human user(s) of AI count?
Here, I suspect Toby contributed to the Bulby image in a meaningful way; this is not something the AI would have generated on its own or from bland, generic instructions. To be sure, the AI did more to produce this masterpiece than a camera does to produce a photograph—but did Toby do significantly less than the minimum we would expect from a human photographer to classify the output as human art? (I don’t mean to imply we should treat Bulby as human art, only as art with a human element.)
That people can prompt an AI to generate art in a way that crosses the line into so-called “stylistic forgery” doesn’t strike me as a good reason to condemn all AI art output. It doesn’t undermine the idea that an artist whose work is only a tiny, indirect influence on another artist’s work has not suffered a cognizable injury, because that kind of influence is inherent in how culture is transmitted and developed. Rather, I think the better argument there is that too much copying from a particular source makes the output insufficiently transformative.
You could argue that Toby’s contribution is more what the commissioner of an artwork does than what an artist does.
On the question of harm: a human artist can compete with another human artist, but that’s just one artist, with limited time and resources. An AI art model could conceivably be copied extensively and used en masse to put all or many artists out of work, which makes a much greater level of harm possible.